Feb 19 03:02:56.043727 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 19 03:02:56.925759 master-0 kubenswrapper[4169]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 03:02:56.925759 master-0 kubenswrapper[4169]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 19 03:02:56.925759 master-0 kubenswrapper[4169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 03:02:56.925759 master-0 kubenswrapper[4169]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 03:02:56.925759 master-0 kubenswrapper[4169]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 19 03:02:56.926777 master-0 kubenswrapper[4169]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 03:02:56.926777 master-0 kubenswrapper[4169]: I0219 03:02:56.926539 4169 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 19 03:02:56.932877 master-0 kubenswrapper[4169]: W0219 03:02:56.932836 4169 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 03:02:56.932877 master-0 kubenswrapper[4169]: W0219 03:02:56.932870 4169 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 03:02:56.932959 master-0 kubenswrapper[4169]: W0219 03:02:56.932883 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 03:02:56.932959 master-0 kubenswrapper[4169]: W0219 03:02:56.932894 4169 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 03:02:56.932959 master-0 kubenswrapper[4169]: W0219 03:02:56.932903 4169 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 03:02:56.932959 master-0 kubenswrapper[4169]: W0219 03:02:56.932914 4169 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 03:02:56.932959 master-0 kubenswrapper[4169]: W0219 03:02:56.932923 4169 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 03:02:56.932959 master-0 kubenswrapper[4169]: W0219 03:02:56.932932 4169 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 03:02:56.932959 master-0 kubenswrapper[4169]: W0219 03:02:56.932941 4169 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 03:02:56.932959 master-0 kubenswrapper[4169]: W0219 03:02:56.932949 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 03:02:56.932959 master-0 kubenswrapper[4169]: W0219 03:02:56.932957 4169 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 03:02:56.932959 master-0 kubenswrapper[4169]: W0219 03:02:56.932965 4169 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.932974 4169 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.932982 4169 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.932990 4169 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.932998 4169 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933005 4169 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933014 4169 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933021 4169 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933029 4169 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933037 4169 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933044 4169 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933052 4169 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933060 4169 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933067 4169 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933075 4169 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933085 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933093 4169 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933101 4169 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933108 4169 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933116 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 03:02:56.933279 master-0 kubenswrapper[4169]: W0219 03:02:56.933123 4169 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933133 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933142 4169 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933152 4169 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933162 4169 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933170 4169 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933178 4169 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933187 4169 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933195 4169 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933203 4169 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933212 4169 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933220 4169 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933230 4169 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933239 4169 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933248 4169 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933281 4169 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933289 4169 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933299 4169 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933307 4169 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 03:02:56.933850 master-0 kubenswrapper[4169]: W0219 03:02:56.933315 4169 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933324 4169 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933335 4169 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933345 4169 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933354 4169 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933362 4169 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933369 4169 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933377 4169 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933384 4169 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933392 4169 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933400 4169 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933408 4169 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933416 4169 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933423 4169 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933431 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933439 4169 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933446 4169 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933454 4169 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933461 4169 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933469 4169 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933477 4169 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 03:02:56.934491 master-0 kubenswrapper[4169]: W0219 03:02:56.933484 4169 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933634 4169 flags.go:64] FLAG: --address="0.0.0.0"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933651 4169 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933664 4169 flags.go:64] FLAG: --anonymous-auth="true"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933678 4169 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933690 4169 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933700 4169 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933712 4169 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933723 4169 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933733 4169 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933742 4169 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933752 4169 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933761 4169 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933770 4169 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933779 4169 flags.go:64] FLAG: --cgroup-root=""
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933787 4169 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933796 4169 flags.go:64] FLAG: --client-ca-file=""
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933805 4169 flags.go:64] FLAG: --cloud-config=""
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933814 4169 flags.go:64] FLAG: --cloud-provider=""
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933823 4169 flags.go:64] FLAG: --cluster-dns="[]"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933834 4169 flags.go:64] FLAG: --cluster-domain=""
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933843 4169 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933852 4169 flags.go:64] FLAG: --config-dir=""
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933861 4169 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933871 4169 flags.go:64] FLAG: --container-log-max-files="5"
Feb 19 03:02:56.935065 master-0 kubenswrapper[4169]: I0219 03:02:56.933883 4169 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933892 4169 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933901 4169 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933911 4169 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933919 4169 flags.go:64] FLAG: --contention-profiling="false"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933930 4169 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933939 4169 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933948 4169 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933958 4169 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933968 4169 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933978 4169 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933986 4169 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.933995 4169 flags.go:64] FLAG: --enable-load-reader="false"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934004 4169 flags.go:64] FLAG: --enable-server="true"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934013 4169 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934025 4169 flags.go:64] FLAG: --event-burst="100"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934034 4169 flags.go:64] FLAG: --event-qps="50"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934043 4169 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934053 4169 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934062 4169 flags.go:64] FLAG: --eviction-hard=""
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934072 4169 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934082 4169 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934091 4169 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934100 4169 flags.go:64] FLAG: --eviction-soft=""
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934109 4169 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 19 03:02:56.935816 master-0 kubenswrapper[4169]: I0219 03:02:56.934118 4169 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934127 4169 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934136 4169 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934144 4169 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934153 4169 flags.go:64] FLAG: --fail-swap-on="true"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934162 4169 flags.go:64] FLAG: --feature-gates=""
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934172 4169 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934181 4169 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934191 4169 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934199 4169 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934208 4169 flags.go:64] FLAG: --healthz-port="10248"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934218 4169 flags.go:64] FLAG: --help="false"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934227 4169 flags.go:64] FLAG: --hostname-override=""
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934235 4169 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934244 4169 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934276 4169 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934286 4169 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934294 4169 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934303 4169 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934312 4169 flags.go:64] FLAG: --image-service-endpoint=""
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934329 4169 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934338 4169 flags.go:64] FLAG: --kube-api-burst="100"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934347 4169 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934357 4169 flags.go:64] FLAG: --kube-api-qps="50"
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934365 4169 flags.go:64] FLAG: --kube-reserved=""
Feb 19 03:02:56.936550 master-0 kubenswrapper[4169]: I0219 03:02:56.934374 4169 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934384 4169 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934394 4169 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934402 4169 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934410 4169 flags.go:64] FLAG: --lock-file=""
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934419 4169 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934428 4169 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934438 4169 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934451 4169 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934460 4169 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934469 4169 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934477 4169 flags.go:64] FLAG: --logging-format="text"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934486 4169 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934496 4169 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934505 4169 flags.go:64] FLAG: --manifest-url=""
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934514 4169 flags.go:64] FLAG: --manifest-url-header=""
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934525 4169 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934535 4169 flags.go:64] FLAG: --max-open-files="1000000"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934546 4169 flags.go:64] FLAG: --max-pods="110"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934555 4169 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934564 4169 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934603 4169 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934615 4169 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934625 4169 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 19 03:02:56.937246 master-0 kubenswrapper[4169]: I0219 03:02:56.934635 4169 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934643 4169 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934664 4169 flags.go:64] FLAG: --node-status-max-images="50"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934675 4169 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934685 4169 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934694 4169 flags.go:64] FLAG: --pod-cidr=""
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934703 4169 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934718 4169 flags.go:64] FLAG: --pod-manifest-path=""
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934726 4169 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934736 4169 flags.go:64] FLAG: --pods-per-core="0"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934744 4169 flags.go:64] FLAG: --port="10250"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934753 4169 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934763 4169 flags.go:64] FLAG: --provider-id=""
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934772 4169 flags.go:64] FLAG: --qos-reserved=""
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934782 4169 flags.go:64] FLAG: --read-only-port="10255"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934791 4169 flags.go:64] FLAG: --register-node="true"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934800 4169 flags.go:64] FLAG: --register-schedulable="true"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934808 4169 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934822 4169 flags.go:64] FLAG: --registry-burst="10"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934831 4169 flags.go:64] FLAG: --registry-qps="5"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934840 4169 flags.go:64] FLAG: --reserved-cpus=""
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934849 4169 flags.go:64] FLAG: --reserved-memory=""
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934860 4169 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934869 4169 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 19 03:02:56.937922 master-0 kubenswrapper[4169]: I0219 03:02:56.934878 4169 flags.go:64] FLAG: --rotate-certificates="false"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934886 4169 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934895 4169 flags.go:64] FLAG: --runonce="false"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934904 4169 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934912 4169 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934922 4169 flags.go:64] FLAG: --seccomp-default="false"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934931 4169 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934940 4169 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934949 4169 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934958 4169 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934967 4169 flags.go:64] FLAG: --storage-driver-password="root"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934977 4169 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934988 4169 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.934997 4169 flags.go:64] FLAG: --storage-driver-user="root"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935006 4169 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935015 4169 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935024 4169 flags.go:64] FLAG: --system-cgroups=""
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935033 4169 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935048 4169 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935056 4169 flags.go:64] FLAG: --tls-cert-file=""
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935064 4169 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935076 4169 flags.go:64] FLAG: --tls-min-version=""
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935085 4169 flags.go:64] FLAG: --tls-private-key-file=""
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935094 4169 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935104 4169 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 19 03:02:56.938623 master-0 kubenswrapper[4169]: I0219 03:02:56.935114 4169 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: I0219 03:02:56.935123 4169 flags.go:64] FLAG: --v="2"
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: I0219 03:02:56.935140 4169 flags.go:64] FLAG: --version="false"
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: I0219 03:02:56.935151 4169 flags.go:64] FLAG: --vmodule=""
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: I0219 03:02:56.935161 4169 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: I0219 03:02:56.935171 4169 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935448 4169 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935462 4169 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935474 4169 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935483 4169 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935491 4169 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935501 4169 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935511 4169 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935520 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935529 4169 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935537 4169 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935545 4169 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935553 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935561 4169 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935572 4169 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 03:02:56.939402 master-0 kubenswrapper[4169]: W0219 03:02:56.935580 4169 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935587 4169 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935595 4169 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935603 4169 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935611 4169 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935619 4169 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935627 4169 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935635 4169 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935643 4169 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935651 4169 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935658 4169 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935666 4169 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935674 4169 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935682 4169 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935690 4169 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935698 4169 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935707 4169 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935715 4169 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935723 4169 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935731 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 03:02:56.939976 master-0 kubenswrapper[4169]: W0219 03:02:56.935738 4169 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935746 4169 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935753 4169 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935762 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935770 4169 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935778 4169 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935785 4169 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935793 4169 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935801 4169 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935809 4169 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935817 4169 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935827 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935835 4169 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935843 4169 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935851 4169 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935859 4169 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935867 4169 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935877 4169 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935886 4169 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935895 4169 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 03:02:56.940757 master-0 kubenswrapper[4169]: W0219 03:02:56.935903 4169 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.935911 4169 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.935919 4169 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.935928 4169 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.935936 4169 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.935944 4169 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.935951 4169 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.935959 4169 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.935967 4169 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.935974 4169 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.935982 4169 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.935990 4169 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.936001 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.936009 4169 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.936019 4169 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.936030 4169 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.936038 4169 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 03:02:56.941550 master-0 kubenswrapper[4169]: W0219 03:02:56.936046 4169 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 03:02:56.942040 master-0 kubenswrapper[4169]: I0219 03:02:56.936854 4169 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: I0219 03:02:56.945664 4169 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: I0219 03:02:56.945699 4169 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945793 4169 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945802 4169 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945808 4169 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945814 4169 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945819 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945825 4169 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945830 4169 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945835 4169 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945840 4169 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945844 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945849 4169 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945854 4169 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945859 4169 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945864 4169 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945869 4169 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945873 4169 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945878 4169 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 03:02:56.947178 master-0 kubenswrapper[4169]: W0219 03:02:56.945882 4169 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945887 4169 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945892 4169 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945897 4169 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945902 4169 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945906 4169 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945911 4169 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945916 4169 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945920 4169 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945926 4169 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945933 4169 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945938 4169 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945943 4169 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945947 4169 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945952 4169 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945957 4169 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945962 4169 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945972 4169 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945978 4169 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 03:02:56.947877 master-0 kubenswrapper[4169]: W0219 03:02:56.945985 4169 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.945990 4169 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.945995 4169 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946000 4169 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946006 4169 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946010 4169 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946015 4169 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946020 4169 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946025 4169 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946029 4169 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946034 4169 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946039 4169 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946043 4169 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946048 4169 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946052 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946056 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946061 4169 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946065 4169 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946070 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946074 4169 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 03:02:56.948558 master-0 kubenswrapper[4169]: W0219 03:02:56.946079 4169 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946084 4169 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946090 4169 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946096 4169 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946103 4169 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946109 4169 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946114 4169 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946120 4169 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946124 4169 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946129 4169 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946134 4169 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946139 4169 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946143 4169 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946148 4169 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946153 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 03:02:56.949272 master-0 kubenswrapper[4169]: W0219 03:02:56.946158 4169 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: I0219 03:02:56.946167 4169 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946345 4169 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946357 4169 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946365 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946371 4169 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946376 4169 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946380 4169 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946386 4169 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946391 4169 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946396 4169 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946401 4169 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946406 4169 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946411 4169 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946416 4169 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 03:02:56.949724 master-0 kubenswrapper[4169]: W0219 03:02:56.946421 4169 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946426 4169 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946431 4169 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946435 4169 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946440 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946445 4169 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946450 4169 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946454 4169 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946461 4169 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946467 4169 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946472 4169 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946477 4169 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946482 4169 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946487 4169 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946491 4169 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946495 4169 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946500 4169 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946505 4169 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946510 4169 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946514 4169 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 03:02:56.950168 master-0 kubenswrapper[4169]: W0219 03:02:56.946520 4169 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946525 4169 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946530 4169 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946534 4169 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946539 4169 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946543 4169 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946548 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946745 4169 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946749 4169 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946754 4169 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946758 4169 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946763 4169 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946767 4169 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 03:02:56.950767 
master-0 kubenswrapper[4169]: W0219 03:02:56.946772 4169 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946776 4169 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946780 4169 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946785 4169 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946789 4169 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946793 4169 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946798 4169 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 03:02:56.950767 master-0 kubenswrapper[4169]: W0219 03:02:56.946802 4169 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946808 4169 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946815 4169 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946820 4169 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946825 4169 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946831 4169 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946837 4169 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
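The feature_gate.go file cited in these warnings is the upstream gate registry in k8s.io/component-base/featuregate, which the kubelet uses to track which gates exist, their defaults, and whether they are Alpha, Beta, GA, or deprecated. As a rough illustration of that API only (the gate names below are hypothetical, the module has to be added to go.mod, and this is not the kubelet's own registration code):

```go
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

const (
	// Hypothetical gate names used only for this sketch.
	ExampleGateA featuregate.Feature = "ExampleGateA"
	ExampleGateB featuregate.Feature = "ExampleGateB"
)

func main() {
	gates := featuregate.NewFeatureGate()

	// Register the gates this component knows about, with their defaults.
	if err := gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		ExampleGateA: {Default: false, PreRelease: featuregate.Alpha},
		ExampleGateB: {Default: true, PreRelease: featuregate.Beta},
	}); err != nil {
		panic(err)
	}

	// Apply overrides, as a --feature-gates flag or config file would.
	if err := gates.SetFromMap(map[string]bool{"ExampleGateA": true}); err != nil {
		panic(err)
	}

	fmt.Println("ExampleGateA enabled:", gates.Enabled(ExampleGateA))
	fmt.Println("ExampleGateB enabled:", gates.Enabled(ExampleGateB))
}
```

A gate name that was never registered with Add cannot be looked up, which is why gates injected from a higher layer show up here only as "unrecognized feature gate" warnings rather than in the effective map.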
Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946843 4169 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946849 4169 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946854 4169 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946859 4169 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946864 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946868 4169 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946873 4169 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946878 4169 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946883 4169 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946889 4169 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946894 4169 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 03:02:56.951363 master-0 kubenswrapper[4169]: W0219 03:02:56.946898 4169 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 03:02:56.951868 master-0 kubenswrapper[4169]: I0219 03:02:56.946905 4169 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 03:02:56.951868 master-0 kubenswrapper[4169]: I0219 03:02:56.948375 4169 server.go:940] "Client rotation is on, will bootstrap in background" Feb 19 03:02:56.951868 master-0 kubenswrapper[4169]: I0219 03:02:56.951333 4169 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 19 03:02:56.953161 master-0 kubenswrapper[4169]: I0219 03:02:56.953124 4169 server.go:997] "Starting client certificate rotation" Feb 19 03:02:56.953212 master-0 kubenswrapper[4169]: I0219 03:02:56.953168 4169 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 19 03:02:56.953395 master-0 kubenswrapper[4169]: I0219 03:02:56.953351 4169 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 03:02:56.981167 master-0 kubenswrapper[4169]: I0219 03:02:56.981093 4169 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 03:02:56.984633 master-0 kubenswrapper[4169]: I0219 03:02:56.984595 4169 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 03:02:56.985350 master-0 kubenswrapper[4169]: E0219 03:02:56.985294 4169 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:02:57.002709 master-0 kubenswrapper[4169]: I0219 03:02:57.002639 4169 log.go:25] "Validated CRI v1 runtime API" Feb 19 03:02:57.007802 master-0 kubenswrapper[4169]: I0219 03:02:57.007385 4169 log.go:25] "Validated CRI v1 image API" Feb 19 03:02:57.009899 master-0 kubenswrapper[4169]: I0219 03:02:57.009814 4169 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 19 03:02:57.017907 master-0 kubenswrapper[4169]: I0219 03:02:57.017850 4169 fs.go:135] Filesystem UUIDs: map[4837cee5-4017-4a37-b994-9fb38a99ee26:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 19 03:02:57.017907 master-0 kubenswrapper[4169]: I0219 03:02:57.017885 4169 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Feb 19 03:02:57.041937 master-0 kubenswrapper[4169]: I0219 03:02:57.041569 4169 manager.go:217] Machine: {Timestamp:2026-02-19 03:02:57.03900091 +0000 UTC m=+0.785192695 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:e4d28ab4c6c14d45b3b826d1d7d6a246 SystemUUID:e4d28ab4-c6c1-4d45-b3b8-26d1d7d6a246 BootID:81756ef7-a125-45a3-9659-4adc79f47dc0 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:80:8b:c0 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:bd:d1:82 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:7e:bd:f6:a4:63:b0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 
Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 19 03:02:57.041937 master-0 kubenswrapper[4169]: I0219 03:02:57.041869 4169 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 19 03:02:57.042211 master-0 kubenswrapper[4169]: I0219 03:02:57.042019 4169 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 19 03:02:57.043195 master-0 kubenswrapper[4169]: I0219 03:02:57.043154 4169 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 19 03:02:57.043636 master-0 kubenswrapper[4169]: I0219 03:02:57.043577 4169 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 19 03:02:57.043882 master-0 kubenswrapper[4169]: I0219 03:02:57.043621 4169 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 19 03:02:57.043967 master-0 kubenswrapper[4169]: I0219 03:02:57.043889 4169 topology_manager.go:138] "Creating topology manager with none policy" Feb 19 03:02:57.043967 master-0 kubenswrapper[4169]: I0219 03:02:57.043903 4169 container_manager_linux.go:303] "Creating device plugin manager" Feb 19 03:02:57.044664 master-0 kubenswrapper[4169]: I0219 03:02:57.044629 4169 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 03:02:57.044726 master-0 kubenswrapper[4169]: I0219 
03:02:57.044666 4169 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 03:02:57.044830 master-0 kubenswrapper[4169]: I0219 03:02:57.044799 4169 state_mem.go:36] "Initialized new in-memory state store" Feb 19 03:02:57.044944 master-0 kubenswrapper[4169]: I0219 03:02:57.044912 4169 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 19 03:02:57.049227 master-0 kubenswrapper[4169]: I0219 03:02:57.049191 4169 kubelet.go:418] "Attempting to sync node with API server" Feb 19 03:02:57.049227 master-0 kubenswrapper[4169]: I0219 03:02:57.049217 4169 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 19 03:02:57.049405 master-0 kubenswrapper[4169]: I0219 03:02:57.049236 4169 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 19 03:02:57.049405 master-0 kubenswrapper[4169]: I0219 03:02:57.049252 4169 kubelet.go:324] "Adding apiserver pod source" Feb 19 03:02:57.049405 master-0 kubenswrapper[4169]: I0219 03:02:57.049298 4169 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 19 03:02:57.054786 master-0 kubenswrapper[4169]: I0219 03:02:57.054733 4169 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 19 03:02:57.055997 master-0 kubenswrapper[4169]: W0219 03:02:57.055930 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:02:57.056071 master-0 kubenswrapper[4169]: E0219 03:02:57.056003 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:02:57.056071 master-0 kubenswrapper[4169]: W0219 03:02:57.055990 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:02:57.056200 master-0 kubenswrapper[4169]: E0219 03:02:57.056086 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:02:57.057606 master-0 kubenswrapper[4169]: I0219 03:02:57.057571 4169 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 19 03:02:57.057774 master-0 kubenswrapper[4169]: I0219 03:02:57.057743 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 19 03:02:57.057774 master-0 kubenswrapper[4169]: I0219 03:02:57.057774 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 19 03:02:57.057878 master-0 kubenswrapper[4169]: I0219 03:02:57.057784 4169 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/git-repo" Feb 19 03:02:57.057878 master-0 kubenswrapper[4169]: I0219 03:02:57.057794 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 19 03:02:57.057878 master-0 kubenswrapper[4169]: I0219 03:02:57.057802 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 19 03:02:57.057878 master-0 kubenswrapper[4169]: I0219 03:02:57.057809 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 19 03:02:57.057878 master-0 kubenswrapper[4169]: I0219 03:02:57.057818 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 19 03:02:57.057878 master-0 kubenswrapper[4169]: I0219 03:02:57.057826 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 19 03:02:57.057878 master-0 kubenswrapper[4169]: I0219 03:02:57.057835 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 19 03:02:57.057878 master-0 kubenswrapper[4169]: I0219 03:02:57.057843 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 19 03:02:57.057878 master-0 kubenswrapper[4169]: I0219 03:02:57.057871 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 19 03:02:57.058345 master-0 kubenswrapper[4169]: I0219 03:02:57.058327 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 19 03:02:57.058409 master-0 kubenswrapper[4169]: I0219 03:02:57.058370 4169 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 19 03:02:57.059063 master-0 kubenswrapper[4169]: I0219 03:02:57.059024 4169 server.go:1280] "Started kubelet" Feb 19 03:02:57.060368 master-0 kubenswrapper[4169]: I0219 03:02:57.060236 4169 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 19 03:02:57.060368 master-0 kubenswrapper[4169]: I0219 03:02:57.060252 4169 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 19 03:02:57.061147 master-0 systemd[1]: Started Kubernetes Kubelet. 
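Every request to https://api-int.sno.openstack.lab:6443 in this excerpt fails with "dial tcp 192.168.32.10:6443: connect: connection refused": the kubelet has only just started, and the bootstrap kube-apiserver static pod it is about to create (see the "SyncLoop ADD" entry further down) is not serving yet, so these errors are expected to clear on their own. A minimal diagnostic sketch, assuming the endpoint name taken from this log, that polls the port until it accepts TCP connections:

```go
// waitapi.go: polls the internal API endpoint seen in the log above until it
// accepts TCP connections, roughly the point at which the "connection refused"
// errors in this journal stop appearing. Diagnostic sketch only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the log lines above; adjust for your cluster.
	const endpoint = "api-int.sno.openstack.lab:6443"

	for {
		conn, err := net.DialTimeout("tcp", endpoint, 3*time.Second)
		if err != nil {
			fmt.Printf("%s not reachable yet: %v\n", endpoint, err)
			time.Sleep(5 * time.Second)
			continue
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", endpoint)
		return
	}
}
```

Plain TCP dialing is enough here because the failure mode in the log is at the connection level, not TLS or HTTP.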
Feb 19 03:02:57.065248 master-0 kubenswrapper[4169]: I0219 03:02:57.065094 4169 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 19 03:02:57.066445 master-0 kubenswrapper[4169]: I0219 03:02:57.066358 4169 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 19 03:02:57.067160 master-0 kubenswrapper[4169]: I0219 03:02:57.067052 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:02:57.071375 master-0 kubenswrapper[4169]: I0219 03:02:57.071338 4169 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 19 03:02:57.071433 master-0 kubenswrapper[4169]: I0219 03:02:57.071416 4169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 19 03:02:57.073743 master-0 kubenswrapper[4169]: I0219 03:02:57.073674 4169 server.go:449] "Adding debug handlers to kubelet server" Feb 19 03:02:57.073946 master-0 kubenswrapper[4169]: I0219 03:02:57.073906 4169 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 19 03:02:57.073946 master-0 kubenswrapper[4169]: I0219 03:02:57.073943 4169 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 19 03:02:57.074102 master-0 kubenswrapper[4169]: I0219 03:02:57.074075 4169 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 19 03:02:57.074222 master-0 kubenswrapper[4169]: E0219 03:02:57.074177 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:02:57.074328 master-0 kubenswrapper[4169]: I0219 03:02:57.074289 4169 reconstruct.go:97] "Volume reconstruction finished" Feb 19 03:02:57.074431 master-0 kubenswrapper[4169]: I0219 03:02:57.074406 4169 reconciler.go:26] "Reconciler: start to sync state" Feb 19 03:02:57.074431 master-0 kubenswrapper[4169]: E0219 03:02:57.074328 4169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 19 03:02:57.076309 master-0 kubenswrapper[4169]: W0219 03:02:57.076177 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:02:57.076389 master-0 kubenswrapper[4169]: E0219 03:02:57.076338 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:02:57.077096 master-0 kubenswrapper[4169]: E0219 03:02:57.074210 4169 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189586bd89ccba72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.058994802 +0000 UTC m=+0.805186537,LastTimestamp:2026-02-19 03:02:57.058994802 +0000 UTC m=+0.805186537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:02:57.077431 master-0 kubenswrapper[4169]: E0219 03:02:57.077385 4169 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 19 03:02:57.081006 master-0 kubenswrapper[4169]: I0219 03:02:57.080979 4169 factory.go:55] Registering systemd factory Feb 19 03:02:57.081006 master-0 kubenswrapper[4169]: I0219 03:02:57.081009 4169 factory.go:221] Registration of the systemd container factory successfully Feb 19 03:02:57.081494 master-0 kubenswrapper[4169]: I0219 03:02:57.081464 4169 factory.go:153] Registering CRI-O factory Feb 19 03:02:57.081562 master-0 kubenswrapper[4169]: I0219 03:02:57.081497 4169 factory.go:221] Registration of the crio container factory successfully Feb 19 03:02:57.081624 master-0 kubenswrapper[4169]: I0219 03:02:57.081602 4169 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 19 03:02:57.081667 master-0 kubenswrapper[4169]: I0219 03:02:57.081641 4169 factory.go:103] Registering Raw factory Feb 19 03:02:57.081667 master-0 kubenswrapper[4169]: I0219 03:02:57.081661 4169 manager.go:1196] Started watching for new ooms in manager Feb 19 03:02:57.082393 master-0 kubenswrapper[4169]: I0219 03:02:57.082366 4169 manager.go:319] Starting recovery of all containers Feb 19 03:02:57.104791 master-0 kubenswrapper[4169]: I0219 03:02:57.104737 4169 manager.go:324] Recovery completed Feb 19 03:02:57.117645 master-0 kubenswrapper[4169]: I0219 03:02:57.117623 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.119609 master-0 kubenswrapper[4169]: I0219 03:02:57.119549 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.119678 master-0 kubenswrapper[4169]: I0219 03:02:57.119626 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.119678 master-0 kubenswrapper[4169]: I0219 03:02:57.119647 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.120802 master-0 kubenswrapper[4169]: I0219 03:02:57.120761 4169 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 03:02:57.120802 master-0 kubenswrapper[4169]: I0219 03:02:57.120792 4169 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 19 03:02:57.120890 master-0 kubenswrapper[4169]: I0219 03:02:57.120832 4169 state_mem.go:36] "Initialized new in-memory state store" Feb 19 03:02:57.127095 master-0 kubenswrapper[4169]: I0219 03:02:57.127072 4169 policy_none.go:49] "None policy: Start" Feb 19 03:02:57.127777 master-0 kubenswrapper[4169]: I0219 03:02:57.127739 4169 memory_manager.go:170] "Starting memorymanager" 
policy="None" Feb 19 03:02:57.127777 master-0 kubenswrapper[4169]: I0219 03:02:57.127765 4169 state_mem.go:35] "Initializing new in-memory state store" Feb 19 03:02:57.174992 master-0 kubenswrapper[4169]: E0219 03:02:57.174538 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:02:57.185034 master-0 kubenswrapper[4169]: I0219 03:02:57.184998 4169 manager.go:334] "Starting Device Plugin manager" Feb 19 03:02:57.185034 master-0 kubenswrapper[4169]: I0219 03:02:57.185045 4169 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: I0219 03:02:57.185058 4169 server.go:79] "Starting device plugin registration server" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: I0219 03:02:57.185740 4169 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: I0219 03:02:57.185799 4169 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: I0219 03:02:57.186000 4169 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: I0219 03:02:57.186086 4169 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: I0219 03:02:57.186115 4169 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: E0219 03:02:57.187946 4169 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: I0219 03:02:57.223366 4169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: I0219 03:02:57.225713 4169 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: I0219 03:02:57.225784 4169 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: I0219 03:02:57.225817 4169 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: E0219 03:02:57.226079 4169 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: W0219 03:02:57.227212 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:02:57.228558 master-0 kubenswrapper[4169]: E0219 03:02:57.227352 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:02:57.275766 master-0 kubenswrapper[4169]: E0219 03:02:57.275657 4169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 19 03:02:57.286790 master-0 kubenswrapper[4169]: I0219 03:02:57.286696 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.287931 master-0 kubenswrapper[4169]: I0219 03:02:57.287875 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.287931 master-0 kubenswrapper[4169]: I0219 03:02:57.287915 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.287931 master-0 kubenswrapper[4169]: I0219 03:02:57.287926 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.288193 master-0 kubenswrapper[4169]: I0219 03:02:57.288007 4169 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:02:57.288844 master-0 kubenswrapper[4169]: E0219 03:02:57.288783 4169 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 19 03:02:57.327030 master-0 kubenswrapper[4169]: I0219 03:02:57.326931 4169 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 19 03:02:57.327030 master-0 kubenswrapper[4169]: I0219 03:02:57.327015 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.328193 master-0 kubenswrapper[4169]: I0219 03:02:57.328120 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientMemory" Feb 19 03:02:57.328193 master-0 kubenswrapper[4169]: I0219 03:02:57.328190 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.328444 master-0 kubenswrapper[4169]: I0219 03:02:57.328214 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.328444 master-0 kubenswrapper[4169]: I0219 03:02:57.328426 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.328692 master-0 kubenswrapper[4169]: I0219 03:02:57.328633 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.328692 master-0 kubenswrapper[4169]: I0219 03:02:57.328687 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.329559 master-0 kubenswrapper[4169]: I0219 03:02:57.329512 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.329559 master-0 kubenswrapper[4169]: I0219 03:02:57.329539 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.329559 master-0 kubenswrapper[4169]: I0219 03:02:57.329548 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.329771 master-0 kubenswrapper[4169]: I0219 03:02:57.329603 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.329771 master-0 kubenswrapper[4169]: I0219 03:02:57.329610 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.329771 master-0 kubenswrapper[4169]: I0219 03:02:57.329629 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.329771 master-0 kubenswrapper[4169]: I0219 03:02:57.329638 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.329978 master-0 kubenswrapper[4169]: I0219 03:02:57.329845 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:02:57.329978 master-0 kubenswrapper[4169]: I0219 03:02:57.329910 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.330578 master-0 kubenswrapper[4169]: I0219 03:02:57.330535 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.330578 master-0 kubenswrapper[4169]: I0219 03:02:57.330570 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.330578 master-0 kubenswrapper[4169]: I0219 03:02:57.330581 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.330775 master-0 kubenswrapper[4169]: I0219 03:02:57.330692 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.331007 master-0 kubenswrapper[4169]: I0219 03:02:57.330968 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:02:57.331007 master-0 kubenswrapper[4169]: I0219 03:02:57.331000 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.331136 master-0 kubenswrapper[4169]: I0219 03:02:57.330994 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.331136 master-0 kubenswrapper[4169]: I0219 03:02:57.331117 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.331136 master-0 kubenswrapper[4169]: I0219 03:02:57.331138 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.332032 master-0 kubenswrapper[4169]: I0219 03:02:57.331978 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.332032 master-0 kubenswrapper[4169]: I0219 03:02:57.332021 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.332032 master-0 kubenswrapper[4169]: I0219 03:02:57.332036 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.332328 master-0 kubenswrapper[4169]: I0219 03:02:57.332032 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.332328 master-0 kubenswrapper[4169]: I0219 03:02:57.332186 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.332328 master-0 kubenswrapper[4169]: I0219 03:02:57.332206 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.332328 master-0 kubenswrapper[4169]: I0219 03:02:57.332280 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.332328 master-0 kubenswrapper[4169]: I0219 03:02:57.332333 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:02:57.332606 master-0 kubenswrapper[4169]: I0219 03:02:57.332368 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.333325 master-0 kubenswrapper[4169]: I0219 03:02:57.333281 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.333325 master-0 kubenswrapper[4169]: I0219 03:02:57.333310 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.333325 master-0 kubenswrapper[4169]: I0219 03:02:57.333321 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.333531 master-0 kubenswrapper[4169]: I0219 03:02:57.333454 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.333531 master-0 kubenswrapper[4169]: I0219 03:02:57.333503 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.333531 master-0 kubenswrapper[4169]: I0219 03:02:57.333528 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.333881 master-0 kubenswrapper[4169]: I0219 03:02:57.333818 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.333881 master-0 kubenswrapper[4169]: I0219 03:02:57.333880 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.335084 master-0 kubenswrapper[4169]: I0219 03:02:57.335004 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.335198 master-0 kubenswrapper[4169]: I0219 03:02:57.335093 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.335198 master-0 kubenswrapper[4169]: I0219 03:02:57.335129 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.375533 master-0 kubenswrapper[4169]: I0219 03:02:57.375366 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.375632 master-0 kubenswrapper[4169]: I0219 03:02:57.375572 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.375632 master-0 kubenswrapper[4169]: I0219 03:02:57.375607 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.375721 master-0 kubenswrapper[4169]: I0219 03:02:57.375631 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.476936 master-0 kubenswrapper[4169]: I0219 03:02:57.476778 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.476936 master-0 kubenswrapper[4169]: I0219 03:02:57.476833 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:02:57.476936 master-0 kubenswrapper[4169]: I0219 03:02:57.476851 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:02:57.476936 master-0 kubenswrapper[4169]: I0219 03:02:57.476945 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.477236 master-0 kubenswrapper[4169]: I0219 03:02:57.477027 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.477236 master-0 kubenswrapper[4169]: I0219 03:02:57.477054 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.477236 master-0 kubenswrapper[4169]: I0219 03:02:57.477143 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.477236 master-0 kubenswrapper[4169]: I0219 
03:02:57.477206 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:02:57.477236 master-0 kubenswrapper[4169]: I0219 03:02:57.477233 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:02:57.477460 master-0 kubenswrapper[4169]: I0219 03:02:57.477282 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:02:57.477460 master-0 kubenswrapper[4169]: I0219 03:02:57.477316 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.477460 master-0 kubenswrapper[4169]: I0219 03:02:57.477345 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.477460 master-0 kubenswrapper[4169]: I0219 03:02:57.477368 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.477460 master-0 kubenswrapper[4169]: I0219 03:02:57.477394 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.477460 master-0 kubenswrapper[4169]: I0219 03:02:57.477404 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:02:57.477460 master-0 kubenswrapper[4169]: I0219 03:02:57.477445 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") 
pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.477460 master-0 kubenswrapper[4169]: I0219 03:02:57.477462 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.477698 master-0 kubenswrapper[4169]: I0219 03:02:57.477480 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.477698 master-0 kubenswrapper[4169]: I0219 03:02:57.477496 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.477698 master-0 kubenswrapper[4169]: I0219 03:02:57.477527 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.477698 master-0 kubenswrapper[4169]: I0219 03:02:57.477536 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.489525 master-0 kubenswrapper[4169]: I0219 03:02:57.489445 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.490618 master-0 kubenswrapper[4169]: I0219 03:02:57.490578 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.490665 master-0 kubenswrapper[4169]: I0219 03:02:57.490633 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.490665 master-0 kubenswrapper[4169]: I0219 03:02:57.490645 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.490721 master-0 kubenswrapper[4169]: I0219 03:02:57.490707 4169 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:02:57.491750 master-0 kubenswrapper[4169]: E0219 03:02:57.491707 4169 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 19 03:02:57.578135 master-0 kubenswrapper[4169]: I0219 03:02:57.578036 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.578135 master-0 kubenswrapper[4169]: I0219 03:02:57.578132 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:02:57.578397 master-0 kubenswrapper[4169]: I0219 03:02:57.578182 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:02:57.578397 master-0 kubenswrapper[4169]: I0219 03:02:57.578206 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.578397 master-0 kubenswrapper[4169]: I0219 03:02:57.578226 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:02:57.578397 master-0 kubenswrapper[4169]: I0219 03:02:57.578245 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:02:57.578397 master-0 kubenswrapper[4169]: I0219 03:02:57.578350 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:02:57.578589 master-0 kubenswrapper[4169]: I0219 03:02:57.578411 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:02:57.578589 master-0 kubenswrapper[4169]: I0219 03:02:57.578445 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:02:57.578589 master-0 kubenswrapper[4169]: I0219 03:02:57.578449 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.578589 master-0 kubenswrapper[4169]: I0219 03:02:57.578486 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:02:57.578589 master-0 kubenswrapper[4169]: I0219 03:02:57.578505 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.578589 master-0 kubenswrapper[4169]: I0219 03:02:57.578527 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.578589 master-0 kubenswrapper[4169]: I0219 03:02:57.578549 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:02:57.578813 master-0 kubenswrapper[4169]: I0219 03:02:57.578593 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.578813 master-0 kubenswrapper[4169]: I0219 03:02:57.578560 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.579184 master-0 kubenswrapper[4169]: I0219 03:02:57.579138 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:02:57.579237 master-0 kubenswrapper[4169]: I0219 03:02:57.579206 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.579350 master-0 kubenswrapper[4169]: I0219 03:02:57.578874 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.579543 master-0 kubenswrapper[4169]: I0219 03:02:57.579431 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:02:57.579543 master-0 kubenswrapper[4169]: I0219 03:02:57.579529 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.579686 master-0 kubenswrapper[4169]: I0219 03:02:57.579630 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:02:57.579776 master-0 kubenswrapper[4169]: I0219 03:02:57.579737 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.580049 master-0 kubenswrapper[4169]: I0219 03:02:57.579884 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.580049 master-0 kubenswrapper[4169]: I0219 03:02:57.579952 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.580391 master-0 kubenswrapper[4169]: I0219 03:02:57.580183 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.678124 master-0 kubenswrapper[4169]: E0219 03:02:57.678018 4169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 19 03:02:57.678545 master-0 kubenswrapper[4169]: I0219 03:02:57.678473 4169 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:02:57.678679 master-0 kubenswrapper[4169]: I0219 03:02:57.678486 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:02:57.698314 master-0 kubenswrapper[4169]: I0219 03:02:57.698244 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:02:57.719326 master-0 kubenswrapper[4169]: I0219 03:02:57.719236 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:02:57.726397 master-0 kubenswrapper[4169]: I0219 03:02:57.726361 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:02:57.892279 master-0 kubenswrapper[4169]: I0219 03:02:57.892186 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:57.893398 master-0 kubenswrapper[4169]: I0219 03:02:57.893341 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:57.893398 master-0 kubenswrapper[4169]: I0219 03:02:57.893398 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:57.893577 master-0 kubenswrapper[4169]: I0219 03:02:57.893426 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:57.893577 master-0 kubenswrapper[4169]: I0219 03:02:57.893506 4169 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:02:57.894755 master-0 kubenswrapper[4169]: E0219 03:02:57.894677 4169 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 19 03:02:58.069006 master-0 kubenswrapper[4169]: I0219 03:02:58.068902 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:02:58.176948 master-0 kubenswrapper[4169]: W0219 03:02:58.176782 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:02:58.176948 master-0 kubenswrapper[4169]: E0219 03:02:58.176864 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:02:58.225227 master-0 kubenswrapper[4169]: W0219 03:02:58.225132 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Feb 19 03:02:58.225227 master-0 kubenswrapper[4169]: E0219 03:02:58.225209 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:02:58.326139 master-0 kubenswrapper[4169]: W0219 03:02:58.326019 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:02:58.326139 master-0 kubenswrapper[4169]: E0219 03:02:58.326139 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:02:58.479430 master-0 kubenswrapper[4169]: E0219 03:02:58.479367 4169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 19 03:02:58.522029 master-0 kubenswrapper[4169]: W0219 03:02:58.521982 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc997c8e9d3be51d454d8e61e376bef08.slice/crio-45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55 WatchSource:0}: Error finding container 45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55: Status 404 returned error can't find the container with id 45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55 Feb 19 03:02:58.525469 master-0 kubenswrapper[4169]: I0219 03:02:58.525443 4169 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 03:02:58.578245 master-0 kubenswrapper[4169]: W0219 03:02:58.578172 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:02:58.578443 master-0 kubenswrapper[4169]: E0219 03:02:58.578258 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:02:58.605315 master-0 kubenswrapper[4169]: W0219 03:02:58.605226 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ad9373c007a4fcd25e70622bdc8deb.slice/crio-c741144c76ccb27ab8a3627dd9a2beb2d675b354f4a6e2cb399b5a08240ea149 WatchSource:0}: Error finding container c741144c76ccb27ab8a3627dd9a2beb2d675b354f4a6e2cb399b5a08240ea149: Status 404 returned error can't find the 
container with id c741144c76ccb27ab8a3627dd9a2beb2d675b354f4a6e2cb399b5a08240ea149 Feb 19 03:02:58.695439 master-0 kubenswrapper[4169]: I0219 03:02:58.695364 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:02:58.696843 master-0 kubenswrapper[4169]: I0219 03:02:58.696809 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:02:58.696896 master-0 kubenswrapper[4169]: I0219 03:02:58.696848 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:02:58.696896 master-0 kubenswrapper[4169]: I0219 03:02:58.696858 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:02:58.696955 master-0 kubenswrapper[4169]: I0219 03:02:58.696901 4169 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:02:58.697840 master-0 kubenswrapper[4169]: E0219 03:02:58.697802 4169 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 19 03:02:58.740959 master-0 kubenswrapper[4169]: W0219 03:02:58.740868 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12dab5d350ebc129b0bfa4714d330b15.slice/crio-10880e65f8f1292bea461c369196b5d5099f3abb559d63f3afe6c53ad3ae1a5f WatchSource:0}: Error finding container 10880e65f8f1292bea461c369196b5d5099f3abb559d63f3afe6c53ad3ae1a5f: Status 404 returned error can't find the container with id 10880e65f8f1292bea461c369196b5d5099f3abb559d63f3afe6c53ad3ae1a5f Feb 19 03:02:58.973500 master-0 kubenswrapper[4169]: W0219 03:02:58.973446 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod687e92a6cecf1e2beeef16a0b322ad08.slice/crio-288c3a57623280dd907a240618bbdd493e84db9c6fc6a9b8ebbd7c2959445df1 WatchSource:0}: Error finding container 288c3a57623280dd907a240618bbdd493e84db9c6fc6a9b8ebbd7c2959445df1: Status 404 returned error can't find the container with id 288c3a57623280dd907a240618bbdd493e84db9c6fc6a9b8ebbd7c2959445df1 Feb 19 03:02:58.997090 master-0 kubenswrapper[4169]: I0219 03:02:58.996938 4169 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 03:02:58.998817 master-0 kubenswrapper[4169]: E0219 03:02:58.998769 4169 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:02:59.068578 master-0 kubenswrapper[4169]: I0219 03:02:59.068505 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:02:59.232525 master-0 kubenswrapper[4169]: I0219 03:02:59.232307 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" 
event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"13103220887a41b425edd349c524421eaa06bddd41c4d0276cf0be744cde8eaf"} Feb 19 03:02:59.233457 master-0 kubenswrapper[4169]: I0219 03:02:59.233402 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"10880e65f8f1292bea461c369196b5d5099f3abb559d63f3afe6c53ad3ae1a5f"} Feb 19 03:02:59.234563 master-0 kubenswrapper[4169]: I0219 03:02:59.234535 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"c741144c76ccb27ab8a3627dd9a2beb2d675b354f4a6e2cb399b5a08240ea149"} Feb 19 03:02:59.236071 master-0 kubenswrapper[4169]: I0219 03:02:59.236017 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55"} Feb 19 03:02:59.237068 master-0 kubenswrapper[4169]: I0219 03:02:59.237038 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"288c3a57623280dd907a240618bbdd493e84db9c6fc6a9b8ebbd7c2959445df1"} Feb 19 03:02:59.501424 master-0 kubenswrapper[4169]: E0219 03:02:59.501276 4169 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189586bd89ccba72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.058994802 +0000 UTC m=+0.805186537,LastTimestamp:2026-02-19 03:02:57.058994802 +0000 UTC m=+0.805186537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:00.069383 master-0 kubenswrapper[4169]: I0219 03:03:00.069241 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:00.080713 master-0 kubenswrapper[4169]: E0219 03:03:00.080645 4169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 19 03:03:00.089611 master-0 kubenswrapper[4169]: W0219 03:03:00.089557 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:00.089611 master-0 kubenswrapper[4169]: E0219 03:03:00.089614 4169 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:03:00.298806 master-0 kubenswrapper[4169]: I0219 03:03:00.298743 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:00.299755 master-0 kubenswrapper[4169]: I0219 03:03:00.299723 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:00.299755 master-0 kubenswrapper[4169]: I0219 03:03:00.299752 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:00.299838 master-0 kubenswrapper[4169]: I0219 03:03:00.299765 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:00.299838 master-0 kubenswrapper[4169]: I0219 03:03:00.299824 4169 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:03:00.300901 master-0 kubenswrapper[4169]: E0219 03:03:00.300814 4169 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 19 03:03:00.565758 master-0 kubenswrapper[4169]: W0219 03:03:00.565702 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:00.565932 master-0 kubenswrapper[4169]: E0219 03:03:00.565763 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:03:00.570939 master-0 kubenswrapper[4169]: W0219 03:03:00.570880 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:00.571003 master-0 kubenswrapper[4169]: E0219 03:03:00.570948 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:03:01.069476 master-0 kubenswrapper[4169]: I0219 03:03:01.069389 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:01.283825 master-0 kubenswrapper[4169]: W0219 03:03:01.283713 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:01.283825 master-0 kubenswrapper[4169]: E0219 03:03:01.283787 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:03:02.068200 master-0 kubenswrapper[4169]: I0219 03:03:02.068151 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:02.244352 master-0 kubenswrapper[4169]: I0219 03:03:02.244228 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"057cad626bcfaec41c462ca1ec27ee5d9cbc1905800d5d8b5f0df0e891b48ec8"} Feb 19 03:03:03.068591 master-0 kubenswrapper[4169]: I0219 03:03:03.068517 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:03.248421 master-0 kubenswrapper[4169]: I0219 03:03:03.248369 4169 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="057cad626bcfaec41c462ca1ec27ee5d9cbc1905800d5d8b5f0df0e891b48ec8" exitCode=0 Feb 19 03:03:03.248421 master-0 kubenswrapper[4169]: I0219 03:03:03.248419 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"057cad626bcfaec41c462ca1ec27ee5d9cbc1905800d5d8b5f0df0e891b48ec8"} Feb 19 03:03:03.248659 master-0 kubenswrapper[4169]: I0219 03:03:03.248518 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:03.249915 master-0 kubenswrapper[4169]: I0219 03:03:03.249686 4169 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 03:03:03.250047 master-0 kubenswrapper[4169]: I0219 03:03:03.249757 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:03.250101 master-0 kubenswrapper[4169]: I0219 03:03:03.250064 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:03.250101 master-0 kubenswrapper[4169]: I0219 03:03:03.250078 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:03.251022 master-0 kubenswrapper[4169]: E0219 03:03:03.250975 4169 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection 
refused" logger="UnhandledError" Feb 19 03:03:03.282447 master-0 kubenswrapper[4169]: E0219 03:03:03.282382 4169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 19 03:03:03.501745 master-0 kubenswrapper[4169]: I0219 03:03:03.501699 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:03.503377 master-0 kubenswrapper[4169]: I0219 03:03:03.503345 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:03.503445 master-0 kubenswrapper[4169]: I0219 03:03:03.503387 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:03.503445 master-0 kubenswrapper[4169]: I0219 03:03:03.503406 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:03.503520 master-0 kubenswrapper[4169]: I0219 03:03:03.503466 4169 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:03:03.504190 master-0 kubenswrapper[4169]: E0219 03:03:03.504156 4169 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 19 03:03:04.069046 master-0 kubenswrapper[4169]: I0219 03:03:04.068987 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:04.957886 master-0 kubenswrapper[4169]: W0219 03:03:04.957721 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:04.957886 master-0 kubenswrapper[4169]: E0219 03:03:04.957807 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:03:05.070009 master-0 kubenswrapper[4169]: I0219 03:03:05.069935 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:05.257651 master-0 kubenswrapper[4169]: I0219 03:03:05.257523 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log" Feb 19 03:03:05.258105 master-0 kubenswrapper[4169]: I0219 03:03:05.258072 4169 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="e7152ef6d0229b7a284e7fcdac245a676684ec6e2db5abd55049fff47adf44b0" exitCode=1 Feb 19 03:03:05.258155 master-0 
kubenswrapper[4169]: I0219 03:03:05.258108 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"e7152ef6d0229b7a284e7fcdac245a676684ec6e2db5abd55049fff47adf44b0"} Feb 19 03:03:05.258230 master-0 kubenswrapper[4169]: I0219 03:03:05.258198 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:05.259019 master-0 kubenswrapper[4169]: I0219 03:03:05.258985 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:05.259019 master-0 kubenswrapper[4169]: I0219 03:03:05.259012 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:05.259019 master-0 kubenswrapper[4169]: I0219 03:03:05.259021 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:05.259294 master-0 kubenswrapper[4169]: I0219 03:03:05.259249 4169 scope.go:117] "RemoveContainer" containerID="e7152ef6d0229b7a284e7fcdac245a676684ec6e2db5abd55049fff47adf44b0" Feb 19 03:03:05.398946 master-0 kubenswrapper[4169]: W0219 03:03:05.398846 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:05.399151 master-0 kubenswrapper[4169]: E0219 03:03:05.398954 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:03:05.733096 master-0 kubenswrapper[4169]: W0219 03:03:05.733018 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:05.733304 master-0 kubenswrapper[4169]: E0219 03:03:05.733116 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:03:06.067979 master-0 kubenswrapper[4169]: I0219 03:03:06.067853 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:06.608867 master-0 kubenswrapper[4169]: W0219 03:03:06.608776 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:06.609400 master-0 kubenswrapper[4169]: E0219 03:03:06.608870 4169 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:03:07.068528 master-0 kubenswrapper[4169]: I0219 03:03:07.068477 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:07.189419 master-0 kubenswrapper[4169]: E0219 03:03:07.189370 4169 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 19 03:03:08.067923 master-0 kubenswrapper[4169]: I0219 03:03:08.067872 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:03:08.265788 master-0 kubenswrapper[4169]: I0219 03:03:08.265495 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff"} Feb 19 03:03:08.265788 master-0 kubenswrapper[4169]: I0219 03:03:08.265543 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:08.266586 master-0 kubenswrapper[4169]: I0219 03:03:08.266532 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:08.266586 master-0 kubenswrapper[4169]: I0219 03:03:08.266564 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:08.266586 master-0 kubenswrapper[4169]: I0219 03:03:08.266574 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:08.267724 master-0 kubenswrapper[4169]: I0219 03:03:08.267678 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:08.267724 master-0 kubenswrapper[4169]: I0219 03:03:08.267688 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65"} Feb 19 03:03:08.267724 master-0 kubenswrapper[4169]: I0219 03:03:08.267728 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908"} Feb 19 03:03:08.268374 master-0 kubenswrapper[4169]: I0219 03:03:08.268336 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:08.268374 master-0 kubenswrapper[4169]: I0219 03:03:08.268364 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:08.268374 master-0 kubenswrapper[4169]: I0219 03:03:08.268375 
4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:08.269330 master-0 kubenswrapper[4169]: I0219 03:03:08.269283 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6"} Feb 19 03:03:08.270552 master-0 kubenswrapper[4169]: I0219 03:03:08.270512 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 19 03:03:08.271035 master-0 kubenswrapper[4169]: I0219 03:03:08.270999 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log" Feb 19 03:03:08.271410 master-0 kubenswrapper[4169]: I0219 03:03:08.271370 4169 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="b484a3e16e1150999d6572eb5c0f1d44cfd715ab5fadfe3ef26dc7255237f8f0" exitCode=1 Feb 19 03:03:08.271410 master-0 kubenswrapper[4169]: I0219 03:03:08.271401 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"b484a3e16e1150999d6572eb5c0f1d44cfd715ab5fadfe3ef26dc7255237f8f0"} Feb 19 03:03:08.271557 master-0 kubenswrapper[4169]: I0219 03:03:08.271459 4169 scope.go:117] "RemoveContainer" containerID="e7152ef6d0229b7a284e7fcdac245a676684ec6e2db5abd55049fff47adf44b0" Feb 19 03:03:08.271557 master-0 kubenswrapper[4169]: I0219 03:03:08.271438 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:08.272320 master-0 kubenswrapper[4169]: I0219 03:03:08.272211 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:08.272320 master-0 kubenswrapper[4169]: I0219 03:03:08.272283 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:08.272320 master-0 kubenswrapper[4169]: I0219 03:03:08.272296 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:08.272589 master-0 kubenswrapper[4169]: I0219 03:03:08.272542 4169 scope.go:117] "RemoveContainer" containerID="b484a3e16e1150999d6572eb5c0f1d44cfd715ab5fadfe3ef26dc7255237f8f0" Feb 19 03:03:08.272699 master-0 kubenswrapper[4169]: E0219 03:03:08.272672 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 19 03:03:08.273504 master-0 kubenswrapper[4169]: I0219 03:03:08.273437 4169 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="10ad446c5ae8d63affc8eb0bacbb20232d6d1b38bc9bc64c6e6df2fe6d1b6cfd" exitCode=0 Feb 19 03:03:08.273607 master-0 kubenswrapper[4169]: I0219 03:03:08.273502 
4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"10ad446c5ae8d63affc8eb0bacbb20232d6d1b38bc9bc64c6e6df2fe6d1b6cfd"} Feb 19 03:03:08.273607 master-0 kubenswrapper[4169]: I0219 03:03:08.273516 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:08.275539 master-0 kubenswrapper[4169]: I0219 03:03:08.274359 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:08.275539 master-0 kubenswrapper[4169]: I0219 03:03:08.274381 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:08.275539 master-0 kubenswrapper[4169]: I0219 03:03:08.274389 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:08.281763 master-0 kubenswrapper[4169]: I0219 03:03:08.281395 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:08.282494 master-0 kubenswrapper[4169]: I0219 03:03:08.282368 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:08.282494 master-0 kubenswrapper[4169]: I0219 03:03:08.282418 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:08.282494 master-0 kubenswrapper[4169]: I0219 03:03:08.282431 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:09.277666 master-0 kubenswrapper[4169]: I0219 03:03:09.277594 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 19 03:03:09.278225 master-0 kubenswrapper[4169]: I0219 03:03:09.278186 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:09.278961 master-0 kubenswrapper[4169]: I0219 03:03:09.278922 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:09.278961 master-0 kubenswrapper[4169]: I0219 03:03:09.278953 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:09.278961 master-0 kubenswrapper[4169]: I0219 03:03:09.278964 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:09.279287 master-0 kubenswrapper[4169]: I0219 03:03:09.279245 4169 scope.go:117] "RemoveContainer" containerID="b484a3e16e1150999d6572eb5c0f1d44cfd715ab5fadfe3ef26dc7255237f8f0" Feb 19 03:03:09.279469 master-0 kubenswrapper[4169]: E0219 03:03:09.279433 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 19 03:03:09.279782 master-0 kubenswrapper[4169]: I0219 03:03:09.279750 4169 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"d18413342a722838be3aeba368600d701226af1bb0655a2558eb4a099c9c2796"} Feb 19 03:03:09.279828 master-0 kubenswrapper[4169]: I0219 03:03:09.279793 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:09.279869 master-0 kubenswrapper[4169]: I0219 03:03:09.279845 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:09.280390 master-0 kubenswrapper[4169]: I0219 03:03:09.280347 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:09.280390 master-0 kubenswrapper[4169]: I0219 03:03:09.280388 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:09.280485 master-0 kubenswrapper[4169]: I0219 03:03:09.280401 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:09.280485 master-0 kubenswrapper[4169]: I0219 03:03:09.280441 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:09.280485 master-0 kubenswrapper[4169]: I0219 03:03:09.280461 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:09.280485 master-0 kubenswrapper[4169]: I0219 03:03:09.280471 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:09.904659 master-0 kubenswrapper[4169]: I0219 03:03:09.904624 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:09.915682 master-0 kubenswrapper[4169]: I0219 03:03:09.915644 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:09.915682 master-0 kubenswrapper[4169]: I0219 03:03:09.915676 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:09.915797 master-0 kubenswrapper[4169]: I0219 03:03:09.915687 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:09.915797 master-0 kubenswrapper[4169]: I0219 03:03:09.915743 4169 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:03:10.204919 master-0 kubenswrapper[4169]: E0219 03:03:10.204730 4169 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 19 03:03:10.205681 master-0 kubenswrapper[4169]: I0219 03:03:10.205611 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:10.205681 master-0 kubenswrapper[4169]: E0219 03:03:10.205633 4169 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace 
\"kube-node-lease\"" interval="7s" Feb 19 03:03:10.205877 master-0 kubenswrapper[4169]: E0219 03:03:10.205742 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd89ccba72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.058994802 +0000 UTC m=+0.805186537,LastTimestamp:2026-02-19 03:02:57.058994802 +0000 UTC m=+0.805186537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.209666 master-0 kubenswrapper[4169]: E0219 03:03:10.208116 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d698938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119603 +0000 UTC m=+0.865794745,LastTimestamp:2026-02-19 03:02:57.119603 +0000 UTC m=+0.865794745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.212933 master-0 kubenswrapper[4169]: E0219 03:03:10.212853 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a19cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119640013 +0000 UTC m=+0.865831768,LastTimestamp:2026-02-19 03:02:57.119640013 +0000 UTC m=+0.865831768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.216625 master-0 kubenswrapper[4169]: E0219 03:03:10.216528 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a5c04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119656964 +0000 UTC m=+0.865848709,LastTimestamp:2026-02-19 03:02:57.119656964 +0000 
UTC m=+0.865848709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.220075 master-0 kubenswrapper[4169]: E0219 03:03:10.219995 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd919088af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.189267631 +0000 UTC m=+0.935459366,LastTimestamp:2026-02-19 03:02:57.189267631 +0000 UTC m=+0.935459366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.223994 master-0 kubenswrapper[4169]: E0219 03:03:10.223921 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d698938\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d698938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119603 +0000 UTC m=+0.865794745,LastTimestamp:2026-02-19 03:02:57.28790014 +0000 UTC m=+1.034091875,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.227850 master-0 kubenswrapper[4169]: E0219 03:03:10.227790 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a19cd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a19cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119640013 +0000 UTC m=+0.865831768,LastTimestamp:2026-02-19 03:02:57.287920662 +0000 UTC m=+1.034112397,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.231986 master-0 kubenswrapper[4169]: E0219 03:03:10.231898 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a5c04\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a5c04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119656964 +0000 UTC m=+0.865848709,LastTimestamp:2026-02-19 03:02:57.287931892 +0000 UTC m=+1.034123627,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.236220 master-0 kubenswrapper[4169]: E0219 03:03:10.236152 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d698938\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d698938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119603 +0000 UTC m=+0.865794745,LastTimestamp:2026-02-19 03:02:57.328163053 +0000 UTC m=+1.074354828,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.240995 master-0 kubenswrapper[4169]: E0219 03:03:10.240934 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a19cd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a19cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119640013 +0000 UTC m=+0.865831768,LastTimestamp:2026-02-19 03:02:57.328204656 +0000 UTC m=+1.074396431,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.245635 master-0 kubenswrapper[4169]: E0219 03:03:10.245567 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a5c04\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a5c04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119656964 +0000 UTC m=+0.865848709,LastTimestamp:2026-02-19 03:02:57.328227197 +0000 UTC m=+1.074418972,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.251777 master-0 kubenswrapper[4169]: E0219 03:03:10.251662 4169 event.go:359] "Server rejected 
event (will not retry!)" err="events \"master-0.189586bd8d698938\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d698938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119603 +0000 UTC m=+0.865794745,LastTimestamp:2026-02-19 03:02:57.329531895 +0000 UTC m=+1.075723630,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.255803 master-0 kubenswrapper[4169]: E0219 03:03:10.255722 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a19cd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a19cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119640013 +0000 UTC m=+0.865831768,LastTimestamp:2026-02-19 03:02:57.329543896 +0000 UTC m=+1.075735631,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.260230 master-0 kubenswrapper[4169]: E0219 03:03:10.260147 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a5c04\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a5c04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119656964 +0000 UTC m=+0.865848709,LastTimestamp:2026-02-19 03:02:57.329552817 +0000 UTC m=+1.075744552,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.264486 master-0 kubenswrapper[4169]: E0219 03:03:10.264357 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d698938\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d698938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119603 +0000 UTC m=+0.865794745,LastTimestamp:2026-02-19 
03:02:57.329624122 +0000 UTC m=+1.075815857,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.268948 master-0 kubenswrapper[4169]: E0219 03:03:10.268867 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a19cd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a19cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119640013 +0000 UTC m=+0.865831768,LastTimestamp:2026-02-19 03:02:57.329635352 +0000 UTC m=+1.075827077,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.273451 master-0 kubenswrapper[4169]: E0219 03:03:10.273203 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a5c04\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a5c04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119656964 +0000 UTC m=+0.865848709,LastTimestamp:2026-02-19 03:02:57.329642893 +0000 UTC m=+1.075834628,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.277045 master-0 kubenswrapper[4169]: E0219 03:03:10.276957 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d698938\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d698938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119603 +0000 UTC m=+0.865794745,LastTimestamp:2026-02-19 03:02:57.330557335 +0000 UTC m=+1.076749090,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.283795 master-0 kubenswrapper[4169]: E0219 03:03:10.283671 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a19cd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a19cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119640013 +0000 UTC m=+0.865831768,LastTimestamp:2026-02-19 03:02:57.330577276 +0000 UTC m=+1.076769021,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.289002 master-0 kubenswrapper[4169]: E0219 03:03:10.288876 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a5c04\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a5c04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119656964 +0000 UTC m=+0.865848709,LastTimestamp:2026-02-19 03:02:57.330588507 +0000 UTC m=+1.076780242,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.294817 master-0 kubenswrapper[4169]: E0219 03:03:10.294674 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d698938\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d698938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119603 +0000 UTC m=+0.865794745,LastTimestamp:2026-02-19 03:02:57.331100781 +0000 UTC m=+1.077292546,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.300351 master-0 kubenswrapper[4169]: E0219 03:03:10.300202 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a19cd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a19cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119640013 +0000 UTC m=+0.865831768,LastTimestamp:2026-02-19 03:02:57.331128253 +0000 UTC m=+1.077320008,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.307027 master-0 kubenswrapper[4169]: E0219 03:03:10.306863 4169 event.go:359] "Server rejected 
event (will not retry!)" err="events \"master-0.189586bd8d6a5c04\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a5c04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119656964 +0000 UTC m=+0.865848709,LastTimestamp:2026-02-19 03:02:57.331149485 +0000 UTC m=+1.077341230,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.313208 master-0 kubenswrapper[4169]: E0219 03:03:10.313034 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d698938\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d698938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119603 +0000 UTC m=+0.865794745,LastTimestamp:2026-02-19 03:02:57.332006183 +0000 UTC m=+1.078197938,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.317601 master-0 kubenswrapper[4169]: E0219 03:03:10.317484 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189586bd8d6a19cd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189586bd8d6a19cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:57.119640013 +0000 UTC m=+0.865831768,LastTimestamp:2026-02-19 03:02:57.332030234 +0000 UTC m=+1.078221989,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.322522 master-0 kubenswrapper[4169]: E0219 03:03:10.322430 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bde1347260 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:58.525409888 +0000 UTC m=+2.271601623,LastTimestamp:2026-02-19 03:02:58.525409888 +0000 UTC m=+2.271601623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.326675 master-0 kubenswrapper[4169]: E0219 03:03:10.326601 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189586bde611ac3c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:58.60701702 +0000 UTC m=+2.353208755,LastTimestamp:2026-02-19 03:02:58.60701702 +0000 UTC m=+2.353208755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.330810 master-0 kubenswrapper[4169]: E0219 03:03:10.330663 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189586bdee2de6af openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:58.743084719 +0000 UTC m=+2.489276494,LastTimestamp:2026-02-19 03:02:58.743084719 +0000 UTC m=+2.489276494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.338729 master-0 kubenswrapper[4169]: E0219 03:03:10.338574 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189586bdf23cbc7b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:58.811165819 +0000 UTC m=+2.557357554,LastTimestamp:2026-02-19 03:02:58.811165819 +0000 UTC m=+2.557357554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.344112 master-0 kubenswrapper[4169]: E0219 03:03:10.343974 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189586bdfc259f7d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:02:58.977423229 +0000 UTC m=+2.723614964,LastTimestamp:2026-02-19 03:02:58.977423229 +0000 UTC m=+2.723614964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.352636 master-0 kubenswrapper[4169]: E0219 03:03:10.352455 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586be5252027e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" in 1.897s (1.897s including waiting). 
Image size: 464984427 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:00.423172734 +0000 UTC m=+4.169364509,LastTimestamp:2026-02-19 03:03:00.423172734 +0000 UTC m=+4.169364509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.357905 master-0 kubenswrapper[4169]: E0219 03:03:10.357765 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bea922b4ff openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:01.879690495 +0000 UTC m=+5.625882230,LastTimestamp:2026-02-19 03:03:01.879690495 +0000 UTC m=+5.625882230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.362986 master-0 kubenswrapper[4169]: E0219 03:03:10.362817 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bedd30d9c2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:02.753032642 +0000 UTC m=+6.499224377,LastTimestamp:2026-02-19 03:03:02.753032642 +0000 UTC m=+6.499224377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.368543 master-0 kubenswrapper[4169]: E0219 03:03:10.368428 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bf36de4026 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:04.257568806 +0000 UTC m=+8.003760541,LastTimestamp:2026-02-19 
03:03:04.257568806 +0000 UTC m=+8.003760541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.374166 master-0 kubenswrapper[4169]: E0219 03:03:10.374015 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bf514834f6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:04.700720374 +0000 UTC m=+8.446912109,LastTimestamp:2026-02-19 03:03:04.700720374 +0000 UTC m=+8.446912109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.400648 master-0 kubenswrapper[4169]: E0219 03:03:10.400522 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bf529e9b18 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:04.723159832 +0000 UTC m=+8.469351607,LastTimestamp:2026-02-19 03:03:04.723159832 +0000 UTC m=+8.469351607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.405201 master-0 kubenswrapper[4169]: E0219 03:03:10.405117 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189586bf36de4026\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bf36de4026 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:04.257568806 +0000 UTC m=+8.003760541,LastTimestamp:2026-02-19 03:03:07.267999424 +0000 UTC m=+11.014191159,Count:2,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.411070 master-0 kubenswrapper[4169]: E0219 03:03:10.410986 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189586bfebca0847 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" in 8.549s (8.549s including waiting). Image size: 529218694 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:07.292919879 +0000 UTC m=+11.039111644,LastTimestamp:2026-02-19 03:03:07.292919879 +0000 UTC m=+11.039111644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.419235 master-0 kubenswrapper[4169]: E0219 03:03:10.419113 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189586bffa37dcc4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:07.534998724 +0000 UTC m=+11.281190499,LastTimestamp:2026-02-19 03:03:07.534998724 +0000 UTC m=+11.281190499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.423922 master-0 kubenswrapper[4169]: E0219 03:03:10.423798 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189586bf514834f6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bf514834f6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:04.700720374 +0000 UTC m=+8.446912109,LastTimestamp:2026-02-19 03:03:07.537647726 +0000 UTC m=+11.283839491,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.430207 master-0 kubenswrapper[4169]: E0219 03:03:10.429944 4169 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189586bffb847e28 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:07.556797992 +0000 UTC m=+11.302989737,LastTimestamp:2026-02-19 03:03:07.556797992 +0000 UTC m=+11.302989737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.435235 master-0 kubenswrapper[4169]: E0219 03:03:10.435084 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189586bf529e9b18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bf529e9b18 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:04.723159832 +0000 UTC m=+8.469351607,LastTimestamp:2026-02-19 03:03:07.55866066 +0000 UTC m=+11.304852425,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.441017 master-0 kubenswrapper[4169]: E0219 03:03:10.440887 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189586bffbb2089b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:07.559782555 +0000 UTC m=+11.305974330,LastTimestamp:2026-02-19 03:03:07.559782555 +0000 UTC m=+11.305974330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.446008 master-0 kubenswrapper[4169]: E0219 03:03:10.445854 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189586bffe09a961 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 8.621s (8.621s including waiting). Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:07.599079777 +0000 UTC m=+11.345271532,LastTimestamp:2026-02-19 03:03:07.599079777 +0000 UTC m=+11.345271532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.450628 master-0 kubenswrapper[4169]: E0219 03:03:10.450496 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189586c0042c8e37 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 9.094s (9.094s including waiting). Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:07.702029879 +0000 UTC m=+11.448221654,LastTimestamp:2026-02-19 03:03:07.702029879 +0000 UTC m=+11.448221654,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.455104 master-0 kubenswrapper[4169]: E0219 03:03:10.454765 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189586c00d3cee25 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 9.042s (9.042s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:07.854097957 +0000 UTC m=+11.600289692,LastTimestamp:2026-02-19 03:03:07.854097957 +0000 UTC m=+11.600289692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.459740 master-0 kubenswrapper[4169]: E0219 03:03:10.459616 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189586c00fea0609 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:07.898996233 +0000 UTC m=+11.645187958,LastTimestamp:2026-02-19 03:03:07.898996233 +0000 UTC m=+11.645187958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.463663 master-0 kubenswrapper[4169]: E0219 03:03:10.463572 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189586c0100491d5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:07.900735957 +0000 UTC m=+11.646927692,LastTimestamp:2026-02-19 03:03:07.900735957 +0000 UTC m=+11.646927692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.467591 master-0 kubenswrapper[4169]: E0219 03:03:10.467463 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189586c01008be94 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:07.901009556 +0000 UTC m=+11.647201291,LastTimestamp:2026-02-19 03:03:07.901009556 +0000 UTC m=+11.647201291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.473217 master-0 kubenswrapper[4169]: E0219 
03:03:10.470971 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189586c0163fe9a6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.005288358 +0000 UTC m=+11.751480083,LastTimestamp:2026-02-19 03:03:08.005288358 +0000 UTC m=+11.751480083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.474229 master-0 kubenswrapper[4169]: E0219 03:03:10.474128 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189586c017afc478 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.029396088 +0000 UTC m=+11.775587823,LastTimestamp:2026-02-19 03:03:08.029396088 +0000 UTC m=+11.775587823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.477395 master-0 kubenswrapper[4169]: E0219 03:03:10.477320 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189586c017d2aab1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.031683249 +0000 UTC m=+11.777874984,LastTimestamp:2026-02-19 03:03:08.031683249 +0000 UTC m=+11.777874984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.480638 master-0 kubenswrapper[4169]: E0219 03:03:10.480557 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189586c017de0e2c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.032429612 +0000 UTC m=+11.778621347,LastTimestamp:2026-02-19 03:03:08.032429612 +0000 UTC m=+11.778621347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.483641 master-0 kubenswrapper[4169]: E0219 03:03:10.483561 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189586c01b6af58e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.091995534 +0000 UTC m=+11.838187269,LastTimestamp:2026-02-19 03:03:08.091995534 +0000 UTC m=+11.838187269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.486742 master-0 kubenswrapper[4169]: E0219 03:03:10.486658 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189586c01d50c680 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.123833984 +0000 UTC m=+11.870025719,LastTimestamp:2026-02-19 03:03:08.123833984 +0000 UTC m=+11.870025719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.490818 master-0 kubenswrapper[4169]: E0219 03:03:10.490735 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586c0262f8937 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container 
kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.272650551 +0000 UTC m=+12.018842296,LastTimestamp:2026-02-19 03:03:08.272650551 +0000 UTC m=+12.018842296,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.494249 master-0 kubenswrapper[4169]: E0219 03:03:10.494174 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189586c026b4178d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.281337741 +0000 UTC m=+12.027529476,LastTimestamp:2026-02-19 03:03:08.281337741 +0000 UTC m=+12.027529476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.497784 master-0 kubenswrapper[4169]: E0219 03:03:10.497685 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189586c031334f81 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.457447297 +0000 UTC m=+12.203639062,LastTimestamp:2026-02-19 03:03:08.457447297 +0000 UTC m=+12.203639062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.501604 master-0 kubenswrapper[4169]: E0219 03:03:10.501498 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189586c031d76679 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 
03:03:08.468201081 +0000 UTC m=+12.214392816,LastTimestamp:2026-02-19 03:03:08.468201081 +0000 UTC m=+12.214392816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.516304 master-0 kubenswrapper[4169]: E0219 03:03:10.508781 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189586c031ea264f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.469429839 +0000 UTC m=+12.215621594,LastTimestamp:2026-02-19 03:03:08.469429839 +0000 UTC m=+12.215621594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:10.528350 master-0 kubenswrapper[4169]: E0219 03:03:10.528155 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189586c0262f8937\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586c0262f8937 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.272650551 +0000 UTC m=+12.018842296,LastTimestamp:2026-02-19 03:03:09.279390011 +0000 UTC m=+13.025581746,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:11.075943 master-0 kubenswrapper[4169]: I0219 03:03:11.072557 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:11.500997 master-0 kubenswrapper[4169]: E0219 03:03:11.500788 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189586c0e61c683b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\" in 3.46s (3.46s including waiting). Image size: 505137106 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:11.492622395 +0000 UTC m=+15.238814170,LastTimestamp:2026-02-19 03:03:11.492622395 +0000 UTC m=+15.238814170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:11.522954 master-0 kubenswrapper[4169]: E0219 03:03:11.522759 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189586c0e76f7007 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" in 3.045s (3.045s including waiting). Image size: 514875199 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:11.514841095 +0000 UTC m=+15.261032870,LastTimestamp:2026-02-19 03:03:11.514841095 +0000 UTC m=+15.261032870,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:11.639009 master-0 kubenswrapper[4169]: I0219 03:03:11.638926 4169 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 03:03:11.656081 master-0 kubenswrapper[4169]: I0219 03:03:11.656027 4169 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 03:03:11.767250 master-0 kubenswrapper[4169]: E0219 03:03:11.766759 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189586c0f604b520 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:11.759504672 +0000 UTC m=+15.505696407,LastTimestamp:2026-02-19 03:03:11.759504672 +0000 UTC m=+15.505696407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 
03:03:11.774732 master-0 kubenswrapper[4169]: E0219 03:03:11.774536 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189586c0f614520c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:11.760527884 +0000 UTC m=+15.506719649,LastTimestamp:2026-02-19 03:03:11.760527884 +0000 UTC m=+15.506719649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:11.815594 master-0 kubenswrapper[4169]: E0219 03:03:11.815344 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189586c0f90a1810 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:11.810189328 +0000 UTC m=+15.556381093,LastTimestamp:2026-02-19 03:03:11.810189328 +0000 UTC m=+15.556381093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:11.820594 master-0 kubenswrapper[4169]: E0219 03:03:11.820446 4169 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189586c0f912fad2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:11.810771666 +0000 UTC m=+15.556963431,LastTimestamp:2026-02-19 03:03:11.810771666 +0000 UTC m=+15.556963431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:12.075232 master-0 kubenswrapper[4169]: I0219 03:03:12.075105 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 19 03:03:12.291121 master-0 kubenswrapper[4169]: I0219 03:03:12.291019 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a"} Feb 19 03:03:12.291121 master-0 kubenswrapper[4169]: I0219 03:03:12.291142 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:12.295595 master-0 kubenswrapper[4169]: I0219 03:03:12.295530 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:12.295595 master-0 kubenswrapper[4169]: I0219 03:03:12.295573 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:12.295595 master-0 kubenswrapper[4169]: I0219 03:03:12.295588 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:12.299098 master-0 kubenswrapper[4169]: I0219 03:03:12.299061 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"82a40f80e34c4f63706840b48b0aa48486b2ad68c13d50974f11a3442433c7ea"} Feb 19 03:03:12.299318 master-0 kubenswrapper[4169]: I0219 03:03:12.299183 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:12.300213 master-0 kubenswrapper[4169]: I0219 03:03:12.300145 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:12.300213 master-0 kubenswrapper[4169]: I0219 03:03:12.300173 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:12.300213 master-0 kubenswrapper[4169]: I0219 03:03:12.300184 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:13.075696 master-0 kubenswrapper[4169]: I0219 03:03:13.075616 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:13.167772 master-0 kubenswrapper[4169]: W0219 03:03:13.167710 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 19 03:03:13.167918 master-0 kubenswrapper[4169]: E0219 03:03:13.167792 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 19 03:03:13.301486 master-0 kubenswrapper[4169]: I0219 03:03:13.301400 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:13.301486 master-0 kubenswrapper[4169]: I0219 03:03:13.301474 4169 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:13.302541 master-0 kubenswrapper[4169]: I0219 03:03:13.302481 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:13.302592 master-0 kubenswrapper[4169]: I0219 03:03:13.302545 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:13.302592 master-0 kubenswrapper[4169]: I0219 03:03:13.302585 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:13.302698 master-0 kubenswrapper[4169]: I0219 03:03:13.302671 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:13.302811 master-0 kubenswrapper[4169]: I0219 03:03:13.302795 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:13.302895 master-0 kubenswrapper[4169]: I0219 03:03:13.302882 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:13.903433 master-0 kubenswrapper[4169]: I0219 03:03:13.903370 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:03:14.075834 master-0 kubenswrapper[4169]: I0219 03:03:14.075683 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:14.303752 master-0 kubenswrapper[4169]: I0219 03:03:14.303649 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:14.304620 master-0 kubenswrapper[4169]: I0219 03:03:14.304598 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:14.304695 master-0 kubenswrapper[4169]: I0219 03:03:14.304629 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:14.304695 master-0 kubenswrapper[4169]: I0219 03:03:14.304645 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:14.334184 master-0 kubenswrapper[4169]: W0219 03:03:14.334114 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:14.334184 master-0 kubenswrapper[4169]: E0219 03:03:14.334170 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 19 03:03:15.074291 master-0 kubenswrapper[4169]: I0219 03:03:15.074194 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:15.361102 master-0 
kubenswrapper[4169]: W0219 03:03:15.360952 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 19 03:03:15.361102 master-0 kubenswrapper[4169]: E0219 03:03:15.361022 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 19 03:03:15.411217 master-0 kubenswrapper[4169]: I0219 03:03:15.411112 4169 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:03:15.411426 master-0 kubenswrapper[4169]: I0219 03:03:15.411376 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:15.412593 master-0 kubenswrapper[4169]: I0219 03:03:15.412540 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:15.412593 master-0 kubenswrapper[4169]: I0219 03:03:15.412589 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:15.412767 master-0 kubenswrapper[4169]: I0219 03:03:15.412607 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:15.415403 master-0 kubenswrapper[4169]: W0219 03:03:15.415289 4169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 19 03:03:15.415552 master-0 kubenswrapper[4169]: E0219 03:03:15.415398 4169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 19 03:03:15.418314 master-0 kubenswrapper[4169]: I0219 03:03:15.418228 4169 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:03:16.073219 master-0 kubenswrapper[4169]: I0219 03:03:16.073120 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:16.307002 master-0 kubenswrapper[4169]: I0219 03:03:16.306949 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:16.308113 master-0 kubenswrapper[4169]: I0219 03:03:16.307854 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:16.308113 master-0 kubenswrapper[4169]: I0219 03:03:16.308083 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:16.308113 master-0 kubenswrapper[4169]: I0219 03:03:16.308099 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:16.312500 master-0 
kubenswrapper[4169]: I0219 03:03:16.312467 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:03:17.074641 master-0 kubenswrapper[4169]: I0219 03:03:17.074527 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:17.189879 master-0 kubenswrapper[4169]: E0219 03:03:17.189784 4169 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 19 03:03:17.205535 master-0 kubenswrapper[4169]: I0219 03:03:17.205454 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:17.206688 master-0 kubenswrapper[4169]: I0219 03:03:17.206630 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:17.207062 master-0 kubenswrapper[4169]: I0219 03:03:17.206691 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:17.207159 master-0 kubenswrapper[4169]: I0219 03:03:17.207068 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:17.207159 master-0 kubenswrapper[4169]: I0219 03:03:17.207150 4169 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:03:17.213183 master-0 kubenswrapper[4169]: E0219 03:03:17.212912 4169 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 03:03:17.215196 master-0 kubenswrapper[4169]: E0219 03:03:17.215135 4169 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 19 03:03:17.309228 master-0 kubenswrapper[4169]: I0219 03:03:17.309174 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:17.310071 master-0 kubenswrapper[4169]: I0219 03:03:17.310044 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:17.310196 master-0 kubenswrapper[4169]: I0219 03:03:17.310080 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:17.310196 master-0 kubenswrapper[4169]: I0219 03:03:17.310092 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:17.984949 master-0 kubenswrapper[4169]: I0219 03:03:17.984814 4169 csr.go:261] certificate signing request csr-2zzns is approved, waiting to be issued Feb 19 03:03:18.073810 master-0 kubenswrapper[4169]: I0219 03:03:18.073739 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:18.740661 master-0 kubenswrapper[4169]: I0219 03:03:18.740563 
4169 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:03:18.741487 master-0 kubenswrapper[4169]: I0219 03:03:18.740785 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:18.742187 master-0 kubenswrapper[4169]: I0219 03:03:18.742132 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:18.742187 master-0 kubenswrapper[4169]: I0219 03:03:18.742187 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:18.742396 master-0 kubenswrapper[4169]: I0219 03:03:18.742204 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:18.746497 master-0 kubenswrapper[4169]: I0219 03:03:18.746446 4169 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:03:18.837720 master-0 kubenswrapper[4169]: I0219 03:03:18.837608 4169 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:03:18.845382 master-0 kubenswrapper[4169]: I0219 03:03:18.845302 4169 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:03:19.072810 master-0 kubenswrapper[4169]: I0219 03:03:19.072688 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:19.312936 master-0 kubenswrapper[4169]: I0219 03:03:19.312877 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:19.313172 master-0 kubenswrapper[4169]: I0219 03:03:19.312889 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:03:19.313172 master-0 kubenswrapper[4169]: I0219 03:03:19.313071 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:03:19.313554 master-0 kubenswrapper[4169]: I0219 03:03:19.313521 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:19.313595 master-0 kubenswrapper[4169]: I0219 03:03:19.313563 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:19.313595 master-0 kubenswrapper[4169]: I0219 03:03:19.313576 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:19.316130 master-0 kubenswrapper[4169]: I0219 03:03:19.316096 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:03:20.072910 master-0 kubenswrapper[4169]: I0219 03:03:20.072853 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:20.319572 master-0 
kubenswrapper[4169]: I0219 03:03:20.319492 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:20.320847 master-0 kubenswrapper[4169]: I0219 03:03:20.320806 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:20.320847 master-0 kubenswrapper[4169]: I0219 03:03:20.320849 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:20.321009 master-0 kubenswrapper[4169]: I0219 03:03:20.320909 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:21.074583 master-0 kubenswrapper[4169]: I0219 03:03:21.074499 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:21.226899 master-0 kubenswrapper[4169]: I0219 03:03:21.226827 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:21.228076 master-0 kubenswrapper[4169]: I0219 03:03:21.228035 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:21.228117 master-0 kubenswrapper[4169]: I0219 03:03:21.228091 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:21.228117 master-0 kubenswrapper[4169]: I0219 03:03:21.228108 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:21.228569 master-0 kubenswrapper[4169]: I0219 03:03:21.228541 4169 scope.go:117] "RemoveContainer" containerID="b484a3e16e1150999d6572eb5c0f1d44cfd715ab5fadfe3ef26dc7255237f8f0" Feb 19 03:03:21.237023 master-0 kubenswrapper[4169]: E0219 03:03:21.236868 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189586bf36de4026\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bf36de4026 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:04.257568806 +0000 UTC m=+8.003760541,LastTimestamp:2026-02-19 03:03:21.230695178 +0000 UTC m=+24.976886923,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:21.321917 master-0 kubenswrapper[4169]: I0219 03:03:21.321855 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:21.322852 master-0 kubenswrapper[4169]: I0219 03:03:21.322825 4169 kubelet_node_status.go:724] "Recording event message for 
node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:21.322913 master-0 kubenswrapper[4169]: I0219 03:03:21.322861 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:21.322913 master-0 kubenswrapper[4169]: I0219 03:03:21.322872 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:21.326683 master-0 kubenswrapper[4169]: I0219 03:03:21.326613 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:03:21.410761 master-0 kubenswrapper[4169]: E0219 03:03:21.410598 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189586bf514834f6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bf514834f6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:04.700720374 +0000 UTC m=+8.446912109,LastTimestamp:2026-02-19 03:03:21.404305892 +0000 UTC m=+25.150497677,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:21.421213 master-0 kubenswrapper[4169]: E0219 03:03:21.421019 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189586bf529e9b18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586bf529e9b18 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:04.723159832 +0000 UTC m=+8.469351607,LastTimestamp:2026-02-19 03:03:21.414841486 +0000 UTC m=+25.161033231,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:22.072770 master-0 kubenswrapper[4169]: I0219 03:03:22.072729 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:22.326723 master-0 kubenswrapper[4169]: I0219 03:03:22.326557 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 19 03:03:22.327454 master-0 
kubenswrapper[4169]: I0219 03:03:22.327066 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 19 03:03:22.327525 master-0 kubenswrapper[4169]: I0219 03:03:22.327481 4169 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="53d32d6e913448c501ea08b87db55bb0233a108aad73fab0d0903446a3305ceb" exitCode=1 Feb 19 03:03:22.327630 master-0 kubenswrapper[4169]: I0219 03:03:22.327595 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:22.327690 master-0 kubenswrapper[4169]: I0219 03:03:22.327610 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"53d32d6e913448c501ea08b87db55bb0233a108aad73fab0d0903446a3305ceb"} Feb 19 03:03:22.327755 master-0 kubenswrapper[4169]: I0219 03:03:22.327697 4169 scope.go:117] "RemoveContainer" containerID="b484a3e16e1150999d6572eb5c0f1d44cfd715ab5fadfe3ef26dc7255237f8f0" Feb 19 03:03:22.327859 master-0 kubenswrapper[4169]: I0219 03:03:22.327755 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:22.328897 master-0 kubenswrapper[4169]: I0219 03:03:22.328857 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:22.328897 master-0 kubenswrapper[4169]: I0219 03:03:22.328893 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:22.329103 master-0 kubenswrapper[4169]: I0219 03:03:22.328909 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:22.329103 master-0 kubenswrapper[4169]: I0219 03:03:22.328964 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:22.329103 master-0 kubenswrapper[4169]: I0219 03:03:22.328986 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:22.329103 master-0 kubenswrapper[4169]: I0219 03:03:22.329001 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:22.329512 master-0 kubenswrapper[4169]: I0219 03:03:22.329457 4169 scope.go:117] "RemoveContainer" containerID="53d32d6e913448c501ea08b87db55bb0233a108aad73fab0d0903446a3305ceb" Feb 19 03:03:22.330239 master-0 kubenswrapper[4169]: E0219 03:03:22.329624 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 19 03:03:22.337339 master-0 kubenswrapper[4169]: E0219 03:03:22.337187 4169 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189586c0262f8937\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189586c0262f8937 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:03:08.272650551 +0000 UTC m=+12.018842296,LastTimestamp:2026-02-19 03:03:22.329589044 +0000 UTC m=+26.075780789,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:03:23.074027 master-0 kubenswrapper[4169]: I0219 03:03:23.073929 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:23.331552 master-0 kubenswrapper[4169]: I0219 03:03:23.331410 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 19 03:03:24.076665 master-0 kubenswrapper[4169]: I0219 03:03:24.076599 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:24.216152 master-0 kubenswrapper[4169]: I0219 03:03:24.216070 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:24.217424 master-0 kubenswrapper[4169]: I0219 03:03:24.217380 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:24.217488 master-0 kubenswrapper[4169]: I0219 03:03:24.217427 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:24.217488 master-0 kubenswrapper[4169]: I0219 03:03:24.217440 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:24.217555 master-0 kubenswrapper[4169]: I0219 03:03:24.217491 4169 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:03:24.220370 master-0 kubenswrapper[4169]: E0219 03:03:24.220270 4169 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 19 03:03:24.220608 master-0 kubenswrapper[4169]: E0219 03:03:24.220389 4169 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 03:03:25.068897 master-0 kubenswrapper[4169]: I0219 03:03:25.068848 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:26.071779 master-0 kubenswrapper[4169]: I0219 03:03:26.071739 4169 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 03:03:26.657555 master-0 kubenswrapper[4169]: I0219 03:03:26.657473 4169 csr.go:257] certificate signing request csr-2zzns is issued Feb 19 03:03:26.953067 master-0 kubenswrapper[4169]: I0219 03:03:26.952960 4169 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 03:03:27.076051 master-0 kubenswrapper[4169]: I0219 03:03:27.075986 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:27.090154 master-0 kubenswrapper[4169]: I0219 03:03:27.090099 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:27.152080 master-0 kubenswrapper[4169]: I0219 03:03:27.152041 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:27.189996 master-0 kubenswrapper[4169]: E0219 03:03:27.189930 4169 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 19 03:03:27.431227 master-0 kubenswrapper[4169]: I0219 03:03:27.431174 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:27.431422 master-0 kubenswrapper[4169]: E0219 03:03:27.431243 4169 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 19 03:03:27.455176 master-0 kubenswrapper[4169]: I0219 03:03:27.455108 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:27.472339 master-0 kubenswrapper[4169]: I0219 03:03:27.472250 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:27.531350 master-0 kubenswrapper[4169]: I0219 03:03:27.531290 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:27.659315 master-0 kubenswrapper[4169]: I0219 03:03:27.659237 4169 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-20 02:55:16 +0000 UTC, rotation deadline is 2026-02-19 20:35:22.713400905 +0000 UTC Feb 19 03:03:27.659315 master-0 kubenswrapper[4169]: I0219 03:03:27.659306 4169 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h31m55.054099885s for next certificate rotation Feb 19 03:03:27.809453 master-0 kubenswrapper[4169]: I0219 03:03:27.809322 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:27.809453 master-0 kubenswrapper[4169]: E0219 03:03:27.809370 4169 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 19 03:03:27.907227 master-0 kubenswrapper[4169]: I0219 03:03:27.907134 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:27.922362 master-0 kubenswrapper[4169]: I0219 03:03:27.922288 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 
03:03:27.987110 master-0 kubenswrapper[4169]: I0219 03:03:27.987046 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:28.265120 master-0 kubenswrapper[4169]: I0219 03:03:28.265056 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:28.265120 master-0 kubenswrapper[4169]: E0219 03:03:28.265088 4169 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 19 03:03:28.390452 master-0 kubenswrapper[4169]: I0219 03:03:28.390347 4169 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 03:03:28.823277 master-0 kubenswrapper[4169]: I0219 03:03:28.823211 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:28.838312 master-0 kubenswrapper[4169]: I0219 03:03:28.838242 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:28.894721 master-0 kubenswrapper[4169]: I0219 03:03:28.894677 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:29.169750 master-0 kubenswrapper[4169]: I0219 03:03:29.169658 4169 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 19 03:03:29.169750 master-0 kubenswrapper[4169]: E0219 03:03:29.169698 4169 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 19 03:03:30.215118 master-0 kubenswrapper[4169]: I0219 03:03:30.215050 4169 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 03:03:31.221487 master-0 kubenswrapper[4169]: I0219 03:03:31.221418 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:31.222386 master-0 kubenswrapper[4169]: I0219 03:03:31.222352 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:31.222386 master-0 kubenswrapper[4169]: I0219 03:03:31.222388 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:31.222497 master-0 kubenswrapper[4169]: I0219 03:03:31.222396 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:31.222497 master-0 kubenswrapper[4169]: I0219 03:03:31.222438 4169 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:03:31.226322 master-0 kubenswrapper[4169]: E0219 03:03:31.226281 4169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Feb 19 03:03:31.230727 master-0 kubenswrapper[4169]: I0219 03:03:31.230651 4169 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 19 03:03:31.230871 master-0 kubenswrapper[4169]: E0219 03:03:31.230727 4169 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Feb 19 03:03:31.241701 master-0 kubenswrapper[4169]: E0219 03:03:31.241660 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:31.341884 master-0 kubenswrapper[4169]: E0219 03:03:31.341788 4169 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:31.442697 master-0 kubenswrapper[4169]: E0219 03:03:31.442609 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:31.543849 master-0 kubenswrapper[4169]: E0219 03:03:31.543714 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:31.644748 master-0 kubenswrapper[4169]: E0219 03:03:31.644657 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:31.745767 master-0 kubenswrapper[4169]: E0219 03:03:31.745651 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:31.846670 master-0 kubenswrapper[4169]: E0219 03:03:31.846484 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:31.947723 master-0 kubenswrapper[4169]: E0219 03:03:31.947638 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:32.047875 master-0 kubenswrapper[4169]: E0219 03:03:32.047809 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:32.096018 master-0 kubenswrapper[4169]: I0219 03:03:32.095961 4169 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 19 03:03:32.109885 master-0 kubenswrapper[4169]: I0219 03:03:32.109708 4169 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 03:03:32.148686 master-0 kubenswrapper[4169]: E0219 03:03:32.148572 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:32.249088 master-0 kubenswrapper[4169]: E0219 03:03:32.248999 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:32.349290 master-0 kubenswrapper[4169]: E0219 03:03:32.349137 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:32.450088 master-0 kubenswrapper[4169]: E0219 03:03:32.449967 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:32.551054 master-0 kubenswrapper[4169]: E0219 03:03:32.550973 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:32.652148 master-0 kubenswrapper[4169]: E0219 03:03:32.652061 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:32.752965 master-0 kubenswrapper[4169]: E0219 03:03:32.752849 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:32.853767 master-0 kubenswrapper[4169]: E0219 03:03:32.853702 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:32.954583 master-0 kubenswrapper[4169]: E0219 03:03:32.954516 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:33.054967 master-0 kubenswrapper[4169]: E0219 03:03:33.054762 4169 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:33.155721 master-0 kubenswrapper[4169]: E0219 03:03:33.155647 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:33.227021 master-0 kubenswrapper[4169]: I0219 03:03:33.226904 4169 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:03:33.228451 master-0 kubenswrapper[4169]: I0219 03:03:33.228391 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:03:33.228451 master-0 kubenswrapper[4169]: I0219 03:03:33.228443 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:03:33.228451 master-0 kubenswrapper[4169]: I0219 03:03:33.228456 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:03:33.229039 master-0 kubenswrapper[4169]: I0219 03:03:33.228989 4169 scope.go:117] "RemoveContainer" containerID="53d32d6e913448c501ea08b87db55bb0233a108aad73fab0d0903446a3305ceb" Feb 19 03:03:33.229220 master-0 kubenswrapper[4169]: E0219 03:03:33.229164 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 19 03:03:33.255843 master-0 kubenswrapper[4169]: E0219 03:03:33.255743 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:33.356507 master-0 kubenswrapper[4169]: E0219 03:03:33.356309 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:33.457482 master-0 kubenswrapper[4169]: E0219 03:03:33.457388 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:33.557808 master-0 kubenswrapper[4169]: E0219 03:03:33.557702 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:33.610467 master-0 kubenswrapper[4169]: I0219 03:03:33.610327 4169 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 03:03:33.658296 master-0 kubenswrapper[4169]: E0219 03:03:33.658168 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:33.759389 master-0 kubenswrapper[4169]: E0219 03:03:33.759225 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:33.859696 master-0 kubenswrapper[4169]: E0219 03:03:33.859592 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:33.960776 master-0 kubenswrapper[4169]: E0219 03:03:33.960656 4169 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:03:33.986145 master-0 kubenswrapper[4169]: I0219 03:03:33.986050 4169 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Feb 19 03:03:34.072592 master-0 kubenswrapper[4169]: I0219 03:03:34.072480 4169 apiserver.go:52] "Watching apiserver" Feb 19 03:03:34.076880 master-0 kubenswrapper[4169]: I0219 03:03:34.076802 4169 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 03:03:34.077196 master-0 kubenswrapper[4169]: I0219 03:03:34.077130 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt","openshift-network-operator/network-operator-7d7db75979-jbztp"] Feb 19 03:03:34.077725 master-0 kubenswrapper[4169]: I0219 03:03:34.077663 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:03:34.077870 master-0 kubenswrapper[4169]: I0219 03:03:34.077754 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.080222 master-0 kubenswrapper[4169]: I0219 03:03:34.080160 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 03:03:34.080611 master-0 kubenswrapper[4169]: I0219 03:03:34.080554 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 03:03:34.081228 master-0 kubenswrapper[4169]: I0219 03:03:34.081166 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 03:03:34.081429 master-0 kubenswrapper[4169]: I0219 03:03:34.081307 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 03:03:34.081429 master-0 kubenswrapper[4169]: I0219 03:03:34.081373 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 03:03:34.081594 master-0 kubenswrapper[4169]: I0219 03:03:34.081554 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 03:03:34.175599 master-0 kubenswrapper[4169]: I0219 03:03:34.175495 4169 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 19 03:03:34.217907 master-0 kubenswrapper[4169]: I0219 03:03:34.217787 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.217907 master-0 kubenswrapper[4169]: I0219 03:03:34.217857 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-service-ca\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.218085 master-0 kubenswrapper[4169]: I0219 03:03:34.217910 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.218085 master-0 kubenswrapper[4169]: I0219 03:03:34.217956 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-host-etc-kube\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:03:34.218085 master-0 kubenswrapper[4169]: I0219 03:03:34.218003 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbffz\" (UniqueName: \"kubernetes.io/projected/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-kube-api-access-gbffz\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:03:34.218085 master-0 kubenswrapper[4169]: I0219 03:03:34.218056 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.218365 master-0 kubenswrapper[4169]: I0219 03:03:34.218104 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.218365 master-0 kubenswrapper[4169]: I0219 03:03:34.218182 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-metrics-tls\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:03:34.318508 master-0 kubenswrapper[4169]: I0219 03:03:34.318413 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-service-ca\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.319158 master-0 kubenswrapper[4169]: I0219 03:03:34.318640 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.319158 master-0 kubenswrapper[4169]: I0219 03:03:34.318702 4169 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-host-etc-kube\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:03:34.319158 master-0 kubenswrapper[4169]: I0219 03:03:34.318747 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbffz\" (UniqueName: \"kubernetes.io/projected/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-kube-api-access-gbffz\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:03:34.319158 master-0 kubenswrapper[4169]: I0219 03:03:34.318795 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.319158 master-0 kubenswrapper[4169]: I0219 03:03:34.318777 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.319158 master-0 kubenswrapper[4169]: I0219 03:03:34.318877 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-metrics-tls\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:03:34.319158 master-0 kubenswrapper[4169]: I0219 03:03:34.318913 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.319158 master-0 kubenswrapper[4169]: I0219 03:03:34.318973 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.319158 master-0 kubenswrapper[4169]: E0219 03:03:34.319075 4169 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:34.320104 master-0 kubenswrapper[4169]: E0219 03:03:34.319173 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:03:34.819122745 +0000 UTC m=+38.565314510 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:34.320104 master-0 kubenswrapper[4169]: I0219 03:03:34.319764 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.320104 master-0 kubenswrapper[4169]: I0219 03:03:34.319793 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-host-etc-kube\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:03:34.320521 master-0 kubenswrapper[4169]: I0219 03:03:34.320310 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-service-ca\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.320521 master-0 kubenswrapper[4169]: I0219 03:03:34.320485 4169 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 03:03:34.328687 master-0 kubenswrapper[4169]: I0219 03:03:34.328392 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-metrics-tls\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:03:34.349481 master-0 kubenswrapper[4169]: I0219 03:03:34.349414 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbffz\" (UniqueName: \"kubernetes.io/projected/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-kube-api-access-gbffz\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:03:34.354205 master-0 kubenswrapper[4169]: I0219 03:03:34.354125 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.401376 master-0 kubenswrapper[4169]: I0219 03:03:34.401309 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:03:34.418090 master-0 kubenswrapper[4169]: W0219 03:03:34.417816 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc791d8d0_6d78_4cdc_bac2_aa39bd3aae21.slice/crio-270ee55e27188738f11e238739f68e6ee4947520aca0c90df01eaa05dc4ab81c WatchSource:0}: Error finding container 270ee55e27188738f11e238739f68e6ee4947520aca0c90df01eaa05dc4ab81c: Status 404 returned error can't find the container with id 270ee55e27188738f11e238739f68e6ee4947520aca0c90df01eaa05dc4ab81c Feb 19 03:03:34.822470 master-0 kubenswrapper[4169]: I0219 03:03:34.822386 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:34.822759 master-0 kubenswrapper[4169]: E0219 03:03:34.822520 4169 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:34.822759 master-0 kubenswrapper[4169]: E0219 03:03:34.822597 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:03:35.822576755 +0000 UTC m=+39.568768500 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:35.359431 master-0 kubenswrapper[4169]: I0219 03:03:35.359365 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" event={"ID":"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21","Type":"ContainerStarted","Data":"270ee55e27188738f11e238739f68e6ee4947520aca0c90df01eaa05dc4ab81c"} Feb 19 03:03:35.522611 master-0 kubenswrapper[4169]: I0219 03:03:35.522565 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-tw8v2"] Feb 19 03:03:35.522888 master-0 kubenswrapper[4169]: I0219 03:03:35.522839 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.525670 master-0 kubenswrapper[4169]: I0219 03:03:35.525422 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Feb 19 03:03:35.525887 master-0 kubenswrapper[4169]: I0219 03:03:35.525849 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Feb 19 03:03:35.525887 master-0 kubenswrapper[4169]: I0219 03:03:35.525876 4169 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Feb 19 03:03:35.526242 master-0 kubenswrapper[4169]: I0219 03:03:35.526206 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Feb 19 03:03:35.627612 master-0 kubenswrapper[4169]: I0219 03:03:35.627569 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-resolv-conf\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.627612 master-0 kubenswrapper[4169]: I0219 03:03:35.627606 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-ca-bundle\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.627839 master-0 kubenswrapper[4169]: I0219 03:03:35.627622 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-sno-bootstrap-files\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.627839 master-0 kubenswrapper[4169]: I0219 03:03:35.627654 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4slx\" (UniqueName: \"kubernetes.io/projected/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-kube-api-access-v4slx\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.627839 master-0 kubenswrapper[4169]: I0219 03:03:35.627685 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-var-run-resolv-conf\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.728665 master-0 kubenswrapper[4169]: I0219 03:03:35.728599 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4slx\" (UniqueName: \"kubernetes.io/projected/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-kube-api-access-v4slx\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " 
pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.728898 master-0 kubenswrapper[4169]: I0219 03:03:35.728680 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-var-run-resolv-conf\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.728898 master-0 kubenswrapper[4169]: I0219 03:03:35.728733 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-resolv-conf\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.729020 master-0 kubenswrapper[4169]: I0219 03:03:35.728960 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-var-run-resolv-conf\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.729072 master-0 kubenswrapper[4169]: I0219 03:03:35.729041 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-resolv-conf\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.729336 master-0 kubenswrapper[4169]: I0219 03:03:35.729298 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-ca-bundle\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.729399 master-0 kubenswrapper[4169]: I0219 03:03:35.729352 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-sno-bootstrap-files\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.729478 master-0 kubenswrapper[4169]: I0219 03:03:35.729436 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-sno-bootstrap-files\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.729529 master-0 kubenswrapper[4169]: I0219 03:03:35.729482 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-ca-bundle\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.750533 master-0 kubenswrapper[4169]: I0219 
03:03:35.750466 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4slx\" (UniqueName: \"kubernetes.io/projected/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-kube-api-access-v4slx\") pod \"assisted-installer-controller-tw8v2\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.830145 master-0 kubenswrapper[4169]: I0219 03:03:35.830059 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:35.830375 master-0 kubenswrapper[4169]: E0219 03:03:35.830298 4169 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:35.830422 master-0 kubenswrapper[4169]: E0219 03:03:35.830403 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:03:37.830368057 +0000 UTC m=+41.576559832 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:35.855736 master-0 kubenswrapper[4169]: I0219 03:03:35.855673 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:35.867158 master-0 kubenswrapper[4169]: W0219 03:03:35.867094 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e244dcb_df20_4a7c_bc0a_14ba63c54a9f.slice/crio-0cd5bff57449ca5fcd515236a8abe6e347dc3b6ea4ab8480dc9821e2c6351f26 WatchSource:0}: Error finding container 0cd5bff57449ca5fcd515236a8abe6e347dc3b6ea4ab8480dc9821e2c6351f26: Status 404 returned error can't find the container with id 0cd5bff57449ca5fcd515236a8abe6e347dc3b6ea4ab8480dc9821e2c6351f26 Feb 19 03:03:36.363318 master-0 kubenswrapper[4169]: I0219 03:03:36.363225 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-tw8v2" event={"ID":"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f","Type":"ContainerStarted","Data":"0cd5bff57449ca5fcd515236a8abe6e347dc3b6ea4ab8480dc9821e2c6351f26"} Feb 19 03:03:37.844644 master-0 kubenswrapper[4169]: I0219 03:03:37.844551 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:37.845210 master-0 kubenswrapper[4169]: E0219 03:03:37.844751 4169 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:37.845210 master-0 kubenswrapper[4169]: E0219 03:03:37.844837 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:03:41.844812811 +0000 UTC m=+45.591004586 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:39.267006 master-0 kubenswrapper[4169]: I0219 03:03:39.266930 4169 csr.go:261] certificate signing request csr-wd8q2 is approved, waiting to be issued Feb 19 03:03:39.295962 master-0 kubenswrapper[4169]: I0219 03:03:39.295806 4169 csr.go:257] certificate signing request csr-wd8q2 is issued Feb 19 03:03:40.296993 master-0 kubenswrapper[4169]: I0219 03:03:40.296883 4169 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-20 02:55:16 +0000 UTC, rotation deadline is 2026-02-19 23:15:08.727065505 +0000 UTC Feb 19 03:03:40.296993 master-0 kubenswrapper[4169]: I0219 03:03:40.296930 4169 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h11m28.430138613s for next certificate rotation Feb 19 03:03:40.374782 master-0 kubenswrapper[4169]: I0219 03:03:40.374105 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" event={"ID":"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21","Type":"ContainerStarted","Data":"8b3bceeaced74d609ab5cae3f8bcf4b942c0f6e35aacd59b863ae5c7bc32a8c0"} Feb 19 03:03:41.297566 master-0 kubenswrapper[4169]: I0219 03:03:41.297476 4169 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-20 02:55:16 +0000 UTC, rotation deadline is 2026-02-19 20:17:15.410840395 +0000 UTC Feb 19 03:03:41.297566 master-0 kubenswrapper[4169]: I0219 03:03:41.297526 4169 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h13m34.113318029s for next certificate rotation Feb 19 03:03:41.789470 master-0 kubenswrapper[4169]: I0219 03:03:41.789241 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" podStartSLOduration=4.9181454030000005 podStartE2EDuration="9.789212649s" podCreationTimestamp="2026-02-19 03:03:32 +0000 UTC" firstStartedPulling="2026-02-19 03:03:34.420433575 +0000 UTC m=+38.166625350" lastFinishedPulling="2026-02-19 03:03:39.291500871 +0000 UTC m=+43.037692596" observedRunningTime="2026-02-19 03:03:40.416999141 +0000 UTC m=+44.163190876" watchObservedRunningTime="2026-02-19 03:03:41.789212649 +0000 UTC m=+45.535404444" Feb 19 03:03:41.789725 master-0 kubenswrapper[4169]: I0219 03:03:41.789610 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-b4t5r"] Feb 19 03:03:41.790157 master-0 kubenswrapper[4169]: I0219 03:03:41.790103 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-b4t5r" Feb 19 03:03:41.878978 master-0 kubenswrapper[4169]: I0219 03:03:41.878905 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:41.879177 master-0 kubenswrapper[4169]: I0219 03:03:41.878987 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znzxp\" (UniqueName: \"kubernetes.io/projected/bd7240e7-9923-4485-a055-0e1364954af9-kube-api-access-znzxp\") pod \"mtu-prober-b4t5r\" (UID: \"bd7240e7-9923-4485-a055-0e1364954af9\") " pod="openshift-network-operator/mtu-prober-b4t5r" Feb 19 03:03:41.879177 master-0 kubenswrapper[4169]: E0219 03:03:41.879065 4169 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:41.879177 master-0 kubenswrapper[4169]: E0219 03:03:41.879153 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:03:49.879130832 +0000 UTC m=+53.625322587 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:41.979928 master-0 kubenswrapper[4169]: I0219 03:03:41.979867 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znzxp\" (UniqueName: \"kubernetes.io/projected/bd7240e7-9923-4485-a055-0e1364954af9-kube-api-access-znzxp\") pod \"mtu-prober-b4t5r\" (UID: \"bd7240e7-9923-4485-a055-0e1364954af9\") " pod="openshift-network-operator/mtu-prober-b4t5r" Feb 19 03:03:42.010198 master-0 kubenswrapper[4169]: I0219 03:03:42.010138 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znzxp\" (UniqueName: \"kubernetes.io/projected/bd7240e7-9923-4485-a055-0e1364954af9-kube-api-access-znzxp\") pod \"mtu-prober-b4t5r\" (UID: \"bd7240e7-9923-4485-a055-0e1364954af9\") " pod="openshift-network-operator/mtu-prober-b4t5r" Feb 19 03:03:42.105232 master-0 kubenswrapper[4169]: I0219 03:03:42.105094 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-b4t5r" Feb 19 03:03:43.398510 master-0 kubenswrapper[4169]: W0219 03:03:43.398465 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd7240e7_9923_4485_a055_0e1364954af9.slice/crio-283bb664d05497a1a2860aa4ed09016f970c031a28a0d52e1f75f9e5c4763c8d WatchSource:0}: Error finding container 283bb664d05497a1a2860aa4ed09016f970c031a28a0d52e1f75f9e5c4763c8d: Status 404 returned error can't find the container with id 283bb664d05497a1a2860aa4ed09016f970c031a28a0d52e1f75f9e5c4763c8d Feb 19 03:03:44.386445 master-0 kubenswrapper[4169]: I0219 03:03:44.386352 4169 generic.go:334] "Generic (PLEG): container finished" podID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerID="23060c94450b0089de5446d5e52f8e87d35f8af868d80c88ad4e43f6b97218f6" exitCode=0 Feb 19 03:03:44.386445 master-0 kubenswrapper[4169]: I0219 03:03:44.386432 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-tw8v2" event={"ID":"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f","Type":"ContainerDied","Data":"23060c94450b0089de5446d5e52f8e87d35f8af868d80c88ad4e43f6b97218f6"} Feb 19 03:03:44.389009 master-0 kubenswrapper[4169]: I0219 03:03:44.388958 4169 generic.go:334] "Generic (PLEG): container finished" podID="bd7240e7-9923-4485-a055-0e1364954af9" containerID="ea7babb48d9acc19a51058d43972a14b4a1ed0d3f15fadbbc95a57a23953a57e" exitCode=0 Feb 19 03:03:44.389138 master-0 kubenswrapper[4169]: I0219 03:03:44.389009 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-b4t5r" event={"ID":"bd7240e7-9923-4485-a055-0e1364954af9","Type":"ContainerDied","Data":"ea7babb48d9acc19a51058d43972a14b4a1ed0d3f15fadbbc95a57a23953a57e"} Feb 19 03:03:44.389138 master-0 kubenswrapper[4169]: I0219 03:03:44.389065 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-b4t5r" event={"ID":"bd7240e7-9923-4485-a055-0e1364954af9","Type":"ContainerStarted","Data":"283bb664d05497a1a2860aa4ed09016f970c031a28a0d52e1f75f9e5c4763c8d"} Feb 19 03:03:45.419378 master-0 kubenswrapper[4169]: I0219 03:03:45.419311 4169 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-b4t5r" Feb 19 03:03:45.427401 master-0 kubenswrapper[4169]: I0219 03:03:45.427343 4169 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:45.503127 master-0 kubenswrapper[4169]: I0219 03:03:45.503009 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-ca-bundle\") pod \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " Feb 19 03:03:45.503127 master-0 kubenswrapper[4169]: I0219 03:03:45.503114 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-sno-bootstrap-files\") pod \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " Feb 19 03:03:45.503479 master-0 kubenswrapper[4169]: I0219 03:03:45.503174 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4slx\" (UniqueName: \"kubernetes.io/projected/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-kube-api-access-v4slx\") pod \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " Feb 19 03:03:45.503479 master-0 kubenswrapper[4169]: I0219 03:03:45.503208 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" (UID: "6e244dcb-df20-4a7c-bc0a-14ba63c54a9f"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:03:45.503479 master-0 kubenswrapper[4169]: I0219 03:03:45.503227 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znzxp\" (UniqueName: \"kubernetes.io/projected/bd7240e7-9923-4485-a055-0e1364954af9-kube-api-access-znzxp\") pod \"bd7240e7-9923-4485-a055-0e1364954af9\" (UID: \"bd7240e7-9923-4485-a055-0e1364954af9\") " Feb 19 03:03:45.503479 master-0 kubenswrapper[4169]: I0219 03:03:45.503298 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-resolv-conf\") pod \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " Feb 19 03:03:45.503479 master-0 kubenswrapper[4169]: I0219 03:03:45.503343 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" (UID: "6e244dcb-df20-4a7c-bc0a-14ba63c54a9f"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:03:45.503479 master-0 kubenswrapper[4169]: I0219 03:03:45.503434 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" (UID: "6e244dcb-df20-4a7c-bc0a-14ba63c54a9f"). InnerVolumeSpecName "host-resolv-conf". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:03:45.503479 master-0 kubenswrapper[4169]: I0219 03:03:45.503487 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-var-run-resolv-conf\") pod \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\" (UID: \"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f\") " Feb 19 03:03:45.503930 master-0 kubenswrapper[4169]: I0219 03:03:45.503486 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" (UID: "6e244dcb-df20-4a7c-bc0a-14ba63c54a9f"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:03:45.503930 master-0 kubenswrapper[4169]: I0219 03:03:45.503587 4169 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Feb 19 03:03:45.503930 master-0 kubenswrapper[4169]: I0219 03:03:45.503613 4169 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:03:45.503930 master-0 kubenswrapper[4169]: I0219 03:03:45.503632 4169 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 19 03:03:45.508493 master-0 kubenswrapper[4169]: I0219 03:03:45.508418 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7240e7-9923-4485-a055-0e1364954af9-kube-api-access-znzxp" (OuterVolumeSpecName: "kube-api-access-znzxp") pod "bd7240e7-9923-4485-a055-0e1364954af9" (UID: "bd7240e7-9923-4485-a055-0e1364954af9"). InnerVolumeSpecName "kube-api-access-znzxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:03:45.508852 master-0 kubenswrapper[4169]: I0219 03:03:45.508804 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-kube-api-access-v4slx" (OuterVolumeSpecName: "kube-api-access-v4slx") pod "6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" (UID: "6e244dcb-df20-4a7c-bc0a-14ba63c54a9f"). InnerVolumeSpecName "kube-api-access-v4slx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:03:45.604970 master-0 kubenswrapper[4169]: I0219 03:03:45.604860 4169 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4slx\" (UniqueName: \"kubernetes.io/projected/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-kube-api-access-v4slx\") on node \"master-0\" DevicePath \"\"" Feb 19 03:03:45.604970 master-0 kubenswrapper[4169]: I0219 03:03:45.604917 4169 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znzxp\" (UniqueName: \"kubernetes.io/projected/bd7240e7-9923-4485-a055-0e1364954af9-kube-api-access-znzxp\") on node \"master-0\" DevicePath \"\"" Feb 19 03:03:45.604970 master-0 kubenswrapper[4169]: I0219 03:03:45.604934 4169 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/6e244dcb-df20-4a7c-bc0a-14ba63c54a9f-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 19 03:03:46.242568 master-0 kubenswrapper[4169]: I0219 03:03:46.242491 4169 scope.go:117] "RemoveContainer" containerID="53d32d6e913448c501ea08b87db55bb0233a108aad73fab0d0903446a3305ceb" Feb 19 03:03:46.242568 master-0 kubenswrapper[4169]: I0219 03:03:46.242495 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 19 03:03:46.393544 master-0 kubenswrapper[4169]: I0219 03:03:46.393299 4169 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:03:46.393791 master-0 kubenswrapper[4169]: I0219 03:03:46.393306 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-tw8v2" event={"ID":"6e244dcb-df20-4a7c-bc0a-14ba63c54a9f","Type":"ContainerDied","Data":"0cd5bff57449ca5fcd515236a8abe6e347dc3b6ea4ab8480dc9821e2c6351f26"} Feb 19 03:03:46.393791 master-0 kubenswrapper[4169]: I0219 03:03:46.393660 4169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cd5bff57449ca5fcd515236a8abe6e347dc3b6ea4ab8480dc9821e2c6351f26" Feb 19 03:03:46.394913 master-0 kubenswrapper[4169]: I0219 03:03:46.394855 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-b4t5r" event={"ID":"bd7240e7-9923-4485-a055-0e1364954af9","Type":"ContainerDied","Data":"283bb664d05497a1a2860aa4ed09016f970c031a28a0d52e1f75f9e5c4763c8d"} Feb 19 03:03:46.394913 master-0 kubenswrapper[4169]: I0219 03:03:46.394881 4169 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-b4t5r" Feb 19 03:03:46.394913 master-0 kubenswrapper[4169]: I0219 03:03:46.394897 4169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="283bb664d05497a1a2860aa4ed09016f970c031a28a0d52e1f75f9e5c4763c8d" Feb 19 03:03:46.776518 master-0 kubenswrapper[4169]: I0219 03:03:46.776453 4169 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-b4t5r"] Feb 19 03:03:46.782245 master-0 kubenswrapper[4169]: I0219 03:03:46.782196 4169 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-b4t5r"] Feb 19 03:03:47.232923 master-0 kubenswrapper[4169]: I0219 03:03:47.232844 4169 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7240e7-9923-4485-a055-0e1364954af9" path="/var/lib/kubelet/pods/bd7240e7-9923-4485-a055-0e1364954af9/volumes" Feb 19 03:03:47.400093 master-0 kubenswrapper[4169]: I0219 03:03:47.400005 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 19 03:03:47.400558 master-0 kubenswrapper[4169]: I0219 03:03:47.400514 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"5063a55beab9e17c44bf467460af64eb399204406812c9ae4e396f59fae30a15"} Feb 19 03:03:49.934688 master-0 kubenswrapper[4169]: I0219 03:03:49.934619 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:03:49.935291 master-0 kubenswrapper[4169]: E0219 03:03:49.934747 4169 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:49.935291 master-0 kubenswrapper[4169]: E0219 03:03:49.934848 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:04:05.934827434 +0000 UTC m=+69.681019169 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:03:51.648879 master-0 kubenswrapper[4169]: I0219 03:03:51.648813 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=5.648797249 podStartE2EDuration="5.648797249s" podCreationTimestamp="2026-02-19 03:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:03:47.413298586 +0000 UTC m=+51.159490341" watchObservedRunningTime="2026-02-19 03:03:51.648797249 +0000 UTC m=+55.394988974" Feb 19 03:03:51.649443 master-0 kubenswrapper[4169]: I0219 03:03:51.648925 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4lzdj"] Feb 19 03:03:51.649443 master-0 kubenswrapper[4169]: E0219 03:03:51.648989 4169 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerName="assisted-installer-controller" Feb 19 03:03:51.649443 master-0 kubenswrapper[4169]: I0219 03:03:51.649001 4169 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerName="assisted-installer-controller" Feb 19 03:03:51.649443 master-0 kubenswrapper[4169]: E0219 03:03:51.649008 4169 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7240e7-9923-4485-a055-0e1364954af9" containerName="prober" Feb 19 03:03:51.649443 master-0 kubenswrapper[4169]: I0219 03:03:51.649015 4169 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7240e7-9923-4485-a055-0e1364954af9" containerName="prober" Feb 19 03:03:51.649443 master-0 kubenswrapper[4169]: I0219 03:03:51.649034 4169 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7240e7-9923-4485-a055-0e1364954af9" containerName="prober" Feb 19 03:03:51.649443 master-0 kubenswrapper[4169]: I0219 03:03:51.649042 4169 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerName="assisted-installer-controller" Feb 19 03:03:51.649443 master-0 kubenswrapper[4169]: I0219 03:03:51.649187 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.651290 master-0 kubenswrapper[4169]: I0219 03:03:51.651219 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 03:03:51.652409 master-0 kubenswrapper[4169]: I0219 03:03:51.652383 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 03:03:51.652548 master-0 kubenswrapper[4169]: I0219 03:03:51.652475 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 03:03:51.652930 master-0 kubenswrapper[4169]: I0219 03:03:51.652891 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 03:03:51.750631 master-0 kubenswrapper[4169]: I0219 03:03:51.750534 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.750631 master-0 kubenswrapper[4169]: I0219 03:03:51.750590 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-conf-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.750631 master-0 kubenswrapper[4169]: I0219 03:03:51.750612 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-daemon-config\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.750631 master-0 kubenswrapper[4169]: I0219 03:03:51.750632 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-multus-certs\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.750949 master-0 kubenswrapper[4169]: I0219 03:03:51.750655 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cni-binary-copy\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.750949 master-0 kubenswrapper[4169]: I0219 03:03:51.750679 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-socket-dir-parent\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.750949 master-0 kubenswrapper[4169]: I0219 03:03:51.750697 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-netns\") pod \"multus-4lzdj\" (UID: 
\"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.750949 master-0 kubenswrapper[4169]: I0219 03:03:51.750737 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cnibin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.750949 master-0 kubenswrapper[4169]: I0219 03:03:51.750760 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64lwt\" (UniqueName: \"kubernetes.io/projected/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-kube-api-access-64lwt\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.750949 master-0 kubenswrapper[4169]: I0219 03:03:51.750832 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-k8s-cni-cncf-io\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.750949 master-0 kubenswrapper[4169]: I0219 03:03:51.750913 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-os-release\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.750949 master-0 kubenswrapper[4169]: I0219 03:03:51.750947 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-bin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.751195 master-0 kubenswrapper[4169]: I0219 03:03:51.750975 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-hostroot\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.751195 master-0 kubenswrapper[4169]: I0219 03:03:51.751005 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-kubelet\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.751195 master-0 kubenswrapper[4169]: I0219 03:03:51.751043 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-system-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.751195 master-0 kubenswrapper[4169]: I0219 03:03:51.751085 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-multus\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.751195 master-0 kubenswrapper[4169]: I0219 03:03:51.751128 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-etc-kubernetes\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.851941 master-0 kubenswrapper[4169]: I0219 03:03:51.851835 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cnibin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.851941 master-0 kubenswrapper[4169]: I0219 03:03:51.851888 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-netns\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852365 master-0 kubenswrapper[4169]: I0219 03:03:51.852044 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64lwt\" (UniqueName: \"kubernetes.io/projected/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-kube-api-access-64lwt\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852365 master-0 kubenswrapper[4169]: I0219 03:03:51.852053 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cnibin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852365 master-0 kubenswrapper[4169]: I0219 03:03:51.852066 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-netns\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852365 master-0 kubenswrapper[4169]: I0219 03:03:51.852289 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-k8s-cni-cncf-io\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852365 master-0 kubenswrapper[4169]: I0219 03:03:51.852329 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-os-release\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852365 master-0 kubenswrapper[4169]: I0219 03:03:51.852362 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-bin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " 
pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852858 master-0 kubenswrapper[4169]: I0219 03:03:51.852390 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-hostroot\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852858 master-0 kubenswrapper[4169]: I0219 03:03:51.852420 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-system-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852858 master-0 kubenswrapper[4169]: I0219 03:03:51.852525 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-system-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852858 master-0 kubenswrapper[4169]: I0219 03:03:51.852563 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-bin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852858 master-0 kubenswrapper[4169]: I0219 03:03:51.852628 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-os-release\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852858 master-0 kubenswrapper[4169]: I0219 03:03:51.852634 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-multus\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852858 master-0 kubenswrapper[4169]: I0219 03:03:51.852676 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-hostroot\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852858 master-0 kubenswrapper[4169]: I0219 03:03:51.852709 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-k8s-cni-cncf-io\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852858 master-0 kubenswrapper[4169]: I0219 03:03:51.852715 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-kubelet\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.852858 master-0 kubenswrapper[4169]: I0219 03:03:51.852754 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-multus\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.852890 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-kubelet\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.852902 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-etc-kubernetes\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.852938 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-etc-kubernetes\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.852954 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.853001 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-conf-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.853046 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-daemon-config\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.853090 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-multus-certs\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.853148 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-conf-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.853236 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-cni-dir\") pod \"multus-4lzdj\" 
(UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.853293 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-multus-certs\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.853692 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cni-binary-copy\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.853710 master-0 kubenswrapper[4169]: I0219 03:03:51.853721 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-socket-dir-parent\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.854648 master-0 kubenswrapper[4169]: I0219 03:03:51.853778 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-socket-dir-parent\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.854737 master-0 kubenswrapper[4169]: I0219 03:03:51.854655 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-daemon-config\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.855012 master-0 kubenswrapper[4169]: I0219 03:03:51.854949 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-bs5qd"] Feb 19 03:03:51.855166 master-0 kubenswrapper[4169]: I0219 03:03:51.855117 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cni-binary-copy\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.855501 master-0 kubenswrapper[4169]: I0219 03:03:51.855450 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:51.858317 master-0 kubenswrapper[4169]: I0219 03:03:51.858227 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 19 03:03:51.859597 master-0 kubenswrapper[4169]: I0219 03:03:51.859547 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 03:03:51.873716 master-0 kubenswrapper[4169]: I0219 03:03:51.873663 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64lwt\" (UniqueName: \"kubernetes.io/projected/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-kube-api-access-64lwt\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:03:51.955293 master-0 kubenswrapper[4169]: I0219 03:03:51.955065 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-system-cni-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:51.955293 master-0 kubenswrapper[4169]: I0219 03:03:51.955138 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-binary-copy\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:51.955293 master-0 kubenswrapper[4169]: I0219 03:03:51.955171 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cnibin\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:51.955293 master-0 kubenswrapper[4169]: I0219 03:03:51.955203 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-os-release\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:51.955293 master-0 kubenswrapper[4169]: I0219 03:03:51.955236 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:51.955735 master-0 kubenswrapper[4169]: I0219 03:03:51.955323 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:51.955735 master-0 kubenswrapper[4169]: I0219 
03:03:51.955354 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5wsp\" (UniqueName: \"kubernetes.io/projected/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-kube-api-access-r5wsp\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:51.955735 master-0 kubenswrapper[4169]: I0219 03:03:51.955416 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-whereabouts-configmap\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:51.967285 master-0 kubenswrapper[4169]: I0219 03:03:51.967184 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4lzdj" Feb 19 03:03:52.056168 master-0 kubenswrapper[4169]: I0219 03:03:52.056052 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-whereabouts-configmap\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.056473 master-0 kubenswrapper[4169]: I0219 03:03:52.056203 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-system-cni-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.056473 master-0 kubenswrapper[4169]: I0219 03:03:52.056231 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cnibin\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.056473 master-0 kubenswrapper[4169]: I0219 03:03:52.056249 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-binary-copy\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.056473 master-0 kubenswrapper[4169]: I0219 03:03:52.056292 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-os-release\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.056473 master-0 kubenswrapper[4169]: I0219 03:03:52.056407 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-system-cni-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" 
Feb 19 03:03:52.056778 master-0 kubenswrapper[4169]: I0219 03:03:52.056492 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cnibin\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.056778 master-0 kubenswrapper[4169]: I0219 03:03:52.056589 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-os-release\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.056778 master-0 kubenswrapper[4169]: I0219 03:03:52.056663 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.056778 master-0 kubenswrapper[4169]: I0219 03:03:52.056704 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.056778 master-0 kubenswrapper[4169]: I0219 03:03:52.056727 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5wsp\" (UniqueName: \"kubernetes.io/projected/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-kube-api-access-r5wsp\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.057178 master-0 kubenswrapper[4169]: I0219 03:03:52.056920 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-whereabouts-configmap\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.057178 master-0 kubenswrapper[4169]: I0219 03:03:52.057003 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.057424 master-0 kubenswrapper[4169]: I0219 03:03:52.057381 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-binary-copy\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.057646 master-0 kubenswrapper[4169]: I0219 03:03:52.057612 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.072664 master-0 kubenswrapper[4169]: I0219 03:03:52.072611 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5wsp\" (UniqueName: \"kubernetes.io/projected/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-kube-api-access-r5wsp\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.169412 master-0 kubenswrapper[4169]: I0219 03:03:52.169308 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:03:52.188947 master-0 kubenswrapper[4169]: W0219 03:03:52.188882 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc8f6a27_3dd3_45e0_a206_9f19bbf99df7.slice/crio-acb5de46f3e25ef76d6a8af08f2a213b03e16ebf52f46ac28fa38e4361f6b5d6 WatchSource:0}: Error finding container acb5de46f3e25ef76d6a8af08f2a213b03e16ebf52f46ac28fa38e4361f6b5d6: Status 404 returned error can't find the container with id acb5de46f3e25ef76d6a8af08f2a213b03e16ebf52f46ac28fa38e4361f6b5d6 Feb 19 03:03:52.414181 master-0 kubenswrapper[4169]: I0219 03:03:52.414137 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bs5qd" event={"ID":"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7","Type":"ContainerStarted","Data":"acb5de46f3e25ef76d6a8af08f2a213b03e16ebf52f46ac28fa38e4361f6b5d6"} Feb 19 03:03:52.415851 master-0 kubenswrapper[4169]: I0219 03:03:52.415824 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4lzdj" event={"ID":"7fde19c2-64b1-409c-ad9c-2bb213a1cc74","Type":"ContainerStarted","Data":"adefbbde4867112d23ee79a46cdbf443364c4401d65d3a59d065817251804bf8"} Feb 19 03:03:52.641787 master-0 kubenswrapper[4169]: I0219 03:03:52.641695 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hspwc"] Feb 19 03:03:52.642639 master-0 kubenswrapper[4169]: I0219 03:03:52.642319 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:52.642639 master-0 kubenswrapper[4169]: E0219 03:03:52.642448 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:03:52.763682 master-0 kubenswrapper[4169]: I0219 03:03:52.763528 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:52.763682 master-0 kubenswrapper[4169]: I0219 03:03:52.763637 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46zzd\" (UniqueName: \"kubernetes.io/projected/6ae2cbe0-aa0a-4f26-994b-660fb962d995-kube-api-access-46zzd\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:52.864248 master-0 kubenswrapper[4169]: I0219 03:03:52.864188 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46zzd\" (UniqueName: \"kubernetes.io/projected/6ae2cbe0-aa0a-4f26-994b-660fb962d995-kube-api-access-46zzd\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:52.864248 master-0 kubenswrapper[4169]: I0219 03:03:52.864239 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:52.864523 master-0 kubenswrapper[4169]: E0219 03:03:52.864375 4169 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:03:52.864523 master-0 kubenswrapper[4169]: E0219 03:03:52.864431 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:03:53.364415496 +0000 UTC m=+57.110607231 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:03:52.882346 master-0 kubenswrapper[4169]: I0219 03:03:52.882245 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46zzd\" (UniqueName: \"kubernetes.io/projected/6ae2cbe0-aa0a-4f26-994b-660fb962d995-kube-api-access-46zzd\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:53.367166 master-0 kubenswrapper[4169]: I0219 03:03:53.367126 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:53.367411 master-0 kubenswrapper[4169]: E0219 03:03:53.367270 4169 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:03:53.367411 master-0 kubenswrapper[4169]: E0219 03:03:53.367323 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:03:54.367307481 +0000 UTC m=+58.113499216 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:03:54.226011 master-0 kubenswrapper[4169]: I0219 03:03:54.225964 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:54.226620 master-0 kubenswrapper[4169]: E0219 03:03:54.226092 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:03:54.375085 master-0 kubenswrapper[4169]: I0219 03:03:54.375025 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:54.375281 master-0 kubenswrapper[4169]: E0219 03:03:54.375225 4169 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:03:54.375351 master-0 kubenswrapper[4169]: E0219 03:03:54.375334 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:03:56.375314488 +0000 UTC m=+60.121506223 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:03:55.425496 master-0 kubenswrapper[4169]: I0219 03:03:55.425424 4169 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="d7038f953677e8d7419f5a2fddb13ce55d744e0baf108c01044bd406543eeae9" exitCode=0 Feb 19 03:03:55.426712 master-0 kubenswrapper[4169]: I0219 03:03:55.425528 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bs5qd" event={"ID":"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7","Type":"ContainerDied","Data":"d7038f953677e8d7419f5a2fddb13ce55d744e0baf108c01044bd406543eeae9"} Feb 19 03:03:56.226943 master-0 kubenswrapper[4169]: I0219 03:03:56.226888 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:56.227178 master-0 kubenswrapper[4169]: E0219 03:03:56.227033 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:03:56.390988 master-0 kubenswrapper[4169]: I0219 03:03:56.390881 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:56.391214 master-0 kubenswrapper[4169]: E0219 03:03:56.391059 4169 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:03:56.391214 master-0 kubenswrapper[4169]: E0219 03:03:56.391161 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:00.391133379 +0000 UTC m=+64.137325154 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:03:58.227057 master-0 kubenswrapper[4169]: I0219 03:03:58.226866 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:03:58.227057 master-0 kubenswrapper[4169]: E0219 03:03:58.226995 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:00.227006 master-0 kubenswrapper[4169]: I0219 03:04:00.226818 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:00.227006 master-0 kubenswrapper[4169]: E0219 03:04:00.226950 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:00.422228 master-0 kubenswrapper[4169]: I0219 03:04:00.422172 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:00.422616 master-0 kubenswrapper[4169]: E0219 03:04:00.422587 4169 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:04:00.422684 master-0 kubenswrapper[4169]: E0219 03:04:00.422671 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:08.422648735 +0000 UTC m=+72.168840510 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:04:02.226401 master-0 kubenswrapper[4169]: I0219 03:04:02.226355 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:02.227085 master-0 kubenswrapper[4169]: E0219 03:04:02.226486 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:04.199353 master-0 kubenswrapper[4169]: I0219 03:04:04.199296 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h"] Feb 19 03:04:04.200026 master-0 kubenswrapper[4169]: I0219 03:04:04.199626 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.201684 master-0 kubenswrapper[4169]: I0219 03:04:04.201653 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 03:04:04.222283 master-0 kubenswrapper[4169]: I0219 03:04:04.205239 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 03:04:04.222283 master-0 kubenswrapper[4169]: I0219 03:04:04.205449 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 03:04:04.222283 master-0 kubenswrapper[4169]: I0219 03:04:04.205724 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 03:04:04.228585 master-0 kubenswrapper[4169]: I0219 03:04:04.226585 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 03:04:04.231475 master-0 kubenswrapper[4169]: I0219 03:04:04.231429 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:04.231635 master-0 kubenswrapper[4169]: E0219 03:04:04.231593 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:04.349792 master-0 kubenswrapper[4169]: I0219 03:04:04.349748 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.349792 master-0 kubenswrapper[4169]: I0219 03:04:04.349798 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.349985 master-0 kubenswrapper[4169]: I0219 03:04:04.349823 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crz8x\" (UniqueName: \"kubernetes.io/projected/15a571c6-7c47-4b57-bc5b-e46544a114c8-kube-api-access-crz8x\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.349985 master-0 kubenswrapper[4169]: I0219 03:04:04.349842 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.409307 master-0 kubenswrapper[4169]: I0219 03:04:04.409187 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ncfjn"] Feb 19 03:04:04.410022 master-0 kubenswrapper[4169]: I0219 03:04:04.409991 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.412127 master-0 kubenswrapper[4169]: I0219 03:04:04.411887 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 03:04:04.412810 master-0 kubenswrapper[4169]: I0219 03:04:04.412718 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 03:04:04.450147 master-0 kubenswrapper[4169]: I0219 03:04:04.450052 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crz8x\" (UniqueName: \"kubernetes.io/projected/15a571c6-7c47-4b57-bc5b-e46544a114c8-kube-api-access-crz8x\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.450147 master-0 kubenswrapper[4169]: I0219 03:04:04.450091 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.450147 master-0 kubenswrapper[4169]: I0219 03:04:04.450115 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.450147 master-0 kubenswrapper[4169]: I0219 03:04:04.450139 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.451635 master-0 kubenswrapper[4169]: I0219 03:04:04.450646 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.451635 master-0 kubenswrapper[4169]: I0219 03:04:04.451570 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.454195 master-0 kubenswrapper[4169]: I0219 03:04:04.454150 4169 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.483151 master-0 kubenswrapper[4169]: I0219 03:04:04.483111 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crz8x\" (UniqueName: \"kubernetes.io/projected/15a571c6-7c47-4b57-bc5b-e46544a114c8-kube-api-access-crz8x\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.550857 master-0 kubenswrapper[4169]: I0219 03:04:04.550778 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-systemd\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551053 master-0 kubenswrapper[4169]: I0219 03:04:04.550871 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-log-socket\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551053 master-0 kubenswrapper[4169]: I0219 03:04:04.550952 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-systemd-units\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551053 master-0 kubenswrapper[4169]: I0219 03:04:04.550976 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-etc-openvswitch\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551053 master-0 kubenswrapper[4169]: I0219 03:04:04.551047 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-config\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551244 master-0 kubenswrapper[4169]: I0219 03:04:04.551137 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovn-node-metrics-cert\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551244 master-0 kubenswrapper[4169]: I0219 03:04:04.551177 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-var-lib-openvswitch\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551244 master-0 kubenswrapper[4169]: I0219 03:04:04.551195 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-ovn\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551244 master-0 kubenswrapper[4169]: I0219 03:04:04.551213 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-openvswitch\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551822 master-0 kubenswrapper[4169]: I0219 03:04:04.551386 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-kubelet\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551822 master-0 kubenswrapper[4169]: I0219 03:04:04.551535 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-netns\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551822 master-0 kubenswrapper[4169]: I0219 03:04:04.551588 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-netd\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551822 master-0 kubenswrapper[4169]: I0219 03:04:04.551667 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-slash\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551822 master-0 kubenswrapper[4169]: I0219 03:04:04.551718 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.551822 master-0 kubenswrapper[4169]: I0219 03:04:04.551766 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-script-lib\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 
03:04:04.552022 master-0 kubenswrapper[4169]: I0219 03:04:04.551829 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-node-log\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.552022 master-0 kubenswrapper[4169]: I0219 03:04:04.551915 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-env-overrides\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.552022 master-0 kubenswrapper[4169]: I0219 03:04:04.551990 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rkrc\" (UniqueName: \"kubernetes.io/projected/429773fe-5f3f-45d0-a13b-04efaa74ce9a-kube-api-access-8rkrc\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.552124 master-0 kubenswrapper[4169]: I0219 03:04:04.552040 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-bin\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.552124 master-0 kubenswrapper[4169]: I0219 03:04:04.552106 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.552987 master-0 kubenswrapper[4169]: I0219 03:04:04.552594 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:04.653304 master-0 kubenswrapper[4169]: I0219 03:04:04.653104 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-bin\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653304 master-0 kubenswrapper[4169]: I0219 03:04:04.653179 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653304 master-0 kubenswrapper[4169]: I0219 03:04:04.653211 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-systemd\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653304 master-0 kubenswrapper[4169]: I0219 03:04:04.653215 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-bin\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653304 master-0 kubenswrapper[4169]: I0219 03:04:04.653233 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-log-socket\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653606 master-0 kubenswrapper[4169]: I0219 03:04:04.653392 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653606 master-0 kubenswrapper[4169]: I0219 03:04:04.653500 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-log-socket\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653606 master-0 kubenswrapper[4169]: I0219 03:04:04.653539 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-systemd-units\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653606 master-0 kubenswrapper[4169]: I0219 03:04:04.653561 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-systemd\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653606 master-0 kubenswrapper[4169]: I0219 03:04:04.653603 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-etc-openvswitch\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653757 master-0 kubenswrapper[4169]: I0219 03:04:04.653610 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-systemd-units\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653757 master-0 kubenswrapper[4169]: I0219 03:04:04.653627 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-config\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653757 master-0 kubenswrapper[4169]: I0219 03:04:04.653688 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovn-node-metrics-cert\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653757 master-0 kubenswrapper[4169]: I0219 03:04:04.653708 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-etc-openvswitch\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653757 master-0 kubenswrapper[4169]: I0219 03:04:04.653732 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-var-lib-openvswitch\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653951 master-0 kubenswrapper[4169]: I0219 03:04:04.653774 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-openvswitch\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653951 master-0 kubenswrapper[4169]: I0219 03:04:04.653807 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-ovn\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653951 master-0 kubenswrapper[4169]: I0219 03:04:04.653841 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-kubelet\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653951 master-0 kubenswrapper[4169]: I0219 03:04:04.653874 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-netns\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.653951 master-0 kubenswrapper[4169]: I0219 03:04:04.653903 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-netd\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.654089 master-0 kubenswrapper[4169]: I0219 03:04:04.653955 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-slash\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.654089 master-0 kubenswrapper[4169]: I0219 03:04:04.654012 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.654089 master-0 kubenswrapper[4169]: I0219 03:04:04.654047 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-script-lib\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.654089 master-0 kubenswrapper[4169]: I0219 03:04:04.654081 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-node-log\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.654219 master-0 kubenswrapper[4169]: I0219 03:04:04.654113 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-env-overrides\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.654219 master-0 kubenswrapper[4169]: I0219 03:04:04.654151 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rkrc\" (UniqueName: \"kubernetes.io/projected/429773fe-5f3f-45d0-a13b-04efaa74ce9a-kube-api-access-8rkrc\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.654466 master-0 kubenswrapper[4169]: I0219 03:04:04.654353 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-config\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.654466 master-0 kubenswrapper[4169]: I0219 03:04:04.654405 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-netd\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.655396 master-0 kubenswrapper[4169]: I0219 03:04:04.654535 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-node-log\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.655396 master-0 kubenswrapper[4169]: I0219 03:04:04.654616 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-slash\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.655396 master-0 kubenswrapper[4169]: I0219 03:04:04.654657 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-netns\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.655396 master-0 kubenswrapper[4169]: I0219 03:04:04.654704 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-openvswitch\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.655396 master-0 kubenswrapper[4169]: I0219 03:04:04.654731 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-var-lib-openvswitch\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.655396 master-0 kubenswrapper[4169]: I0219 03:04:04.654750 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-kubelet\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.655396 master-0 kubenswrapper[4169]: I0219 03:04:04.654771 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-ovn\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.655396 master-0 kubenswrapper[4169]: I0219 03:04:04.654992 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.655396 master-0 kubenswrapper[4169]: I0219 03:04:04.655361 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-script-lib\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.655735 master-0 kubenswrapper[4169]: I0219 03:04:04.655466 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-env-overrides\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.656752 master-0 kubenswrapper[4169]: I0219 03:04:04.656701 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovn-node-metrics-cert\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:04.893620 master-0 kubenswrapper[4169]: I0219 03:04:04.891395 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rkrc\" (UniqueName: \"kubernetes.io/projected/429773fe-5f3f-45d0-a13b-04efaa74ce9a-kube-api-access-8rkrc\") pod \"ovnkube-node-ncfjn\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:05.020317 master-0 kubenswrapper[4169]: I0219 03:04:05.020244 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:05.965565 master-0 kubenswrapper[4169]: I0219 03:04:05.965455 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:05.966638 master-0 kubenswrapper[4169]: E0219 03:04:05.965675 4169 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:05.966638 master-0 kubenswrapper[4169]: E0219 03:04:05.965746 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:04:37.965723032 +0000 UTC m=+101.711914787 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:06.021655 master-0 kubenswrapper[4169]: W0219 03:04:06.021593 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15a571c6_7c47_4b57_bc5b_e46544a114c8.slice/crio-1bcf44075958c0ed97fdf56576e694d0a80dc968641ca6c609aa09a703fa5b8a WatchSource:0}: Error finding container 1bcf44075958c0ed97fdf56576e694d0a80dc968641ca6c609aa09a703fa5b8a: Status 404 returned error can't find the container with id 1bcf44075958c0ed97fdf56576e694d0a80dc968641ca6c609aa09a703fa5b8a Feb 19 03:04:06.022374 master-0 kubenswrapper[4169]: W0219 03:04:06.022299 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod429773fe_5f3f_45d0_a13b_04efaa74ce9a.slice/crio-e4146cefc32a1cf1a141a5a634ddc772fb63d10e2b446299bbca1aa5f88fa1c7 WatchSource:0}: Error finding container e4146cefc32a1cf1a141a5a634ddc772fb63d10e2b446299bbca1aa5f88fa1c7: Status 404 returned error can't find the container with id e4146cefc32a1cf1a141a5a634ddc772fb63d10e2b446299bbca1aa5f88fa1c7 Feb 19 03:04:06.226900 master-0 kubenswrapper[4169]: I0219 03:04:06.226794 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:06.227037 master-0 kubenswrapper[4169]: E0219 03:04:06.226952 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:06.453376 master-0 kubenswrapper[4169]: I0219 03:04:06.453320 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerStarted","Data":"e4146cefc32a1cf1a141a5a634ddc772fb63d10e2b446299bbca1aa5f88fa1c7"} Feb 19 03:04:06.455721 master-0 kubenswrapper[4169]: I0219 03:04:06.455672 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" event={"ID":"15a571c6-7c47-4b57-bc5b-e46544a114c8","Type":"ContainerStarted","Data":"1d12abedf46d8aff34bd66e07958725024ac8d243ab6a4a7af0e463469e1c0b0"} Feb 19 03:04:06.455721 master-0 kubenswrapper[4169]: I0219 03:04:06.455709 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" event={"ID":"15a571c6-7c47-4b57-bc5b-e46544a114c8","Type":"ContainerStarted","Data":"1bcf44075958c0ed97fdf56576e694d0a80dc968641ca6c609aa09a703fa5b8a"} Feb 19 03:04:06.457600 master-0 kubenswrapper[4169]: I0219 03:04:06.457540 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4lzdj" event={"ID":"7fde19c2-64b1-409c-ad9c-2bb213a1cc74","Type":"ContainerStarted","Data":"f8ca1439c02bba1864c7ec6202495cd04dc1065cfc0db3b5df68736212357172"} Feb 19 03:04:06.461027 master-0 kubenswrapper[4169]: I0219 03:04:06.460845 4169 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="87ced28296b6205caeec80cb40be9541d7f81c97bea9198b50ce4babeda1daa1" exitCode=0 Feb 19 03:04:06.461027 master-0 kubenswrapper[4169]: I0219 03:04:06.460919 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bs5qd" event={"ID":"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7","Type":"ContainerDied","Data":"87ced28296b6205caeec80cb40be9541d7f81c97bea9198b50ce4babeda1daa1"} Feb 19 03:04:06.477625 master-0 kubenswrapper[4169]: I0219 03:04:06.477183 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4lzdj" podStartSLOduration=1.3563642169999999 podStartE2EDuration="15.477156139s" podCreationTimestamp="2026-02-19 03:03:51 +0000 UTC" firstStartedPulling="2026-02-19 03:03:51.979455466 +0000 UTC m=+55.725647391" lastFinishedPulling="2026-02-19 03:04:06.100247578 +0000 UTC m=+69.846439313" observedRunningTime="2026-02-19 03:04:06.47414015 +0000 UTC m=+70.220331885" watchObservedRunningTime="2026-02-19 03:04:06.477156139 +0000 UTC m=+70.223347914" Feb 19 03:04:07.240123 master-0 kubenswrapper[4169]: I0219 03:04:07.240069 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-c6c25"] Feb 19 03:04:07.240846 master-0 kubenswrapper[4169]: I0219 03:04:07.240431 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:07.240846 master-0 kubenswrapper[4169]: E0219 03:04:07.240500 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:07.377407 master-0 kubenswrapper[4169]: I0219 03:04:07.377323 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:07.478201 master-0 kubenswrapper[4169]: I0219 03:04:07.478129 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:07.794204 master-0 kubenswrapper[4169]: E0219 03:04:07.793782 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 03:04:07.794204 master-0 kubenswrapper[4169]: E0219 03:04:07.793824 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 03:04:07.794204 master-0 kubenswrapper[4169]: E0219 03:04:07.793839 4169 projected.go:194] Error preparing data for projected volume kube-api-access-5q4lp for pod openshift-network-diagnostics/network-check-target-c6c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:07.794204 master-0 kubenswrapper[4169]: E0219 03:04:07.793930 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp podName:4fd49d14-d513-4f68-8a87-3cef8a033c58 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:08.293908732 +0000 UTC m=+72.040100467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5q4lp" (UniqueName: "kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp") pod "network-check-target-c6c25" (UID: "4fd49d14-d513-4f68-8a87-3cef8a033c58") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:08.387596 master-0 kubenswrapper[4169]: I0219 03:04:08.386268 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:08.387596 master-0 kubenswrapper[4169]: I0219 03:04:08.386243 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:08.387596 master-0 kubenswrapper[4169]: E0219 03:04:08.386467 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:08.387596 master-0 kubenswrapper[4169]: E0219 03:04:08.386474 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 03:04:08.387596 master-0 kubenswrapper[4169]: E0219 03:04:08.386528 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 03:04:08.387596 master-0 kubenswrapper[4169]: E0219 03:04:08.386542 4169 projected.go:194] Error preparing data for projected volume kube-api-access-5q4lp for pod openshift-network-diagnostics/network-check-target-c6c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:08.387596 master-0 kubenswrapper[4169]: E0219 03:04:08.386586 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp podName:4fd49d14-d513-4f68-8a87-3cef8a033c58 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:09.386568956 +0000 UTC m=+73.132760701 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5q4lp" (UniqueName: "kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp") pod "network-check-target-c6c25" (UID: "4fd49d14-d513-4f68-8a87-3cef8a033c58") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:08.487949 master-0 kubenswrapper[4169]: I0219 03:04:08.487879 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:08.488136 master-0 kubenswrapper[4169]: E0219 03:04:08.488105 4169 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:04:08.488237 master-0 kubenswrapper[4169]: E0219 03:04:08.488203 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:24.488181671 +0000 UTC m=+88.234373406 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:04:09.226417 master-0 kubenswrapper[4169]: I0219 03:04:09.226363 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:09.226556 master-0 kubenswrapper[4169]: E0219 03:04:09.226483 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:09.396444 master-0 kubenswrapper[4169]: I0219 03:04:09.396397 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:09.397076 master-0 kubenswrapper[4169]: E0219 03:04:09.396541 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 03:04:09.397076 master-0 kubenswrapper[4169]: E0219 03:04:09.396557 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 03:04:09.397076 master-0 kubenswrapper[4169]: E0219 03:04:09.396568 4169 projected.go:194] Error preparing data for projected volume kube-api-access-5q4lp for pod openshift-network-diagnostics/network-check-target-c6c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:09.397076 master-0 kubenswrapper[4169]: E0219 03:04:09.396617 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp podName:4fd49d14-d513-4f68-8a87-3cef8a033c58 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:11.396603174 +0000 UTC m=+75.142794909 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5q4lp" (UniqueName: "kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp") pod "network-check-target-c6c25" (UID: "4fd49d14-d513-4f68-8a87-3cef8a033c58") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:09.469756 master-0 kubenswrapper[4169]: I0219 03:04:09.469708 4169 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="d07c6f7253d4f5bf400e52d3abf09e67dc06d685b2053d96aa22769fe9305dd6" exitCode=0 Feb 19 03:04:09.469756 master-0 kubenswrapper[4169]: I0219 03:04:09.469755 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bs5qd" event={"ID":"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7","Type":"ContainerDied","Data":"d07c6f7253d4f5bf400e52d3abf09e67dc06d685b2053d96aa22769fe9305dd6"} Feb 19 03:04:09.939180 master-0 kubenswrapper[4169]: I0219 03:04:09.939135 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-rm5jg"] Feb 19 03:04:09.940111 master-0 kubenswrapper[4169]: I0219 03:04:09.940087 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:09.942514 master-0 kubenswrapper[4169]: I0219 03:04:09.942486 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 03:04:09.942753 master-0 kubenswrapper[4169]: I0219 03:04:09.942716 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 03:04:09.942813 master-0 kubenswrapper[4169]: I0219 03:04:09.942786 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 03:04:09.942853 master-0 kubenswrapper[4169]: I0219 03:04:09.942723 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 03:04:09.943687 master-0 kubenswrapper[4169]: I0219 03:04:09.943660 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 03:04:10.103325 master-0 kubenswrapper[4169]: I0219 03:04:10.103246 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-ovnkube-identity-cm\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:10.103325 master-0 kubenswrapper[4169]: I0219 03:04:10.103316 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:10.103559 master-0 kubenswrapper[4169]: I0219 03:04:10.103374 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-env-overrides\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:10.103559 master-0 kubenswrapper[4169]: I0219 03:04:10.103390 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv24m\" (UniqueName: \"kubernetes.io/projected/a52be87c-e707-4269-96da-537708d52b64-kube-api-access-kv24m\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:10.257624 master-0 kubenswrapper[4169]: I0219 03:04:10.257489 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv24m\" (UniqueName: \"kubernetes.io/projected/a52be87c-e707-4269-96da-537708d52b64-kube-api-access-kv24m\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:10.257624 master-0 kubenswrapper[4169]: I0219 03:04:10.257522 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:10.257624 master-0 kubenswrapper[4169]: I0219 03:04:10.257547 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-env-overrides\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:10.257960 master-0 kubenswrapper[4169]: E0219 03:04:10.257632 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:10.257960 master-0 kubenswrapper[4169]: I0219 03:04:10.257824 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-ovnkube-identity-cm\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:10.258123 master-0 kubenswrapper[4169]: I0219 03:04:10.257906 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:10.258200 master-0 kubenswrapper[4169]: E0219 03:04:10.258121 4169 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Feb 19 03:04:10.258200 master-0 kubenswrapper[4169]: E0219 03:04:10.258184 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert podName:a52be87c-e707-4269-96da-537708d52b64 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:10.758167446 +0000 UTC m=+74.504359181 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert") pod "network-node-identity-rm5jg" (UID: "a52be87c-e707-4269-96da-537708d52b64") : secret "network-node-identity-cert" not found Feb 19 03:04:10.258758 master-0 kubenswrapper[4169]: I0219 03:04:10.258709 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-env-overrides\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:10.259648 master-0 kubenswrapper[4169]: I0219 03:04:10.259622 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-ovnkube-identity-cm\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:10.763108 master-0 kubenswrapper[4169]: I0219 03:04:10.763022 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:10.763643 master-0 kubenswrapper[4169]: E0219 03:04:10.763198 4169 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Feb 19 03:04:10.763643 master-0 kubenswrapper[4169]: E0219 03:04:10.763295 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert podName:a52be87c-e707-4269-96da-537708d52b64 
nodeName:}" failed. No retries permitted until 2026-02-19 03:04:11.763276894 +0000 UTC m=+75.509468619 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert") pod "network-node-identity-rm5jg" (UID: "a52be87c-e707-4269-96da-537708d52b64") : secret "network-node-identity-cert" not found Feb 19 03:04:11.006418 master-0 kubenswrapper[4169]: I0219 03:04:11.006190 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv24m\" (UniqueName: \"kubernetes.io/projected/a52be87c-e707-4269-96da-537708d52b64-kube-api-access-kv24m\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:11.227013 master-0 kubenswrapper[4169]: I0219 03:04:11.226955 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:11.227226 master-0 kubenswrapper[4169]: E0219 03:04:11.227100 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:11.485734 master-0 kubenswrapper[4169]: I0219 03:04:11.485609 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:11.485894 master-0 kubenswrapper[4169]: E0219 03:04:11.485773 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 03:04:11.485894 master-0 kubenswrapper[4169]: E0219 03:04:11.485793 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 03:04:11.485894 master-0 kubenswrapper[4169]: E0219 03:04:11.485805 4169 projected.go:194] Error preparing data for projected volume kube-api-access-5q4lp for pod openshift-network-diagnostics/network-check-target-c6c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:11.485894 master-0 kubenswrapper[4169]: E0219 03:04:11.485862 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp podName:4fd49d14-d513-4f68-8a87-3cef8a033c58 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:15.485848274 +0000 UTC m=+79.232040009 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5q4lp" (UniqueName: "kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp") pod "network-check-target-c6c25" (UID: "4fd49d14-d513-4f68-8a87-3cef8a033c58") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:11.787036 master-0 kubenswrapper[4169]: I0219 03:04:11.786914 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:11.790921 master-0 kubenswrapper[4169]: I0219 03:04:11.790884 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:12.053692 master-0 kubenswrapper[4169]: I0219 03:04:12.053584 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:12.226298 master-0 kubenswrapper[4169]: I0219 03:04:12.226222 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:12.226494 master-0 kubenswrapper[4169]: E0219 03:04:12.226401 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:12.414830 master-0 kubenswrapper[4169]: W0219 03:04:12.414775 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda52be87c_e707_4269_96da_537708d52b64.slice/crio-b1a4a1b2ee116e9b33918fc922709316e70b8330853b6fcb741a4accb5e6b8be WatchSource:0}: Error finding container b1a4a1b2ee116e9b33918fc922709316e70b8330853b6fcb741a4accb5e6b8be: Status 404 returned error can't find the container with id b1a4a1b2ee116e9b33918fc922709316e70b8330853b6fcb741a4accb5e6b8be Feb 19 03:04:12.477524 master-0 kubenswrapper[4169]: I0219 03:04:12.477463 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rm5jg" event={"ID":"a52be87c-e707-4269-96da-537708d52b64","Type":"ContainerStarted","Data":"b1a4a1b2ee116e9b33918fc922709316e70b8330853b6fcb741a4accb5e6b8be"} Feb 19 03:04:13.226536 master-0 kubenswrapper[4169]: I0219 03:04:13.226487 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:13.226999 master-0 kubenswrapper[4169]: E0219 03:04:13.226636 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:14.227060 master-0 kubenswrapper[4169]: I0219 03:04:14.226999 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:14.227544 master-0 kubenswrapper[4169]: E0219 03:04:14.227157 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:15.227010 master-0 kubenswrapper[4169]: I0219 03:04:15.226929 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:15.227290 master-0 kubenswrapper[4169]: E0219 03:04:15.227126 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:15.517954 master-0 kubenswrapper[4169]: I0219 03:04:15.517808 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:15.518131 master-0 kubenswrapper[4169]: E0219 03:04:15.517988 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 03:04:15.518131 master-0 kubenswrapper[4169]: E0219 03:04:15.518012 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 03:04:15.518131 master-0 kubenswrapper[4169]: E0219 03:04:15.518022 4169 projected.go:194] Error preparing data for projected volume kube-api-access-5q4lp for pod openshift-network-diagnostics/network-check-target-c6c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:15.518131 master-0 kubenswrapper[4169]: E0219 03:04:15.518070 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp podName:4fd49d14-d513-4f68-8a87-3cef8a033c58 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:04:23.518056343 +0000 UTC m=+87.264248078 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5q4lp" (UniqueName: "kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp") pod "network-check-target-c6c25" (UID: "4fd49d14-d513-4f68-8a87-3cef8a033c58") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:16.226237 master-0 kubenswrapper[4169]: I0219 03:04:16.226189 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:16.226513 master-0 kubenswrapper[4169]: E0219 03:04:16.226396 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:16.490523 master-0 kubenswrapper[4169]: I0219 03:04:16.490422 4169 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="a1adfe00d9aa195d9236868bc3cdaa7708f6f91c8e97bcc9dc23bf44a824c667" exitCode=0 Feb 19 03:04:16.490523 master-0 kubenswrapper[4169]: I0219 03:04:16.490472 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bs5qd" event={"ID":"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7","Type":"ContainerDied","Data":"a1adfe00d9aa195d9236868bc3cdaa7708f6f91c8e97bcc9dc23bf44a824c667"} Feb 19 03:04:17.226584 master-0 kubenswrapper[4169]: I0219 03:04:17.226518 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:17.227434 master-0 kubenswrapper[4169]: E0219 03:04:17.227338 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:18.226461 master-0 kubenswrapper[4169]: I0219 03:04:18.226361 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:18.227630 master-0 kubenswrapper[4169]: E0219 03:04:18.226501 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:19.226658 master-0 kubenswrapper[4169]: I0219 03:04:19.226213 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:19.226658 master-0 kubenswrapper[4169]: E0219 03:04:19.226348 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:20.226730 master-0 kubenswrapper[4169]: I0219 03:04:20.226681 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:20.227286 master-0 kubenswrapper[4169]: E0219 03:04:20.226856 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:21.226893 master-0 kubenswrapper[4169]: I0219 03:04:21.226808 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:21.227544 master-0 kubenswrapper[4169]: E0219 03:04:21.226939 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:22.226970 master-0 kubenswrapper[4169]: I0219 03:04:22.226434 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:22.226970 master-0 kubenswrapper[4169]: E0219 03:04:22.226598 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:23.226744 master-0 kubenswrapper[4169]: I0219 03:04:23.226682 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:23.226924 master-0 kubenswrapper[4169]: E0219 03:04:23.226821 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:23.583929 master-0 kubenswrapper[4169]: I0219 03:04:23.583810 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:23.584515 master-0 kubenswrapper[4169]: E0219 03:04:23.584025 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 03:04:23.584515 master-0 kubenswrapper[4169]: E0219 03:04:23.584064 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 03:04:23.584515 master-0 kubenswrapper[4169]: E0219 03:04:23.584080 4169 projected.go:194] Error preparing data for projected volume kube-api-access-5q4lp for pod openshift-network-diagnostics/network-check-target-c6c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:23.584515 master-0 kubenswrapper[4169]: E0219 03:04:23.584138 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp podName:4fd49d14-d513-4f68-8a87-3cef8a033c58 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:39.584121983 +0000 UTC m=+103.330313718 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-5q4lp" (UniqueName: "kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp") pod "network-check-target-c6c25" (UID: "4fd49d14-d513-4f68-8a87-3cef8a033c58") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:24.226919 master-0 kubenswrapper[4169]: I0219 03:04:24.226861 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:24.227187 master-0 kubenswrapper[4169]: E0219 03:04:24.227061 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:24.257308 master-0 kubenswrapper[4169]: I0219 03:04:24.257234 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 19 03:04:24.491441 master-0 kubenswrapper[4169]: I0219 03:04:24.491304 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:24.491682 master-0 kubenswrapper[4169]: E0219 03:04:24.491500 4169 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:04:24.491682 master-0 kubenswrapper[4169]: E0219 03:04:24.491601 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.491571279 +0000 UTC m=+120.237763054 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 03:04:25.227562 master-0 kubenswrapper[4169]: I0219 03:04:25.226589 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:25.227562 master-0 kubenswrapper[4169]: E0219 03:04:25.226803 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:25.519186 master-0 kubenswrapper[4169]: I0219 03:04:25.519086 4169 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="a18d99c878639b9d3805f870752927c3437cf7b6b29a033142fd63915d0b18e8" exitCode=0 Feb 19 03:04:25.519186 master-0 kubenswrapper[4169]: I0219 03:04:25.519175 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bs5qd" event={"ID":"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7","Type":"ContainerDied","Data":"a18d99c878639b9d3805f870752927c3437cf7b6b29a033142fd63915d0b18e8"} Feb 19 03:04:25.520946 master-0 kubenswrapper[4169]: I0219 03:04:25.520686 4169 generic.go:334] "Generic (PLEG): container finished" podID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerID="d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880" exitCode=0 Feb 19 03:04:25.520946 master-0 kubenswrapper[4169]: I0219 03:04:25.520730 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerDied","Data":"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880"} Feb 19 03:04:25.524438 master-0 kubenswrapper[4169]: I0219 03:04:25.524391 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" event={"ID":"15a571c6-7c47-4b57-bc5b-e46544a114c8","Type":"ContainerStarted","Data":"0f3766857d0863e0c7bf5650275239873c534f3ae3d01d3445961163b616988a"} Feb 19 03:04:25.573868 master-0 kubenswrapper[4169]: I0219 03:04:25.573762 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" podStartSLOduration=3.257984971 podStartE2EDuration="21.573726361s" podCreationTimestamp="2026-02-19 03:04:04 +0000 UTC" firstStartedPulling="2026-02-19 03:04:06.186914586 +0000 UTC m=+69.933106341" lastFinishedPulling="2026-02-19 03:04:24.502655996 +0000 UTC m=+88.248847731" observedRunningTime="2026-02-19 03:04:25.572779916 +0000 UTC m=+89.318971651" watchObservedRunningTime="2026-02-19 03:04:25.573726361 +0000 UTC m=+89.319918116" Feb 19 03:04:25.574208 master-0 kubenswrapper[4169]: I0219 03:04:25.574127 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=1.5740852410000001 podStartE2EDuration="1.574085241s" podCreationTimestamp="2026-02-19 03:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:04:25.555912554 +0000 UTC m=+89.302104369" watchObservedRunningTime="2026-02-19 03:04:25.574085241 +0000 UTC m=+89.320277026" Feb 19 03:04:26.227235 master-0 kubenswrapper[4169]: I0219 03:04:26.226583 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:26.227235 master-0 kubenswrapper[4169]: E0219 03:04:26.226760 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:26.530773 master-0 kubenswrapper[4169]: I0219 03:04:26.530724 4169 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="df79d74c2fc5980bfc6e9850c3ffca3b314448c7df3cef006d2546392b263b4e" exitCode=0 Feb 19 03:04:26.531535 master-0 kubenswrapper[4169]: I0219 03:04:26.530804 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bs5qd" event={"ID":"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7","Type":"ContainerDied","Data":"df79d74c2fc5980bfc6e9850c3ffca3b314448c7df3cef006d2546392b263b4e"} Feb 19 03:04:26.534842 master-0 kubenswrapper[4169]: I0219 03:04:26.534772 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerStarted","Data":"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db"} Feb 19 03:04:26.534842 master-0 kubenswrapper[4169]: I0219 03:04:26.534802 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerStarted","Data":"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551"} Feb 19 03:04:26.534842 master-0 kubenswrapper[4169]: I0219 03:04:26.534815 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerStarted","Data":"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a"} Feb 19 03:04:26.534842 master-0 kubenswrapper[4169]: I0219 03:04:26.534826 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerStarted","Data":"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98"} Feb 19 03:04:26.534842 master-0 kubenswrapper[4169]: I0219 03:04:26.534841 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerStarted","Data":"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688"} Feb 19 03:04:26.535042 master-0 kubenswrapper[4169]: I0219 03:04:26.534852 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerStarted","Data":"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b"} Feb 19 03:04:27.226201 master-0 kubenswrapper[4169]: I0219 03:04:27.226116 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:27.227139 master-0 kubenswrapper[4169]: E0219 03:04:27.227089 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:27.539726 master-0 kubenswrapper[4169]: I0219 03:04:27.539580 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rm5jg" event={"ID":"a52be87c-e707-4269-96da-537708d52b64","Type":"ContainerStarted","Data":"f6706a38252937f6734b664a0f078763a45b428cf03e52f78ca141868385452d"} Feb 19 03:04:27.539726 master-0 kubenswrapper[4169]: I0219 03:04:27.539665 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rm5jg" event={"ID":"a52be87c-e707-4269-96da-537708d52b64","Type":"ContainerStarted","Data":"110ad15e5221ce48ace3075a78e0e079f95d63430bd06e9e3cc32be3b1b49b73"} Feb 19 03:04:27.545213 master-0 kubenswrapper[4169]: I0219 03:04:27.545157 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bs5qd" event={"ID":"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7","Type":"ContainerStarted","Data":"f4a568c68d0cda74ea56c69df14645b2ea50c53f8b0e83ed2a947f238d5f1b0a"} Feb 19 03:04:27.575451 master-0 kubenswrapper[4169]: I0219 03:04:27.575342 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-rm5jg" podStartSLOduration=4.411740875 podStartE2EDuration="18.575325572s" podCreationTimestamp="2026-02-19 03:04:09 +0000 UTC" firstStartedPulling="2026-02-19 03:04:12.416849283 +0000 UTC m=+76.163041018" lastFinishedPulling="2026-02-19 03:04:26.58043398 +0000 UTC m=+90.326625715" observedRunningTime="2026-02-19 03:04:27.55213481 +0000 UTC m=+91.298326625" watchObservedRunningTime="2026-02-19 03:04:27.575325572 +0000 UTC m=+91.321517307" Feb 19 03:04:28.226781 master-0 kubenswrapper[4169]: I0219 03:04:28.226677 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:28.227031 master-0 kubenswrapper[4169]: E0219 03:04:28.226893 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:28.553482 master-0 kubenswrapper[4169]: I0219 03:04:28.553349 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerStarted","Data":"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d"} Feb 19 03:04:29.226981 master-0 kubenswrapper[4169]: I0219 03:04:29.226918 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:29.227213 master-0 kubenswrapper[4169]: E0219 03:04:29.227108 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:30.226676 master-0 kubenswrapper[4169]: I0219 03:04:30.226589 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:30.228879 master-0 kubenswrapper[4169]: E0219 03:04:30.226842 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:30.567558 master-0 kubenswrapper[4169]: I0219 03:04:30.566914 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerStarted","Data":"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128"} Feb 19 03:04:30.567558 master-0 kubenswrapper[4169]: I0219 03:04:30.567239 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:30.567558 master-0 kubenswrapper[4169]: I0219 03:04:30.567321 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:30.588835 master-0 kubenswrapper[4169]: I0219 03:04:30.588763 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bs5qd" podStartSLOduration=7.357592798 podStartE2EDuration="39.588746328s" podCreationTimestamp="2026-02-19 03:03:51 +0000 UTC" firstStartedPulling="2026-02-19 03:03:52.191643085 +0000 UTC m=+55.937834840" lastFinishedPulling="2026-02-19 03:04:24.422796635 +0000 UTC m=+88.168988370" observedRunningTime="2026-02-19 03:04:27.575717382 +0000 UTC m=+91.321909197" watchObservedRunningTime="2026-02-19 03:04:30.588746328 +0000 UTC m=+94.334938063" Feb 19 03:04:30.589430 master-0 kubenswrapper[4169]: I0219 03:04:30.588360 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:30.613532 master-0 kubenswrapper[4169]: I0219 03:04:30.613436 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podStartSLOduration=8.1652346 podStartE2EDuration="26.613414979s" podCreationTimestamp="2026-02-19 03:04:04 +0000 UTC" firstStartedPulling="2026-02-19 03:04:06.024550265 +0000 UTC m=+69.770742020" lastFinishedPulling="2026-02-19 03:04:24.472730664 +0000 UTC m=+88.218922399" observedRunningTime="2026-02-19 03:04:30.589523809 +0000 UTC m=+94.335715564" watchObservedRunningTime="2026-02-19 03:04:30.613414979 +0000 UTC m=+94.359606724" Feb 19 03:04:31.226203 master-0 kubenswrapper[4169]: I0219 03:04:31.226138 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:31.226457 master-0 kubenswrapper[4169]: E0219 03:04:31.226271 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:31.570059 master-0 kubenswrapper[4169]: I0219 03:04:31.569845 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:31.594058 master-0 kubenswrapper[4169]: I0219 03:04:31.593995 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:32.226597 master-0 kubenswrapper[4169]: I0219 03:04:32.226504 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:32.227018 master-0 kubenswrapper[4169]: E0219 03:04:32.226721 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:32.779471 master-0 kubenswrapper[4169]: I0219 03:04:32.779399 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hspwc"] Feb 19 03:04:32.780035 master-0 kubenswrapper[4169]: I0219 03:04:32.779529 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:32.780035 master-0 kubenswrapper[4169]: E0219 03:04:32.779658 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:32.781482 master-0 kubenswrapper[4169]: I0219 03:04:32.781436 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-c6c25"] Feb 19 03:04:32.781601 master-0 kubenswrapper[4169]: I0219 03:04:32.781567 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:32.781697 master-0 kubenswrapper[4169]: E0219 03:04:32.781662 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:33.238060 master-0 kubenswrapper[4169]: W0219 03:04:33.237962 4169 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 19 03:04:33.238395 master-0 kubenswrapper[4169]: I0219 03:04:33.238183 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 19 03:04:33.349583 master-0 kubenswrapper[4169]: I0219 03:04:33.349492 4169 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ncfjn"] Feb 19 03:04:34.801251 master-0 kubenswrapper[4169]: I0219 03:04:34.801111 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:34.801786 master-0 kubenswrapper[4169]: E0219 03:04:34.801281 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:34.801786 master-0 kubenswrapper[4169]: I0219 03:04:34.801321 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:34.801786 master-0 kubenswrapper[4169]: E0219 03:04:34.801437 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:34.822697 master-0 kubenswrapper[4169]: I0219 03:04:34.815124 4169 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovn-controller" containerID="cri-o://444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b" gracePeriod=30 Feb 19 03:04:34.822697 master-0 kubenswrapper[4169]: I0219 03:04:34.815620 4169 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="sbdb" containerID="cri-o://42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" gracePeriod=30 Feb 19 03:04:34.822697 master-0 kubenswrapper[4169]: I0219 03:04:34.815718 4169 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="nbdb" containerID="cri-o://64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" gracePeriod=30 Feb 19 03:04:34.822697 master-0 kubenswrapper[4169]: I0219 03:04:34.815751 4169 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="northd" containerID="cri-o://aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551" gracePeriod=30 Feb 19 03:04:34.822697 master-0 kubenswrapper[4169]: I0219 03:04:34.815782 4169 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a" gracePeriod=30 Feb 19 03:04:34.822697 master-0 kubenswrapper[4169]: I0219 03:04:34.815811 4169 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="kube-rbac-proxy-node" containerID="cri-o://f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98" gracePeriod=30 Feb 19 03:04:34.822697 master-0 kubenswrapper[4169]: I0219 03:04:34.815855 4169 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovn-acl-logging" containerID="cri-o://6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688" gracePeriod=30 Feb 19 03:04:34.841307 master-0 kubenswrapper[4169]: I0219 03:04:34.839477 4169 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovnkube-controller" containerID="cri-o://ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" gracePeriod=30 Feb 19 03:04:35.022012 master-0 kubenswrapper[4169]: E0219 03:04:35.021924 4169 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db is running failed: container process not found" containerID="64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" 
cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 19 03:04:35.022134 master-0 kubenswrapper[4169]: E0219 03:04:35.022095 4169 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d is running failed: container process not found" containerID="42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 19 03:04:35.022176 master-0 kubenswrapper[4169]: E0219 03:04:35.022148 4169 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128 is running failed: container process not found" containerID="ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 19 03:04:35.022600 master-0 kubenswrapper[4169]: E0219 03:04:35.022561 4169 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d is running failed: container process not found" containerID="42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 19 03:04:35.022761 master-0 kubenswrapper[4169]: E0219 03:04:35.022725 4169 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db is running failed: container process not found" containerID="64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 19 03:04:35.023051 master-0 kubenswrapper[4169]: E0219 03:04:35.023003 4169 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d is running failed: container process not found" containerID="42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 19 03:04:35.023051 master-0 kubenswrapper[4169]: E0219 03:04:35.023035 4169 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="sbdb" Feb 19 03:04:35.023388 master-0 kubenswrapper[4169]: E0219 03:04:35.023342 4169 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db is running failed: container process not found" containerID="64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 19 03:04:35.023469 master-0 kubenswrapper[4169]: E0219 03:04:35.023387 4169 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="nbdb" Feb 19 03:04:35.023565 master-0 kubenswrapper[4169]: E0219 03:04:35.023525 4169 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128 is running failed: container process not found" containerID="ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 19 03:04:35.023901 master-0 kubenswrapper[4169]: E0219 03:04:35.023858 4169 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128 is running failed: container process not found" containerID="ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 19 03:04:35.023958 master-0 kubenswrapper[4169]: E0219 03:04:35.023932 4169 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovnkube-controller" Feb 19 03:04:35.098525 master-0 kubenswrapper[4169]: I0219 03:04:35.098483 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncfjn_429773fe-5f3f-45d0-a13b-04efaa74ce9a/ovnkube-controller/0.log" Feb 19 03:04:35.100275 master-0 kubenswrapper[4169]: I0219 03:04:35.100225 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncfjn_429773fe-5f3f-45d0-a13b-04efaa74ce9a/kube-rbac-proxy-ovn-metrics/0.log" Feb 19 03:04:35.100735 master-0 
kubenswrapper[4169]: I0219 03:04:35.100650 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncfjn_429773fe-5f3f-45d0-a13b-04efaa74ce9a/kube-rbac-proxy-node/0.log" Feb 19 03:04:35.101123 master-0 kubenswrapper[4169]: I0219 03:04:35.101088 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncfjn_429773fe-5f3f-45d0-a13b-04efaa74ce9a/ovn-acl-logging/0.log" Feb 19 03:04:35.101547 master-0 kubenswrapper[4169]: I0219 03:04:35.101515 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncfjn_429773fe-5f3f-45d0-a13b-04efaa74ce9a/ovn-controller/0.log" Feb 19 03:04:35.102024 master-0 kubenswrapper[4169]: I0219 03:04:35.101982 4169 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:35.127173 master-0 kubenswrapper[4169]: I0219 03:04:35.127037 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=2.127009606 podStartE2EDuration="2.127009606s" podCreationTimestamp="2026-02-19 03:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:04:34.834480832 +0000 UTC m=+98.580672557" watchObservedRunningTime="2026-02-19 03:04:35.127009606 +0000 UTC m=+98.873201381" Feb 19 03:04:35.155049 master-0 kubenswrapper[4169]: I0219 03:04:35.154966 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pw7dx"] Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: E0219 03:04:35.155108 4169 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="sbdb" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: I0219 03:04:35.155129 4169 state_mem.go:107] "Deleted CPUSet assignment" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="sbdb" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: E0219 03:04:35.155144 4169 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovnkube-controller" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: I0219 03:04:35.155157 4169 state_mem.go:107] "Deleted CPUSet assignment" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovnkube-controller" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: E0219 03:04:35.155172 4169 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="kubecfg-setup" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: I0219 03:04:35.155185 4169 state_mem.go:107] "Deleted CPUSet assignment" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="kubecfg-setup" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: E0219 03:04:35.155199 4169 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="kube-rbac-proxy-node" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: I0219 03:04:35.155210 4169 state_mem.go:107] "Deleted CPUSet assignment" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="kube-rbac-proxy-node" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: E0219 03:04:35.155225 4169 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="nbdb" Feb 19 
03:04:35.155328 master-0 kubenswrapper[4169]: I0219 03:04:35.155236 4169 state_mem.go:107] "Deleted CPUSet assignment" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="nbdb" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: E0219 03:04:35.155249 4169 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovn-controller" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: I0219 03:04:35.155294 4169 state_mem.go:107] "Deleted CPUSet assignment" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovn-controller" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: E0219 03:04:35.155312 4169 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovn-acl-logging" Feb 19 03:04:35.155328 master-0 kubenswrapper[4169]: I0219 03:04:35.155329 4169 state_mem.go:107] "Deleted CPUSet assignment" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovn-acl-logging" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: E0219 03:04:35.155347 4169 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: I0219 03:04:35.155365 4169 state_mem.go:107] "Deleted CPUSet assignment" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: E0219 03:04:35.155386 4169 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="northd" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: I0219 03:04:35.155403 4169 state_mem.go:107] "Deleted CPUSet assignment" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="northd" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: I0219 03:04:35.155481 4169 memory_manager.go:354] "RemoveStaleState removing state" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: I0219 03:04:35.155498 4169 memory_manager.go:354] "RemoveStaleState removing state" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovn-acl-logging" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: I0219 03:04:35.155511 4169 memory_manager.go:354] "RemoveStaleState removing state" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="sbdb" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: I0219 03:04:35.155522 4169 memory_manager.go:354] "RemoveStaleState removing state" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="kube-rbac-proxy-node" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: I0219 03:04:35.155536 4169 memory_manager.go:354] "RemoveStaleState removing state" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="nbdb" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: I0219 03:04:35.155549 4169 memory_manager.go:354] "RemoveStaleState removing state" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovn-controller" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: I0219 03:04:35.155561 4169 memory_manager.go:354] "RemoveStaleState removing state" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="ovnkube-controller" Feb 19 03:04:35.156104 master-0 kubenswrapper[4169]: I0219 03:04:35.155574 4169 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerName="northd" Feb 19 03:04:35.156909 master-0 kubenswrapper[4169]: I0219 03:04:35.156825 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.205416 master-0 kubenswrapper[4169]: I0219 03:04:35.205337 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-netns\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.205416 master-0 kubenswrapper[4169]: I0219 03:04:35.205391 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-env-overrides\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.205416 master-0 kubenswrapper[4169]: I0219 03:04:35.205411 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rkrc\" (UniqueName: \"kubernetes.io/projected/429773fe-5f3f-45d0-a13b-04efaa74ce9a-kube-api-access-8rkrc\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205471 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205576 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205681 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-openvswitch\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205721 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-var-lib-openvswitch\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205777 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-ovn\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205782 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205802 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205889 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205921 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-systemd-units\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205942 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205967 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.205970 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-config\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.206047 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-etc-openvswitch\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.206075 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.206192 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-script-lib\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.206213 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-systemd\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.206621 master-0 kubenswrapper[4169]: I0219 03:04:35.206247 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-ovn-kubernetes\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206308 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-slash\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206336 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-node-log\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.207613 
master-0 kubenswrapper[4169]: I0219 03:04:35.206341 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206370 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovn-node-metrics-cert\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206381 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-slash" (OuterVolumeSpecName: "host-slash") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206397 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-netd\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206407 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-node-log" (OuterVolumeSpecName: "node-log") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206426 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-bin\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206455 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-log-socket\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206482 4169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-kubelet\") pod \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\" (UID: \"429773fe-5f3f-45d0-a13b-04efaa74ce9a\") " Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206561 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-kubelet\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206706 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-var-lib-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206736 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-node-log\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206768 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206798 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-config\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206823 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-systemd-units\") pod \"ovnkube-node-pw7dx\" (UID: 
\"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.207613 master-0 kubenswrapper[4169]: I0219 03:04:35.206851 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-bin\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.206879 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-script-lib\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.206904 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-etc-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.206929 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-env-overrides\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.206959 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovn-node-metrics-cert\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.206986 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.207021 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-netns\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.205534 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.206838 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.207022 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-log-socket" (OuterVolumeSpecName: "log-socket") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.207043 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.207071 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.207048 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-log-socket\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.207137 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.207139 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-ovn\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.207226 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.208367 master-0 kubenswrapper[4169]: I0219 03:04:35.207311 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-slash\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207369 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-systemd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207398 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-netd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207434 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cm45\" (UniqueName: \"kubernetes.io/projected/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-kube-api-access-8cm45\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207390 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207624 4169 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207660 4169 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207678 4169 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207695 4169 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207711 4169 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-systemd-units\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207727 4169 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207745 4169 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207763 4169 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207781 4169 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-slash\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207796 4169 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-node-log\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207811 4169 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207828 4169 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-log-socket\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207843 4169 
reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-kubelet\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207860 4169 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-run-netns\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207877 4169 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.209092 master-0 kubenswrapper[4169]: I0219 03:04:35.207895 4169 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/429773fe-5f3f-45d0-a13b-04efaa74ce9a-env-overrides\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.210536 master-0 kubenswrapper[4169]: I0219 03:04:35.210474 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429773fe-5f3f-45d0-a13b-04efaa74ce9a-kube-api-access-8rkrc" (OuterVolumeSpecName: "kube-api-access-8rkrc") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "kube-api-access-8rkrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:04:35.211465 master-0 kubenswrapper[4169]: I0219 03:04:35.211419 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:04:35.213724 master-0 kubenswrapper[4169]: I0219 03:04:35.213667 4169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "429773fe-5f3f-45d0-a13b-04efaa74ce9a" (UID: "429773fe-5f3f-45d0-a13b-04efaa74ce9a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:04:35.308637 master-0 kubenswrapper[4169]: I0219 03:04:35.308551 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-netd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.308637 master-0 kubenswrapper[4169]: I0219 03:04:35.308618 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-systemd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309034 master-0 kubenswrapper[4169]: I0219 03:04:35.308642 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cm45\" (UniqueName: \"kubernetes.io/projected/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-kube-api-access-8cm45\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309034 master-0 kubenswrapper[4169]: I0219 03:04:35.308694 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-var-lib-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309034 master-0 kubenswrapper[4169]: I0219 03:04:35.308714 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-node-log\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309034 master-0 kubenswrapper[4169]: I0219 03:04:35.308728 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-netd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309034 master-0 kubenswrapper[4169]: I0219 03:04:35.308778 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-kubelet\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309034 master-0 kubenswrapper[4169]: I0219 03:04:35.308808 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-node-log\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309034 master-0 kubenswrapper[4169]: I0219 03:04:35.308897 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-var-lib-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309034 master-0 kubenswrapper[4169]: I0219 03:04:35.308957 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-systemd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309034 master-0 kubenswrapper[4169]: I0219 03:04:35.308988 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309034 master-0 kubenswrapper[4169]: I0219 03:04:35.309027 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309034 master-0 kubenswrapper[4169]: I0219 03:04:35.309050 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-config\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309080 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-systemd-units\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309090 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-kubelet\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309106 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-script-lib\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309149 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-etc-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309182 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-bin\") pod \"ovnkube-node-pw7dx\" (UID: 
\"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309217 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-env-overrides\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309219 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-etc-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309278 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-bin\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309283 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovn-node-metrics-cert\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309340 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309365 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-netns\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309388 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-log-socket\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309406 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-ovn\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309424 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-var-lib-cni-networks-ovn-kubernetes\") 
pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309455 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-slash\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309484 4169 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-run-systemd\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309497 4169 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/429773fe-5f3f-45d0-a13b-04efaa74ce9a-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.309851 master-0 kubenswrapper[4169]: I0219 03:04:35.309415 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-systemd-units\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.310961 master-0 kubenswrapper[4169]: I0219 03:04:35.309509 4169 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/429773fe-5f3f-45d0-a13b-04efaa74ce9a-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.310961 master-0 kubenswrapper[4169]: I0219 03:04:35.309538 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-slash\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.310961 master-0 kubenswrapper[4169]: I0219 03:04:35.309560 4169 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rkrc\" (UniqueName: \"kubernetes.io/projected/429773fe-5f3f-45d0-a13b-04efaa74ce9a-kube-api-access-8rkrc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:04:35.310961 master-0 kubenswrapper[4169]: I0219 03:04:35.309570 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.310961 master-0 kubenswrapper[4169]: I0219 03:04:35.309570 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-log-socket\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.310961 master-0 kubenswrapper[4169]: I0219 03:04:35.309613 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: 
\"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.310961 master-0 kubenswrapper[4169]: I0219 03:04:35.309639 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-netns\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.310961 master-0 kubenswrapper[4169]: I0219 03:04:35.309593 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-ovn\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.310961 master-0 kubenswrapper[4169]: I0219 03:04:35.310157 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-script-lib\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.310961 master-0 kubenswrapper[4169]: I0219 03:04:35.310217 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-env-overrides\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.310961 master-0 kubenswrapper[4169]: I0219 03:04:35.310783 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-config\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.312367 master-0 kubenswrapper[4169]: I0219 03:04:35.312308 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovn-node-metrics-cert\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.693314 master-0 kubenswrapper[4169]: I0219 03:04:35.691100 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cm45\" (UniqueName: \"kubernetes.io/projected/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-kube-api-access-8cm45\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.782991 master-0 kubenswrapper[4169]: I0219 03:04:35.782929 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:35.796835 master-0 kubenswrapper[4169]: W0219 03:04:35.796777 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda06b88f6_101e_47bf_a6cf_f5fcfa47ad2a.slice/crio-05f5dd54ba8bf6eb7c86554d066ae4a9cf207bcf69ebdccd0c79c526a47c6239 WatchSource:0}: Error finding container 05f5dd54ba8bf6eb7c86554d066ae4a9cf207bcf69ebdccd0c79c526a47c6239: Status 404 returned error can't find the container with id 05f5dd54ba8bf6eb7c86554d066ae4a9cf207bcf69ebdccd0c79c526a47c6239 Feb 19 03:04:35.822749 master-0 kubenswrapper[4169]: I0219 03:04:35.822679 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" event={"ID":"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a","Type":"ContainerStarted","Data":"05f5dd54ba8bf6eb7c86554d066ae4a9cf207bcf69ebdccd0c79c526a47c6239"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.824188 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncfjn_429773fe-5f3f-45d0-a13b-04efaa74ce9a/ovnkube-controller/0.log" Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.826190 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncfjn_429773fe-5f3f-45d0-a13b-04efaa74ce9a/kube-rbac-proxy-ovn-metrics/0.log" Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.826922 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncfjn_429773fe-5f3f-45d0-a13b-04efaa74ce9a/kube-rbac-proxy-node/0.log" Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.827432 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncfjn_429773fe-5f3f-45d0-a13b-04efaa74ce9a/ovn-acl-logging/0.log" Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.828002 4169 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncfjn_429773fe-5f3f-45d0-a13b-04efaa74ce9a/ovn-controller/0.log" Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829114 4169 generic.go:334] "Generic (PLEG): container finished" podID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerID="ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" exitCode=1 Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829145 4169 generic.go:334] "Generic (PLEG): container finished" podID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerID="42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" exitCode=0 Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829159 4169 generic.go:334] "Generic (PLEG): container finished" podID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerID="64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" exitCode=0 Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829174 4169 generic.go:334] "Generic (PLEG): container finished" podID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerID="aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551" exitCode=0 Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829189 4169 generic.go:334] "Generic (PLEG): container finished" podID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerID="678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a" exitCode=143 Feb 19 
03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829205 4169 generic.go:334] "Generic (PLEG): container finished" podID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerID="f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98" exitCode=143 Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829224 4169 generic.go:334] "Generic (PLEG): container finished" podID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerID="6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688" exitCode=143 Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829238 4169 generic.go:334] "Generic (PLEG): container finished" podID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" containerID="444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b" exitCode=143 Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829293 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerDied","Data":"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829382 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerDied","Data":"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829405 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerDied","Data":"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829424 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerDied","Data":"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829445 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerDied","Data":"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829464 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerDied","Data":"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829483 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829637 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829651 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829667 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerDied","Data":"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829684 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829697 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829707 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db"} Feb 19 03:04:35.836231 master-0 kubenswrapper[4169]: I0219 03:04:35.829718 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829729 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829740 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829750 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829762 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829772 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829788 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerDied","Data":"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829804 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829835 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829848 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829860 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829871 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829882 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829893 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829903 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829914 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829929 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" event={"ID":"429773fe-5f3f-45d0-a13b-04efaa74ce9a","Type":"ContainerDied","Data":"e4146cefc32a1cf1a141a5a634ddc772fb63d10e2b446299bbca1aa5f88fa1c7"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829945 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829957 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829968 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829979 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.829990 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.830000 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.830010 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.830023 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.830034 4169 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880"} Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.830056 4169 scope.go:117] "RemoveContainer" containerID="ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" Feb 19 03:04:35.838638 master-0 kubenswrapper[4169]: I0219 03:04:35.830297 4169 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ncfjn" Feb 19 03:04:35.850344 master-0 kubenswrapper[4169]: I0219 03:04:35.850300 4169 scope.go:117] "RemoveContainer" containerID="42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" Feb 19 03:04:35.871197 master-0 kubenswrapper[4169]: I0219 03:04:35.870959 4169 scope.go:117] "RemoveContainer" containerID="64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" Feb 19 03:04:35.882477 master-0 kubenswrapper[4169]: I0219 03:04:35.881668 4169 scope.go:117] "RemoveContainer" containerID="aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551" Feb 19 03:04:35.891433 master-0 kubenswrapper[4169]: I0219 03:04:35.891249 4169 scope.go:117] "RemoveContainer" containerID="678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a" Feb 19 03:04:35.903877 master-0 kubenswrapper[4169]: I0219 03:04:35.903812 4169 scope.go:117] "RemoveContainer" containerID="f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98" Feb 19 03:04:35.917040 master-0 kubenswrapper[4169]: I0219 03:04:35.916984 4169 scope.go:117] "RemoveContainer" containerID="6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688" Feb 19 03:04:35.918895 master-0 kubenswrapper[4169]: I0219 03:04:35.918855 4169 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ncfjn"] Feb 19 03:04:35.925271 master-0 kubenswrapper[4169]: I0219 03:04:35.925208 4169 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ncfjn"] Feb 19 03:04:35.986996 master-0 kubenswrapper[4169]: I0219 03:04:35.986946 4169 scope.go:117] "RemoveContainer" containerID="444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b" Feb 19 03:04:36.000934 master-0 kubenswrapper[4169]: I0219 03:04:36.000900 4169 scope.go:117] "RemoveContainer" containerID="d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880" Feb 19 03:04:36.014431 master-0 kubenswrapper[4169]: I0219 03:04:36.014384 4169 scope.go:117] "RemoveContainer" 
containerID="ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" Feb 19 03:04:36.015026 master-0 kubenswrapper[4169]: E0219 03:04:36.014966 4169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128\": container with ID starting with ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128 not found: ID does not exist" containerID="ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" Feb 19 03:04:36.015135 master-0 kubenswrapper[4169]: I0219 03:04:36.015016 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128"} err="failed to get container status \"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128\": rpc error: code = NotFound desc = could not find container \"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128\": container with ID starting with ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128 not found: ID does not exist" Feb 19 03:04:36.015135 master-0 kubenswrapper[4169]: I0219 03:04:36.015048 4169 scope.go:117] "RemoveContainer" containerID="42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" Feb 19 03:04:36.015567 master-0 kubenswrapper[4169]: E0219 03:04:36.015523 4169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d\": container with ID starting with 42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d not found: ID does not exist" containerID="42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" Feb 19 03:04:36.015567 master-0 kubenswrapper[4169]: I0219 03:04:36.015549 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d"} err="failed to get container status \"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d\": rpc error: code = NotFound desc = could not find container \"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d\": container with ID starting with 42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d not found: ID does not exist" Feb 19 03:04:36.015567 master-0 kubenswrapper[4169]: I0219 03:04:36.015564 4169 scope.go:117] "RemoveContainer" containerID="64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" Feb 19 03:04:36.015920 master-0 kubenswrapper[4169]: E0219 03:04:36.015874 4169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db\": container with ID starting with 64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db not found: ID does not exist" containerID="64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" Feb 19 03:04:36.016003 master-0 kubenswrapper[4169]: I0219 03:04:36.015914 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db"} err="failed to get container status \"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db\": rpc error: code = NotFound desc = could not find container 
\"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db\": container with ID starting with 64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db not found: ID does not exist" Feb 19 03:04:36.016003 master-0 kubenswrapper[4169]: I0219 03:04:36.015942 4169 scope.go:117] "RemoveContainer" containerID="aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551" Feb 19 03:04:36.016466 master-0 kubenswrapper[4169]: E0219 03:04:36.016426 4169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551\": container with ID starting with aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551 not found: ID does not exist" containerID="aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551" Feb 19 03:04:36.016466 master-0 kubenswrapper[4169]: I0219 03:04:36.016450 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551"} err="failed to get container status \"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551\": rpc error: code = NotFound desc = could not find container \"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551\": container with ID starting with aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551 not found: ID does not exist" Feb 19 03:04:36.016466 master-0 kubenswrapper[4169]: I0219 03:04:36.016467 4169 scope.go:117] "RemoveContainer" containerID="678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a" Feb 19 03:04:36.016801 master-0 kubenswrapper[4169]: E0219 03:04:36.016746 4169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a\": container with ID starting with 678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a not found: ID does not exist" containerID="678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a" Feb 19 03:04:36.016879 master-0 kubenswrapper[4169]: I0219 03:04:36.016795 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a"} err="failed to get container status \"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a\": rpc error: code = NotFound desc = could not find container \"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a\": container with ID starting with 678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a not found: ID does not exist" Feb 19 03:04:36.016879 master-0 kubenswrapper[4169]: I0219 03:04:36.016826 4169 scope.go:117] "RemoveContainer" containerID="f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98" Feb 19 03:04:36.017347 master-0 kubenswrapper[4169]: E0219 03:04:36.017303 4169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98\": container with ID starting with f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98 not found: ID does not exist" containerID="f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98" Feb 19 03:04:36.017347 master-0 kubenswrapper[4169]: I0219 03:04:36.017335 4169 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98"} err="failed to get container status \"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98\": rpc error: code = NotFound desc = could not find container \"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98\": container with ID starting with f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98 not found: ID does not exist" Feb 19 03:04:36.017524 master-0 kubenswrapper[4169]: I0219 03:04:36.017357 4169 scope.go:117] "RemoveContainer" containerID="6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688" Feb 19 03:04:36.017721 master-0 kubenswrapper[4169]: E0219 03:04:36.017678 4169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688\": container with ID starting with 6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688 not found: ID does not exist" containerID="6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688" Feb 19 03:04:36.017721 master-0 kubenswrapper[4169]: I0219 03:04:36.017708 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688"} err="failed to get container status \"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688\": rpc error: code = NotFound desc = could not find container \"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688\": container with ID starting with 6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688 not found: ID does not exist" Feb 19 03:04:36.017861 master-0 kubenswrapper[4169]: I0219 03:04:36.017728 4169 scope.go:117] "RemoveContainer" containerID="444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b" Feb 19 03:04:36.018045 master-0 kubenswrapper[4169]: E0219 03:04:36.018001 4169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b\": container with ID starting with 444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b not found: ID does not exist" containerID="444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b" Feb 19 03:04:36.018045 master-0 kubenswrapper[4169]: I0219 03:04:36.018032 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b"} err="failed to get container status \"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b\": rpc error: code = NotFound desc = could not find container \"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b\": container with ID starting with 444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b not found: ID does not exist" Feb 19 03:04:36.018181 master-0 kubenswrapper[4169]: I0219 03:04:36.018050 4169 scope.go:117] "RemoveContainer" containerID="d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880" Feb 19 03:04:36.018388 master-0 kubenswrapper[4169]: E0219 03:04:36.018333 4169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880\": container with ID starting with 
d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880 not found: ID does not exist" containerID="d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880" Feb 19 03:04:36.018465 master-0 kubenswrapper[4169]: I0219 03:04:36.018381 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880"} err="failed to get container status \"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880\": rpc error: code = NotFound desc = could not find container \"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880\": container with ID starting with d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880 not found: ID does not exist" Feb 19 03:04:36.018465 master-0 kubenswrapper[4169]: I0219 03:04:36.018411 4169 scope.go:117] "RemoveContainer" containerID="ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" Feb 19 03:04:36.018755 master-0 kubenswrapper[4169]: I0219 03:04:36.018703 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128"} err="failed to get container status \"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128\": rpc error: code = NotFound desc = could not find container \"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128\": container with ID starting with ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128 not found: ID does not exist" Feb 19 03:04:36.018755 master-0 kubenswrapper[4169]: I0219 03:04:36.018733 4169 scope.go:117] "RemoveContainer" containerID="42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" Feb 19 03:04:36.019029 master-0 kubenswrapper[4169]: I0219 03:04:36.018995 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d"} err="failed to get container status \"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d\": rpc error: code = NotFound desc = could not find container \"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d\": container with ID starting with 42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d not found: ID does not exist" Feb 19 03:04:36.019029 master-0 kubenswrapper[4169]: I0219 03:04:36.019019 4169 scope.go:117] "RemoveContainer" containerID="64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" Feb 19 03:04:36.019378 master-0 kubenswrapper[4169]: I0219 03:04:36.019339 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db"} err="failed to get container status \"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db\": rpc error: code = NotFound desc = could not find container \"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db\": container with ID starting with 64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db not found: ID does not exist" Feb 19 03:04:36.019378 master-0 kubenswrapper[4169]: I0219 03:04:36.019373 4169 scope.go:117] "RemoveContainer" containerID="aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551" Feb 19 03:04:36.019612 master-0 kubenswrapper[4169]: I0219 03:04:36.019582 4169 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551"} err="failed to get container status \"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551\": rpc error: code = NotFound desc = could not find container \"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551\": container with ID starting with aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551 not found: ID does not exist" Feb 19 03:04:36.019612 master-0 kubenswrapper[4169]: I0219 03:04:36.019601 4169 scope.go:117] "RemoveContainer" containerID="678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a" Feb 19 03:04:36.020023 master-0 kubenswrapper[4169]: I0219 03:04:36.019979 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a"} err="failed to get container status \"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a\": rpc error: code = NotFound desc = could not find container \"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a\": container with ID starting with 678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a not found: ID does not exist" Feb 19 03:04:36.020023 master-0 kubenswrapper[4169]: I0219 03:04:36.019999 4169 scope.go:117] "RemoveContainer" containerID="f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98" Feb 19 03:04:36.020301 master-0 kubenswrapper[4169]: I0219 03:04:36.020272 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98"} err="failed to get container status \"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98\": rpc error: code = NotFound desc = could not find container \"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98\": container with ID starting with f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98 not found: ID does not exist" Feb 19 03:04:36.020301 master-0 kubenswrapper[4169]: I0219 03:04:36.020291 4169 scope.go:117] "RemoveContainer" containerID="6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688" Feb 19 03:04:36.020528 master-0 kubenswrapper[4169]: I0219 03:04:36.020493 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688"} err="failed to get container status \"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688\": rpc error: code = NotFound desc = could not find container \"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688\": container with ID starting with 6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688 not found: ID does not exist" Feb 19 03:04:36.020528 master-0 kubenswrapper[4169]: I0219 03:04:36.020521 4169 scope.go:117] "RemoveContainer" containerID="444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b" Feb 19 03:04:36.020808 master-0 kubenswrapper[4169]: I0219 03:04:36.020763 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b"} err="failed to get container status \"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b\": rpc error: code = NotFound desc = could not find container \"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b\": container with ID starting with 
444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b not found: ID does not exist" Feb 19 03:04:36.020808 master-0 kubenswrapper[4169]: I0219 03:04:36.020804 4169 scope.go:117] "RemoveContainer" containerID="d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880" Feb 19 03:04:36.021162 master-0 kubenswrapper[4169]: I0219 03:04:36.021135 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880"} err="failed to get container status \"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880\": rpc error: code = NotFound desc = could not find container \"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880\": container with ID starting with d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880 not found: ID does not exist" Feb 19 03:04:36.021162 master-0 kubenswrapper[4169]: I0219 03:04:36.021154 4169 scope.go:117] "RemoveContainer" containerID="ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" Feb 19 03:04:36.021511 master-0 kubenswrapper[4169]: I0219 03:04:36.021484 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128"} err="failed to get container status \"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128\": rpc error: code = NotFound desc = could not find container \"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128\": container with ID starting with ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128 not found: ID does not exist" Feb 19 03:04:36.021511 master-0 kubenswrapper[4169]: I0219 03:04:36.021500 4169 scope.go:117] "RemoveContainer" containerID="42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" Feb 19 03:04:36.021799 master-0 kubenswrapper[4169]: I0219 03:04:36.021772 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d"} err="failed to get container status \"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d\": rpc error: code = NotFound desc = could not find container \"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d\": container with ID starting with 42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d not found: ID does not exist" Feb 19 03:04:36.021799 master-0 kubenswrapper[4169]: I0219 03:04:36.021789 4169 scope.go:117] "RemoveContainer" containerID="64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" Feb 19 03:04:36.022144 master-0 kubenswrapper[4169]: I0219 03:04:36.022117 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db"} err="failed to get container status \"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db\": rpc error: code = NotFound desc = could not find container \"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db\": container with ID starting with 64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db not found: ID does not exist" Feb 19 03:04:36.022144 master-0 kubenswrapper[4169]: I0219 03:04:36.022131 4169 scope.go:117] "RemoveContainer" containerID="aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551" Feb 19 03:04:36.022387 master-0 kubenswrapper[4169]: I0219 03:04:36.022349 4169 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551"} err="failed to get container status \"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551\": rpc error: code = NotFound desc = could not find container \"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551\": container with ID starting with aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551 not found: ID does not exist" Feb 19 03:04:36.022387 master-0 kubenswrapper[4169]: I0219 03:04:36.022367 4169 scope.go:117] "RemoveContainer" containerID="678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a" Feb 19 03:04:36.022597 master-0 kubenswrapper[4169]: I0219 03:04:36.022571 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a"} err="failed to get container status \"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a\": rpc error: code = NotFound desc = could not find container \"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a\": container with ID starting with 678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a not found: ID does not exist" Feb 19 03:04:36.022597 master-0 kubenswrapper[4169]: I0219 03:04:36.022587 4169 scope.go:117] "RemoveContainer" containerID="f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98" Feb 19 03:04:36.022903 master-0 kubenswrapper[4169]: I0219 03:04:36.022878 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98"} err="failed to get container status \"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98\": rpc error: code = NotFound desc = could not find container \"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98\": container with ID starting with f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98 not found: ID does not exist" Feb 19 03:04:36.022903 master-0 kubenswrapper[4169]: I0219 03:04:36.022894 4169 scope.go:117] "RemoveContainer" containerID="6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688" Feb 19 03:04:36.023128 master-0 kubenswrapper[4169]: I0219 03:04:36.023102 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688"} err="failed to get container status \"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688\": rpc error: code = NotFound desc = could not find container \"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688\": container with ID starting with 6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688 not found: ID does not exist" Feb 19 03:04:36.023128 master-0 kubenswrapper[4169]: I0219 03:04:36.023120 4169 scope.go:117] "RemoveContainer" containerID="444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b" Feb 19 03:04:36.023462 master-0 kubenswrapper[4169]: I0219 03:04:36.023437 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b"} err="failed to get container status \"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b\": rpc error: code = NotFound desc = could not find container 
\"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b\": container with ID starting with 444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b not found: ID does not exist" Feb 19 03:04:36.023462 master-0 kubenswrapper[4169]: I0219 03:04:36.023454 4169 scope.go:117] "RemoveContainer" containerID="d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880" Feb 19 03:04:36.023678 master-0 kubenswrapper[4169]: I0219 03:04:36.023659 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880"} err="failed to get container status \"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880\": rpc error: code = NotFound desc = could not find container \"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880\": container with ID starting with d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880 not found: ID does not exist" Feb 19 03:04:36.023765 master-0 kubenswrapper[4169]: I0219 03:04:36.023679 4169 scope.go:117] "RemoveContainer" containerID="ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" Feb 19 03:04:36.023982 master-0 kubenswrapper[4169]: I0219 03:04:36.023957 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128"} err="failed to get container status \"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128\": rpc error: code = NotFound desc = could not find container \"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128\": container with ID starting with ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128 not found: ID does not exist" Feb 19 03:04:36.023982 master-0 kubenswrapper[4169]: I0219 03:04:36.023972 4169 scope.go:117] "RemoveContainer" containerID="42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" Feb 19 03:04:36.024200 master-0 kubenswrapper[4169]: I0219 03:04:36.024175 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d"} err="failed to get container status \"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d\": rpc error: code = NotFound desc = could not find container \"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d\": container with ID starting with 42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d not found: ID does not exist" Feb 19 03:04:36.024200 master-0 kubenswrapper[4169]: I0219 03:04:36.024191 4169 scope.go:117] "RemoveContainer" containerID="64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" Feb 19 03:04:36.024679 master-0 kubenswrapper[4169]: I0219 03:04:36.024632 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db"} err="failed to get container status \"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db\": rpc error: code = NotFound desc = could not find container \"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db\": container with ID starting with 64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db not found: ID does not exist" Feb 19 03:04:36.024679 master-0 kubenswrapper[4169]: I0219 03:04:36.024649 4169 scope.go:117] "RemoveContainer" 
containerID="aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551" Feb 19 03:04:36.024932 master-0 kubenswrapper[4169]: I0219 03:04:36.024896 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551"} err="failed to get container status \"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551\": rpc error: code = NotFound desc = could not find container \"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551\": container with ID starting with aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551 not found: ID does not exist" Feb 19 03:04:36.024932 master-0 kubenswrapper[4169]: I0219 03:04:36.024914 4169 scope.go:117] "RemoveContainer" containerID="678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a" Feb 19 03:04:36.025172 master-0 kubenswrapper[4169]: I0219 03:04:36.025137 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a"} err="failed to get container status \"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a\": rpc error: code = NotFound desc = could not find container \"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a\": container with ID starting with 678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a not found: ID does not exist" Feb 19 03:04:36.025172 master-0 kubenswrapper[4169]: I0219 03:04:36.025158 4169 scope.go:117] "RemoveContainer" containerID="f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98" Feb 19 03:04:36.025500 master-0 kubenswrapper[4169]: I0219 03:04:36.025463 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98"} err="failed to get container status \"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98\": rpc error: code = NotFound desc = could not find container \"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98\": container with ID starting with f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98 not found: ID does not exist" Feb 19 03:04:36.025500 master-0 kubenswrapper[4169]: I0219 03:04:36.025480 4169 scope.go:117] "RemoveContainer" containerID="6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688" Feb 19 03:04:36.025699 master-0 kubenswrapper[4169]: I0219 03:04:36.025664 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688"} err="failed to get container status \"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688\": rpc error: code = NotFound desc = could not find container \"6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688\": container with ID starting with 6ca2acef210322dec91f19ef67ff46067d5af7b8698c9f2d020a6f85b23f1688 not found: ID does not exist" Feb 19 03:04:36.025699 master-0 kubenswrapper[4169]: I0219 03:04:36.025685 4169 scope.go:117] "RemoveContainer" containerID="444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b" Feb 19 03:04:36.026018 master-0 kubenswrapper[4169]: I0219 03:04:36.025986 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b"} err="failed to get container status 
\"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b\": rpc error: code = NotFound desc = could not find container \"444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b\": container with ID starting with 444302e811682b855dc368ce37bb019d274f3277a0023bbd8405214dca6fcb5b not found: ID does not exist" Feb 19 03:04:36.026018 master-0 kubenswrapper[4169]: I0219 03:04:36.026004 4169 scope.go:117] "RemoveContainer" containerID="d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880" Feb 19 03:04:36.026333 master-0 kubenswrapper[4169]: I0219 03:04:36.026303 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880"} err="failed to get container status \"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880\": rpc error: code = NotFound desc = could not find container \"d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880\": container with ID starting with d9199401bfef3f455119ef2f45efbb2bdb0358d5b819f495d66fe1a0b6645880 not found: ID does not exist" Feb 19 03:04:36.026333 master-0 kubenswrapper[4169]: I0219 03:04:36.026327 4169 scope.go:117] "RemoveContainer" containerID="ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128" Feb 19 03:04:36.026575 master-0 kubenswrapper[4169]: I0219 03:04:36.026553 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128"} err="failed to get container status \"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128\": rpc error: code = NotFound desc = could not find container \"ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128\": container with ID starting with ed726c91be86df4b5c2380935589ef40e1439cf782575ba88f48c051e32d4128 not found: ID does not exist" Feb 19 03:04:36.026575 master-0 kubenswrapper[4169]: I0219 03:04:36.026570 4169 scope.go:117] "RemoveContainer" containerID="42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d" Feb 19 03:04:36.026860 master-0 kubenswrapper[4169]: I0219 03:04:36.026837 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d"} err="failed to get container status \"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d\": rpc error: code = NotFound desc = could not find container \"42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d\": container with ID starting with 42c70943f60c07d8a3c968bc60755de166f6e1cea5c079983e102dcc1629d88d not found: ID does not exist" Feb 19 03:04:36.026860 master-0 kubenswrapper[4169]: I0219 03:04:36.026855 4169 scope.go:117] "RemoveContainer" containerID="64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db" Feb 19 03:04:36.027120 master-0 kubenswrapper[4169]: I0219 03:04:36.027085 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db"} err="failed to get container status \"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db\": rpc error: code = NotFound desc = could not find container \"64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db\": container with ID starting with 64745f56d85142f8486c5ac126d85780d05b994c8d3dd8ac4b4dcc64109580db not found: ID does not exist" Feb 19 03:04:36.027188 master-0 
kubenswrapper[4169]: I0219 03:04:36.027119 4169 scope.go:117] "RemoveContainer" containerID="aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551" Feb 19 03:04:36.027411 master-0 kubenswrapper[4169]: I0219 03:04:36.027387 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551"} err="failed to get container status \"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551\": rpc error: code = NotFound desc = could not find container \"aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551\": container with ID starting with aaf03bfd74fb220c7d21cea5407679f6c36e2e10530155bf583457d7a2291551 not found: ID does not exist" Feb 19 03:04:36.027411 master-0 kubenswrapper[4169]: I0219 03:04:36.027406 4169 scope.go:117] "RemoveContainer" containerID="678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a" Feb 19 03:04:36.027663 master-0 kubenswrapper[4169]: I0219 03:04:36.027600 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a"} err="failed to get container status \"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a\": rpc error: code = NotFound desc = could not find container \"678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a\": container with ID starting with 678b4d5052efcbec34597751d4e00ee5a4bd7d1f5d8fc8f116205cf8a049899a not found: ID does not exist" Feb 19 03:04:36.027663 master-0 kubenswrapper[4169]: I0219 03:04:36.027619 4169 scope.go:117] "RemoveContainer" containerID="f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98" Feb 19 03:04:36.027901 master-0 kubenswrapper[4169]: I0219 03:04:36.027865 4169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98"} err="failed to get container status \"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98\": rpc error: code = NotFound desc = could not find container \"f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98\": container with ID starting with f616a92015ec7506baff824e2f8587cad3d8b10d5b878eb5d697d47744c19a98 not found: ID does not exist" Feb 19 03:04:36.226229 master-0 kubenswrapper[4169]: I0219 03:04:36.226131 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:36.226229 master-0 kubenswrapper[4169]: I0219 03:04:36.226180 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:36.226594 master-0 kubenswrapper[4169]: E0219 03:04:36.226340 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:36.226594 master-0 kubenswrapper[4169]: E0219 03:04:36.226479 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:36.837440 master-0 kubenswrapper[4169]: I0219 03:04:36.837375 4169 generic.go:334] "Generic (PLEG): container finished" podID="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" containerID="e1fdaebfc69e9354cdd956d93bd8b91f87df452473c04d8a78f864f320d237fa" exitCode=0 Feb 19 03:04:36.838065 master-0 kubenswrapper[4169]: I0219 03:04:36.837467 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" event={"ID":"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a","Type":"ContainerDied","Data":"e1fdaebfc69e9354cdd956d93bd8b91f87df452473c04d8a78f864f320d237fa"} Feb 19 03:04:37.234662 master-0 kubenswrapper[4169]: I0219 03:04:37.234130 4169 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="429773fe-5f3f-45d0-a13b-04efaa74ce9a" path="/var/lib/kubelet/pods/429773fe-5f3f-45d0-a13b-04efaa74ce9a/volumes" Feb 19 03:04:37.846268 master-0 kubenswrapper[4169]: I0219 03:04:37.846217 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" event={"ID":"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a","Type":"ContainerStarted","Data":"6baef81dc02e4a8a50bc2f7e27af2c4a385bcc4cc32f3b51a66a4586e4fde938"} Feb 19 03:04:37.846268 master-0 kubenswrapper[4169]: I0219 03:04:37.846280 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" event={"ID":"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a","Type":"ContainerStarted","Data":"e4d25dce181104e6accda043eb059db977c3e7073c45a698bb869d59b41fe143"} Feb 19 03:04:37.846268 master-0 kubenswrapper[4169]: I0219 03:04:37.846292 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" event={"ID":"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a","Type":"ContainerStarted","Data":"541aafb07f23179c282d0d329ba694b710ebeec716bcc78e638ca41226915f43"} Feb 19 03:04:37.846268 master-0 kubenswrapper[4169]: I0219 03:04:37.846302 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" event={"ID":"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a","Type":"ContainerStarted","Data":"ef5acd874d4b512536c27b573752b21703ed257e1018ac2304fce3ceb48aad30"} Feb 19 03:04:37.847572 master-0 kubenswrapper[4169]: I0219 03:04:37.846312 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" event={"ID":"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a","Type":"ContainerStarted","Data":"f926e1b12e96161f4f2dc7121795de516839b68e3857b2a28f1759051e688aff"} Feb 19 03:04:38.039007 master-0 kubenswrapper[4169]: I0219 03:04:38.038930 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 
03:04:38.039178 master-0 kubenswrapper[4169]: E0219 03:04:38.039057 4169 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:38.039178 master-0 kubenswrapper[4169]: E0219 03:04:38.039105 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:05:42.039090375 +0000 UTC m=+165.785282110 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:38.226631 master-0 kubenswrapper[4169]: I0219 03:04:38.226486 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:38.226631 master-0 kubenswrapper[4169]: I0219 03:04:38.226537 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:38.226961 master-0 kubenswrapper[4169]: E0219 03:04:38.226693 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:38.227529 master-0 kubenswrapper[4169]: E0219 03:04:38.226801 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:38.856345 master-0 kubenswrapper[4169]: I0219 03:04:38.856243 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" event={"ID":"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a","Type":"ContainerStarted","Data":"79bcf9789b309296ba8fb692895598599e6dffbfe919e64cf039cbe2a8aeb832"} Feb 19 03:04:39.652552 master-0 kubenswrapper[4169]: I0219 03:04:39.652482 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:39.652794 master-0 kubenswrapper[4169]: E0219 03:04:39.652668 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 03:04:39.652794 master-0 kubenswrapper[4169]: E0219 03:04:39.652688 4169 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 03:04:39.652794 master-0 kubenswrapper[4169]: E0219 03:04:39.652703 4169 projected.go:194] Error preparing data for projected volume kube-api-access-5q4lp for pod openshift-network-diagnostics/network-check-target-c6c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:39.652794 master-0 kubenswrapper[4169]: E0219 03:04:39.652764 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp podName:4fd49d14-d513-4f68-8a87-3cef8a033c58 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:11.65274701 +0000 UTC m=+135.398938755 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-5q4lp" (UniqueName: "kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp") pod "network-check-target-c6c25" (UID: "4fd49d14-d513-4f68-8a87-3cef8a033c58") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 03:04:40.226714 master-0 kubenswrapper[4169]: I0219 03:04:40.226615 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:40.228211 master-0 kubenswrapper[4169]: I0219 03:04:40.226724 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:40.228211 master-0 kubenswrapper[4169]: E0219 03:04:40.227718 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:40.228211 master-0 kubenswrapper[4169]: E0219 03:04:40.227935 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:40.635510 master-0 kubenswrapper[4169]: I0219 03:04:40.635433 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 19 03:04:40.866980 master-0 kubenswrapper[4169]: I0219 03:04:40.866902 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" event={"ID":"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a","Type":"ContainerStarted","Data":"a502d3974e1a444d1f72f08557fa3821d13e473ccb280e8b01aef5bd88ab1d78"} Feb 19 03:04:41.874451 master-0 kubenswrapper[4169]: I0219 03:04:41.873311 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" event={"ID":"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a","Type":"ContainerStarted","Data":"5339bd702cc119a5f278b62d44c04c336a472a915c07f5d5b32128822ac86b47"} Feb 19 03:04:41.874451 master-0 kubenswrapper[4169]: I0219 03:04:41.874422 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:41.875001 master-0 kubenswrapper[4169]: I0219 03:04:41.874467 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:41.875001 master-0 kubenswrapper[4169]: I0219 03:04:41.874709 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:41.909672 master-0 kubenswrapper[4169]: I0219 03:04:41.909619 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:41.909859 master-0 kubenswrapper[4169]: I0219 03:04:41.909725 4169 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:42.226948 master-0 kubenswrapper[4169]: I0219 03:04:42.226838 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:42.227447 master-0 kubenswrapper[4169]: I0219 03:04:42.226958 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:42.227447 master-0 kubenswrapper[4169]: E0219 03:04:42.227195 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:42.227447 master-0 kubenswrapper[4169]: E0219 03:04:42.226994 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:42.250283 master-0 kubenswrapper[4169]: I0219 03:04:42.245711 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" podStartSLOduration=7.245689089 podStartE2EDuration="7.245689089s" podCreationTimestamp="2026-02-19 03:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:04:42.244790615 +0000 UTC m=+105.990982400" watchObservedRunningTime="2026-02-19 03:04:42.245689089 +0000 UTC m=+105.991880824" Feb 19 03:04:44.100570 master-0 kubenswrapper[4169]: I0219 03:04:44.100474 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=4.100455621 podStartE2EDuration="4.100455621s" podCreationTimestamp="2026-02-19 03:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:04:42.937864382 +0000 UTC m=+106.684056127" watchObservedRunningTime="2026-02-19 03:04:44.100455621 +0000 UTC m=+107.846647356" Feb 19 03:04:44.226775 master-0 kubenswrapper[4169]: I0219 03:04:44.226716 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:44.226966 master-0 kubenswrapper[4169]: I0219 03:04:44.226787 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:44.226966 master-0 kubenswrapper[4169]: E0219 03:04:44.226889 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:44.227131 master-0 kubenswrapper[4169]: E0219 03:04:44.227088 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:46.227154 master-0 kubenswrapper[4169]: I0219 03:04:46.227060 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:46.227154 master-0 kubenswrapper[4169]: I0219 03:04:46.227102 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:46.228562 master-0 kubenswrapper[4169]: E0219 03:04:46.227227 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:46.228562 master-0 kubenswrapper[4169]: E0219 03:04:46.227405 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:47.241540 master-0 kubenswrapper[4169]: I0219 03:04:47.241468 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 19 03:04:48.226798 master-0 kubenswrapper[4169]: I0219 03:04:48.226676 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:48.227358 master-0 kubenswrapper[4169]: I0219 03:04:48.226691 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:48.227358 master-0 kubenswrapper[4169]: E0219 03:04:48.226923 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-c6c25" podUID="4fd49d14-d513-4f68-8a87-3cef8a033c58" Feb 19 03:04:48.227358 master-0 kubenswrapper[4169]: E0219 03:04:48.227005 4169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hspwc" podUID="6ae2cbe0-aa0a-4f26-994b-660fb962d995" Feb 19 03:04:48.842732 master-0 kubenswrapper[4169]: I0219 03:04:48.842211 4169 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Feb 19 03:04:48.843659 master-0 kubenswrapper[4169]: I0219 03:04:48.842749 4169 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 19 03:04:48.884890 master-0 kubenswrapper[4169]: I0219 03:04:48.884815 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-qcd49"] Feb 19 03:04:48.885708 master-0 kubenswrapper[4169]: I0219 03:04:48.885631 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:48.891110 master-0 kubenswrapper[4169]: I0219 03:04:48.890061 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 03:04:48.904713 master-0 kubenswrapper[4169]: I0219 03:04:48.904626 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc"] Feb 19 03:04:48.905923 master-0 kubenswrapper[4169]: I0219 03:04:48.905857 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v"] Feb 19 03:04:48.906318 master-0 kubenswrapper[4169]: I0219 03:04:48.906285 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:48.916298 master-0 kubenswrapper[4169]: I0219 03:04:48.907704 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:48.916940 master-0 kubenswrapper[4169]: I0219 03:04:48.907739 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-xxdh5"] Feb 19 03:04:48.917354 master-0 kubenswrapper[4169]: I0219 03:04:48.917313 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh"] Feb 19 03:04:48.917576 master-0 kubenswrapper[4169]: I0219 03:04:48.917465 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:48.917678 master-0 kubenswrapper[4169]: I0219 03:04:48.917594 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8"] Feb 19 03:04:48.922280 master-0 kubenswrapper[4169]: I0219 03:04:48.920010 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:48.922280 master-0 kubenswrapper[4169]: I0219 03:04:48.920585 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9"] Feb 19 03:04:48.922280 master-0 kubenswrapper[4169]: I0219 03:04:48.921080 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:48.924199 master-0 kubenswrapper[4169]: I0219 03:04:48.923731 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:48.929341 master-0 kubenswrapper[4169]: I0219 03:04:48.927205 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 03:04:48.929341 master-0 kubenswrapper[4169]: I0219 03:04:48.927339 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.929341 master-0 kubenswrapper[4169]: I0219 03:04:48.927519 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 03:04:48.929341 master-0 kubenswrapper[4169]: I0219 03:04:48.927645 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 03:04:48.929341 master-0 kubenswrapper[4169]: I0219 03:04:48.927752 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 03:04:48.929341 master-0 kubenswrapper[4169]: I0219 03:04:48.928055 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 03:04:48.929341 master-0 kubenswrapper[4169]: I0219 03:04:48.928055 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 03:04:48.929341 master-0 kubenswrapper[4169]: I0219 03:04:48.928812 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.929341 master-0 kubenswrapper[4169]: I0219 03:04:48.929078 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8"] Feb 19 03:04:48.929756 master-0 kubenswrapper[4169]: I0219 03:04:48.929728 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:48.930491 master-0 kubenswrapper[4169]: I0219 03:04:48.930342 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk"] Feb 19 03:04:48.934283 master-0 kubenswrapper[4169]: I0219 03:04:48.930891 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:48.934283 master-0 kubenswrapper[4169]: I0219 03:04:48.931969 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs"] Feb 19 03:04:48.934283 master-0 kubenswrapper[4169]: I0219 03:04:48.932698 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:48.938455 master-0 kubenswrapper[4169]: I0219 03:04:48.938392 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 03:04:48.940772 master-0 kubenswrapper[4169]: I0219 03:04:48.938489 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 03:04:48.948398 master-0 kubenswrapper[4169]: I0219 03:04:48.948295 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj"] Feb 19 03:04:48.948870 master-0 kubenswrapper[4169]: I0219 03:04:48.948831 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 03:04:48.948946 master-0 kubenswrapper[4169]: I0219 03:04:48.948917 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 03:04:48.948998 master-0 kubenswrapper[4169]: I0219 03:04:48.948933 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 03:04:48.949060 master-0 kubenswrapper[4169]: I0219 03:04:48.949043 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 03:04:48.949225 master-0 kubenswrapper[4169]: I0219 03:04:48.949198 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:04:48.949316 master-0 kubenswrapper[4169]: I0219 03:04:48.948924 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" Feb 19 03:04:48.949593 master-0 kubenswrapper[4169]: I0219 03:04:48.949446 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 03:04:48.949680 master-0 kubenswrapper[4169]: I0219 03:04:48.949604 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.953761 master-0 kubenswrapper[4169]: I0219 03:04:48.953714 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 03:04:48.955162 master-0 kubenswrapper[4169]: I0219 03:04:48.954855 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 03:04:48.955162 master-0 kubenswrapper[4169]: I0219 03:04:48.955114 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958370 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqt9k\" (UniqueName: \"kubernetes.io/projected/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-kube-api-access-nqt9k\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958431 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpdqx\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-kube-api-access-cpdqx\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958551 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958597 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c9ed390-3b62-4b81-8c03-0c579a4a686a-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958638 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n9vm\" (UniqueName: \"kubernetes.io/projected/c50a2aec-7ed0-4114-8b25-19579fe931cb-kube-api-access-7n9vm\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 
03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958671 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958701 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958736 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b9d54aa-5f71-4a82-8e71-401ed3083a13-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958768 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff96ce8-6427-4a42-afa6-8b8bc778f094-trusted-ca\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958802 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-bound-sa-token\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958818 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958870 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c9cb4a-5249-4116-a2e5-caa7859e2075-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958910 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9ed390-3b62-4b81-8c03-0c579a4a686a-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 
03:04:48.958955 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrksf\" (UniqueName: \"kubernetes.io/projected/05c9cb4a-5249-4116-a2e5-caa7859e2075-kube-api-access-qrksf\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:48.959503 master-0 kubenswrapper[4169]: I0219 03:04:48.958989 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txq5k\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-kube-api-access-txq5k\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959022 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959366 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c9cb4a-5249-4116-a2e5-caa7859e2075-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959397 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959603 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjwbx\" (UniqueName: \"kubernetes.io/projected/2b9d54aa-5f71-4a82-8e71-401ed3083a13-kube-api-access-vjwbx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959634 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959658 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6c9ed390-3b62-4b81-8c03-0c579a4a686a-config\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959684 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959705 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959727 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959749 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq27v\" (UniqueName: \"kubernetes.io/projected/98ac5423-b231-44e5-9545-424d635ed6ee-kube-api-access-bq27v\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959772 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a59746bb-7d76-4fd7-8323-5b92be63afb9-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959794 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grhdv\" (UniqueName: \"kubernetes.io/projected/58c6f5a2-c0a8-4636-a057-cedbe0151579-kube-api-access-grhdv\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959826 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9d54aa-5f71-4a82-8e71-401ed3083a13-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:48.960165 master-0 kubenswrapper[4169]: I0219 03:04:48.959850 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:48.960776 master-0 kubenswrapper[4169]: I0219 03:04:48.959873 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:48.960776 master-0 kubenswrapper[4169]: I0219 03:04:48.959897 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:48.960776 master-0 kubenswrapper[4169]: I0219 03:04:48.959921 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:48.964229 master-0 kubenswrapper[4169]: I0219 03:04:48.964169 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb"] Feb 19 03:04:48.965050 master-0 kubenswrapper[4169]: I0219 03:04:48.965003 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc"] Feb 19 03:04:48.965343 master-0 kubenswrapper[4169]: I0219 03:04:48.965284 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:48.965840 master-0 kubenswrapper[4169]: I0219 03:04:48.965818 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l"] Feb 19 03:04:48.966864 master-0 kubenswrapper[4169]: I0219 03:04:48.966787 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:48.967119 master-0 kubenswrapper[4169]: I0219 03:04:48.967094 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-jlnvw"] Feb 19 03:04:48.967183 master-0 kubenswrapper[4169]: I0219 03:04:48.967112 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 03:04:48.967231 master-0 kubenswrapper[4169]: I0219 03:04:48.967207 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:04:48.967467 master-0 kubenswrapper[4169]: I0219 03:04:48.967431 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:48.968829 master-0 kubenswrapper[4169]: I0219 03:04:48.968800 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7"] Feb 19 03:04:48.969157 master-0 kubenswrapper[4169]: I0219 03:04:48.969132 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:48.969315 master-0 kubenswrapper[4169]: I0219 03:04:48.969269 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t"] Feb 19 03:04:48.969385 master-0 kubenswrapper[4169]: I0219 03:04:48.969363 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:48.969778 master-0 kubenswrapper[4169]: I0219 03:04:48.969752 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:48.969901 master-0 kubenswrapper[4169]: I0219 03:04:48.969760 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 19 03:04:48.969901 master-0 kubenswrapper[4169]: I0219 03:04:48.969887 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 03:04:48.969984 master-0 kubenswrapper[4169]: I0219 03:04:48.969762 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 03:04:48.970050 master-0 kubenswrapper[4169]: I0219 03:04:48.970028 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 03:04:48.970050 master-0 kubenswrapper[4169]: I0219 03:04:48.970044 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 03:04:48.970050 master-0 kubenswrapper[4169]: I0219 03:04:48.970053 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 03:04:48.974023 master-0 kubenswrapper[4169]: I0219 03:04:48.970122 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.974023 master-0 kubenswrapper[4169]: I0219 03:04:48.970126 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 19 03:04:48.974023 master-0 kubenswrapper[4169]: I0219 03:04:48.970146 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 19 03:04:48.974023 master-0 kubenswrapper[4169]: I0219 03:04:48.970189 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 03:04:48.974023 master-0 kubenswrapper[4169]: I0219 03:04:48.970817 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 19 03:04:48.974023 master-0 kubenswrapper[4169]: I0219 03:04:48.971111 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.974023 master-0 kubenswrapper[4169]: I0219 03:04:48.971371 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 19 03:04:48.974023 master-0 kubenswrapper[4169]: I0219 03:04:48.971381 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.974484 master-0 kubenswrapper[4169]: I0219 03:04:48.974454 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 03:04:48.974521 master-0 kubenswrapper[4169]: I0219 03:04:48.974454 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 03:04:48.974561 master-0 kubenswrapper[4169]: I0219 03:04:48.974517 4169 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 03:04:48.974561 master-0 kubenswrapper[4169]: I0219 03:04:48.974546 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 03:04:48.974771 master-0 kubenswrapper[4169]: I0219 03:04:48.974694 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq"] Feb 19 03:04:48.974828 master-0 kubenswrapper[4169]: I0219 03:04:48.974798 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.974986 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.975095 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.975316 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.975321 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.975359 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.975615 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.975654 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.975666 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.975703 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l"] Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.975771 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.976023 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.976022 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 19 03:04:48.976179 master-0 kubenswrapper[4169]: I0219 03:04:48.976129 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:48.976713 master-0 kubenswrapper[4169]: I0219 03:04:48.976611 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv"] Feb 19 03:04:48.978223 master-0 kubenswrapper[4169]: I0219 03:04:48.977084 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:48.978223 master-0 kubenswrapper[4169]: I0219 03:04:48.977120 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 03:04:48.978223 master-0 kubenswrapper[4169]: I0219 03:04:48.977196 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 03:04:48.980805 master-0 kubenswrapper[4169]: I0219 03:04:48.980769 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.980902 master-0 kubenswrapper[4169]: I0219 03:04:48.980852 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 03:04:48.980902 master-0 kubenswrapper[4169]: I0219 03:04:48.980879 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 03:04:48.981126 master-0 kubenswrapper[4169]: I0219 03:04:48.981068 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p"] Feb 19 03:04:48.981126 master-0 kubenswrapper[4169]: I0219 03:04:48.981119 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 03:04:48.981205 master-0 kubenswrapper[4169]: I0219 03:04:48.981114 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 03:04:48.981511 master-0 kubenswrapper[4169]: I0219 03:04:48.981301 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 03:04:48.981511 master-0 kubenswrapper[4169]: I0219 03:04:48.981431 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 03:04:48.981748 master-0 kubenswrapper[4169]: I0219 03:04:48.981704 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 03:04:48.981748 master-0 kubenswrapper[4169]: I0219 03:04:48.981705 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:48.981820 master-0 kubenswrapper[4169]: I0219 03:04:48.981771 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 03:04:48.982086 master-0 kubenswrapper[4169]: I0219 03:04:48.982047 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq"] Feb 19 03:04:48.982479 master-0 kubenswrapper[4169]: I0219 03:04:48.982452 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:48.982778 master-0 kubenswrapper[4169]: I0219 03:04:48.982755 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-qcd49"] Feb 19 03:04:48.983543 master-0 kubenswrapper[4169]: I0219 03:04:48.983517 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v"] Feb 19 03:04:48.991799 master-0 kubenswrapper[4169]: I0219 03:04:48.991479 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc"] Feb 19 03:04:48.992659 master-0 kubenswrapper[4169]: I0219 03:04:48.992548 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 03:04:48.992717 master-0 kubenswrapper[4169]: I0219 03:04:48.992668 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 03:04:48.992752 master-0 kubenswrapper[4169]: I0219 03:04:48.992736 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 19 03:04:48.992945 master-0 kubenswrapper[4169]: I0219 03:04:48.992832 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.993942 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.994107 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk"] Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.994147 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.994371 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.994561 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.994689 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.994857 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.994965 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=1.9949523949999999 podStartE2EDuration="1.994952395s" podCreationTimestamp="2026-02-19 03:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:04:48.994134363 +0000 UTC m=+112.740326098" watchObservedRunningTime="2026-02-19 03:04:48.994952395 +0000 UTC m=+112.741144130" Feb 19 03:04:48.997414 master-0 
kubenswrapper[4169]: I0219 03:04:48.996312 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.996471 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.996657 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.996756 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb"] Feb 19 03:04:48.997414 master-0 kubenswrapper[4169]: I0219 03:04:48.996779 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l"] Feb 19 03:04:48.998003 master-0 kubenswrapper[4169]: I0219 03:04:48.997974 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq"] Feb 19 03:04:48.998003 master-0 kubenswrapper[4169]: I0219 03:04:48.997997 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc"] Feb 19 03:04:48.998598 master-0 kubenswrapper[4169]: I0219 03:04:48.998581 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9"] Feb 19 03:04:48.999270 master-0 kubenswrapper[4169]: I0219 03:04:48.999215 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq"] Feb 19 03:04:48.999929 master-0 kubenswrapper[4169]: I0219 03:04:48.999891 4169 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-kvvll"] Feb 19 03:04:49.000337 master-0 kubenswrapper[4169]: I0219 03:04:49.000319 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:49.001496 master-0 kubenswrapper[4169]: I0219 03:04:49.001444 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-xxdh5"] Feb 19 03:04:49.002947 master-0 kubenswrapper[4169]: I0219 03:04:49.002911 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8"] Feb 19 03:04:49.002947 master-0 kubenswrapper[4169]: I0219 03:04:49.002931 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh"] Feb 19 03:04:49.003048 master-0 kubenswrapper[4169]: I0219 03:04:49.002970 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs"] Feb 19 03:04:49.003868 master-0 kubenswrapper[4169]: I0219 03:04:49.003787 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7"] Feb 19 03:04:49.005732 master-0 kubenswrapper[4169]: I0219 03:04:49.004448 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-jlnvw"] Feb 19 03:04:49.005732 master-0 kubenswrapper[4169]: I0219 03:04:49.005269 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj"] Feb 19 03:04:49.007578 master-0 kubenswrapper[4169]: I0219 03:04:49.006395 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t"] Feb 19 03:04:49.007578 master-0 kubenswrapper[4169]: I0219 03:04:49.006886 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv"] Feb 19 03:04:49.007793 master-0 kubenswrapper[4169]: I0219 03:04:49.007766 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8"] Feb 19 03:04:49.008654 master-0 kubenswrapper[4169]: I0219 03:04:49.008562 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p"] Feb 19 03:04:49.009181 master-0 kubenswrapper[4169]: I0219 03:04:49.009142 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l"] Feb 19 03:04:49.010129 master-0 kubenswrapper[4169]: I0219 03:04:49.010091 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 03:04:49.062355 master-0 kubenswrapper[4169]: I0219 03:04:49.062296 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:49.062355 master-0 kubenswrapper[4169]: I0219 03:04:49.062352 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdxnk\" (UniqueName: \"kubernetes.io/projected/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-kube-api-access-vdxnk\") pod 
\"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.062546 master-0 kubenswrapper[4169]: I0219 03:04:49.062384 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhmpd\" (UniqueName: \"kubernetes.io/projected/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2-kube-api-access-dhmpd\") pod \"csi-snapshot-controller-operator-6fb4df594f-mtqxj\" (UID: \"d6fae256-6a2e-45e7-8f2f-d471f46ad3b2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" Feb 19 03:04:49.062546 master-0 kubenswrapper[4169]: I0219 03:04:49.062403 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:49.062546 master-0 kubenswrapper[4169]: I0219 03:04:49.062430 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9d54aa-5f71-4a82-8e71-401ed3083a13-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:49.062546 master-0 kubenswrapper[4169]: I0219 03:04:49.062449 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dlvj\" (UniqueName: \"kubernetes.io/projected/80c48134-cb22-4cf9-b076-ce39af2f4113-kube-api-access-2dlvj\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:49.062546 master-0 kubenswrapper[4169]: I0219 03:04:49.062528 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:49.062734 master-0 kubenswrapper[4169]: I0219 03:04:49.062571 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:49.062734 master-0 kubenswrapper[4169]: I0219 03:04:49.062605 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.062734 master-0 
kubenswrapper[4169]: I0219 03:04:49.062629 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edc7410-417a-4e55-9276-ac271fd52297-config\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:49.062734 master-0 kubenswrapper[4169]: I0219 03:04:49.062655 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.062734 master-0 kubenswrapper[4169]: I0219 03:04:49.062680 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p8qd\" (UniqueName: \"kubernetes.io/projected/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-kube-api-access-8p8qd\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:49.062734 master-0 kubenswrapper[4169]: I0219 03:04:49.062705 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.062734 master-0 kubenswrapper[4169]: I0219 03:04:49.062730 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:49.063103 master-0 kubenswrapper[4169]: I0219 03:04:49.062759 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqsbq\" (UniqueName: \"kubernetes.io/projected/67f4e002-26fb-41e3-abdb-f4928b6c561f-kube-api-access-wqsbq\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:49.063103 master-0 kubenswrapper[4169]: I0219 03:04:49.062803 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4714ef51-2d24-4938-8c58-80c1485a368b-config\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:49.063103 master-0 kubenswrapper[4169]: I0219 03:04:49.062827 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4714ef51-2d24-4938-8c58-80c1485a368b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:49.063103 master-0 kubenswrapper[4169]: I0219 03:04:49.062849 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:49.063103 master-0 kubenswrapper[4169]: I0219 03:04:49.062868 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6j8c\" (UniqueName: \"kubernetes.io/projected/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-kube-api-access-k6j8c\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.063103 master-0 kubenswrapper[4169]: I0219 03:04:49.062894 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4714ef51-2d24-4938-8c58-80c1485a368b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:49.063103 master-0 kubenswrapper[4169]: I0219 03:04:49.062915 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:49.063103 master-0 kubenswrapper[4169]: I0219 03:04:49.062940 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.063103 master-0 kubenswrapper[4169]: I0219 03:04:49.062967 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqt9k\" (UniqueName: \"kubernetes.io/projected/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-kube-api-access-nqt9k\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:49.063103 master-0 kubenswrapper[4169]: I0219 03:04:49.062987 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-config\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.063808 master-0 kubenswrapper[4169]: I0219 03:04:49.063574 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9d54aa-5f71-4a82-8e71-401ed3083a13-config\") pod 
\"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:49.063808 master-0 kubenswrapper[4169]: I0219 03:04:49.063700 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpdqx\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-kube-api-access-cpdqx\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:49.063808 master-0 kubenswrapper[4169]: I0219 03:04:49.063763 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:49.063983 master-0 kubenswrapper[4169]: I0219 03:04:49.063944 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:49.064018 master-0 kubenswrapper[4169]: E0219 03:04:49.063992 4169 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 19 03:04:49.064018 master-0 kubenswrapper[4169]: I0219 03:04:49.064005 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n9vm\" (UniqueName: \"kubernetes.io/projected/c50a2aec-7ed0-4114-8b25-19579fe931cb-kube-api-access-7n9vm\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:49.064075 master-0 kubenswrapper[4169]: E0219 03:04:49.064058 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls podName:a59746bb-7d76-4fd7-8323-5b92be63afb9 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:49.564036728 +0000 UTC m=+113.310228463 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-cfdqh" (UID: "a59746bb-7d76-4fd7-8323-5b92be63afb9") : secret "image-registry-operator-tls" not found Feb 19 03:04:49.064190 master-0 kubenswrapper[4169]: I0219 03:04:49.064160 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c9ed390-3b62-4b81-8c03-0c579a4a686a-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:49.064236 master-0 kubenswrapper[4169]: I0219 03:04:49.064210 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/decd8c56-e0f0-4119-917f-56652c8f8372-iptables-alerter-script\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:49.064303 master-0 kubenswrapper[4169]: I0219 03:04:49.064248 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-serving-cert\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:49.064303 master-0 kubenswrapper[4169]: I0219 03:04:49.064292 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:49.064616 master-0 kubenswrapper[4169]: I0219 03:04:49.064513 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:49.064677 master-0 kubenswrapper[4169]: E0219 03:04:49.064625 4169 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:49.064677 master-0 kubenswrapper[4169]: I0219 03:04:49.064627 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:49.064866 master-0 kubenswrapper[4169]: E0219 03:04:49.064831 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls podName:9ff96ce8-6427-4a42-afa6-8b8bc778f094 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:04:49.564682465 +0000 UTC m=+113.310874200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls") pod "ingress-operator-6569778c84-qcd49" (UID: "9ff96ce8-6427-4a42-afa6-8b8bc778f094") : secret "metrics-tls" not found Feb 19 03:04:49.064920 master-0 kubenswrapper[4169]: I0219 03:04:49.064879 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b9d54aa-5f71-4a82-8e71-401ed3083a13-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:49.064958 master-0 kubenswrapper[4169]: I0219 03:04:49.064942 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff96ce8-6427-4a42-afa6-8b8bc778f094-trusted-ca\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:49.065073 master-0 kubenswrapper[4169]: I0219 03:04:49.064982 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:49.065119 master-0 kubenswrapper[4169]: I0219 03:04:49.065077 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tqm5\" (UniqueName: \"kubernetes.io/projected/decd8c56-e0f0-4119-917f-56652c8f8372-kube-api-access-8tqm5\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:49.065156 master-0 kubenswrapper[4169]: I0219 03:04:49.065140 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-bound-sa-token\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:49.065194 master-0 kubenswrapper[4169]: I0219 03:04:49.065165 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn9d8\" (UniqueName: \"kubernetes.io/projected/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-kube-api-access-rn9d8\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:49.065230 master-0 kubenswrapper[4169]: I0219 03:04:49.065187 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-serving-cert\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 
03:04:49.065230 master-0 kubenswrapper[4169]: I0219 03:04:49.065215 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-serving-cert\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.065333 master-0 kubenswrapper[4169]: I0219 03:04:49.065264 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl7k7\" (UniqueName: \"kubernetes.io/projected/947faa21-7f67-4c7e-abb0-443432f38961-kube-api-access-jl7k7\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:49.065333 master-0 kubenswrapper[4169]: I0219 03:04:49.065289 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76css\" (UniqueName: \"kubernetes.io/projected/b283bd8e-3339-4701-ae3c-f009e498b7d4-kube-api-access-76css\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:49.065564 master-0 kubenswrapper[4169]: I0219 03:04:49.065374 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c9cb4a-5249-4116-a2e5-caa7859e2075-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:49.065564 master-0 kubenswrapper[4169]: I0219 03:04:49.065428 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9ed390-3b62-4b81-8c03-0c579a4a686a-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:49.065564 master-0 kubenswrapper[4169]: I0219 03:04:49.065540 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.065670 master-0 kubenswrapper[4169]: I0219 03:04:49.065582 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj4rq\" (UniqueName: \"kubernetes.io/projected/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-kube-api-access-mj4rq\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.065670 master-0 kubenswrapper[4169]: I0219 03:04:49.065647 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3edc7410-417a-4e55-9276-ac271fd52297-serving-cert\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:49.066270 master-0 kubenswrapper[4169]: I0219 03:04:49.065743 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrksf\" (UniqueName: \"kubernetes.io/projected/05c9cb4a-5249-4116-a2e5-caa7859e2075-kube-api-access-qrksf\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:49.066270 master-0 kubenswrapper[4169]: I0219 03:04:49.065834 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/decd8c56-e0f0-4119-917f-56652c8f8372-host-slash\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:49.066270 master-0 kubenswrapper[4169]: I0219 03:04:49.066105 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff96ce8-6427-4a42-afa6-8b8bc778f094-trusted-ca\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:49.066270 master-0 kubenswrapper[4169]: I0219 03:04:49.066116 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:49.066531 master-0 kubenswrapper[4169]: I0219 03:04:49.065883 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txq5k\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-kube-api-access-txq5k\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:49.066577 master-0 kubenswrapper[4169]: I0219 03:04:49.066541 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.066614 master-0 kubenswrapper[4169]: I0219 03:04:49.066600 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c9cb4a-5249-4116-a2e5-caa7859e2075-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:49.066646 master-0 kubenswrapper[4169]: I0219 03:04:49.066621 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:49.066646 master-0 kubenswrapper[4169]: I0219 03:04:49.066641 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-config\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:49.066718 master-0 kubenswrapper[4169]: I0219 03:04:49.066676 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:49.066718 master-0 kubenswrapper[4169]: I0219 03:04:49.066698 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/80c48134-cb22-4cf9-b076-ce39af2f4113-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:49.066788 master-0 kubenswrapper[4169]: I0219 03:04:49.066736 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjwbx\" (UniqueName: \"kubernetes.io/projected/2b9d54aa-5f71-4a82-8e71-401ed3083a13-kube-api-access-vjwbx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:49.066788 master-0 kubenswrapper[4169]: I0219 03:04:49.066773 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:49.066860 master-0 kubenswrapper[4169]: I0219 03:04:49.066808 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-config\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.066901 master-0 kubenswrapper[4169]: I0219 03:04:49.066858 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9ed390-3b62-4b81-8c03-0c579a4a686a-config\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:49.066941 master-0 kubenswrapper[4169]: I0219 
03:04:49.066895 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:49.066941 master-0 kubenswrapper[4169]: I0219 03:04:49.066929 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:49.067115 master-0 kubenswrapper[4169]: I0219 03:04:49.067088 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-client\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.067167 master-0 kubenswrapper[4169]: E0219 03:04:49.067115 4169 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 19 03:04:49.067167 master-0 kubenswrapper[4169]: E0219 03:04:49.067126 4169 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 19 03:04:49.067238 master-0 kubenswrapper[4169]: E0219 03:04:49.067174 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert podName:c50a2aec-7ed0-4114-8b25-19579fe931cb nodeName:}" failed. No retries permitted until 2026-02-19 03:04:49.567160152 +0000 UTC m=+113.313351887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert") pod "catalog-operator-596f79dd6f-sbzsk" (UID: "c50a2aec-7ed0-4114-8b25-19579fe931cb") : secret "catalog-operator-serving-cert" not found Feb 19 03:04:49.067238 master-0 kubenswrapper[4169]: I0219 03:04:49.067219 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:49.067341 master-0 kubenswrapper[4169]: E0219 03:04:49.067233 4169 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 19 03:04:49.067341 master-0 kubenswrapper[4169]: E0219 03:04:49.067281 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics podName:58c6f5a2-c0a8-4636-a057-cedbe0151579 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:49.567250194 +0000 UTC m=+113.313441929 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-xxdh5" (UID: "58c6f5a2-c0a8-4636-a057-cedbe0151579") : secret "marketplace-operator-metrics" not found Feb 19 03:04:49.067341 master-0 kubenswrapper[4169]: E0219 03:04:49.067318 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert podName:98ac5423-b231-44e5-9545-424d635ed6ee nodeName:}" failed. No retries permitted until 2026-02-19 03:04:49.567312326 +0000 UTC m=+113.313504061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tbg8" (UID: "98ac5423-b231-44e5-9545-424d635ed6ee") : secret "package-server-manager-serving-cert" not found Feb 19 03:04:49.067341 master-0 kubenswrapper[4169]: I0219 03:04:49.067339 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.067480 master-0 kubenswrapper[4169]: I0219 03:04:49.067374 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:49.067480 master-0 kubenswrapper[4169]: I0219 03:04:49.067400 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq27v\" (UniqueName: \"kubernetes.io/projected/98ac5423-b231-44e5-9545-424d635ed6ee-kube-api-access-bq27v\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:49.067480 master-0 kubenswrapper[4169]: I0219 03:04:49.067420 4169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzpth\" (UniqueName: \"kubernetes.io/projected/3edc7410-417a-4e55-9276-ac271fd52297-kube-api-access-vzpth\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:49.067480 master-0 kubenswrapper[4169]: I0219 03:04:49.067439 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a59746bb-7d76-4fd7-8323-5b92be63afb9-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:49.067480 master-0 kubenswrapper[4169]: I0219 03:04:49.067456 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-grhdv\" (UniqueName: \"kubernetes.io/projected/58c6f5a2-c0a8-4636-a057-cedbe0151579-kube-api-access-grhdv\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:49.067669 master-0 kubenswrapper[4169]: I0219 03:04:49.067603 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:49.067704 master-0 kubenswrapper[4169]: I0219 03:04:49.067668 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9ed390-3b62-4b81-8c03-0c579a4a686a-config\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:49.067704 master-0 kubenswrapper[4169]: I0219 03:04:49.067693 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:49.068337 master-0 kubenswrapper[4169]: I0219 03:04:49.068300 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b9d54aa-5f71-4a82-8e71-401ed3083a13-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:49.068438 master-0 kubenswrapper[4169]: I0219 03:04:49.068404 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c9cb4a-5249-4116-a2e5-caa7859e2075-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:49.068816 master-0 kubenswrapper[4169]: I0219 03:04:49.068782 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a59746bb-7d76-4fd7-8323-5b92be63afb9-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:49.069126 master-0 kubenswrapper[4169]: I0219 03:04:49.069093 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c9cb4a-5249-4116-a2e5-caa7859e2075-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:49.069190 master-0 kubenswrapper[4169]: I0219 03:04:49.069086 4169 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:49.069277 master-0 kubenswrapper[4169]: I0219 03:04:49.069178 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:49.069560 master-0 kubenswrapper[4169]: I0219 03:04:49.069535 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:49.071536 master-0 kubenswrapper[4169]: I0219 03:04:49.071102 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9ed390-3b62-4b81-8c03-0c579a4a686a-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:49.099454 master-0 kubenswrapper[4169]: I0219 03:04:49.099324 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n9vm\" (UniqueName: \"kubernetes.io/projected/c50a2aec-7ed0-4114-8b25-19579fe931cb-kube-api-access-7n9vm\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:49.100522 master-0 kubenswrapper[4169]: I0219 03:04:49.100479 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqt9k\" (UniqueName: \"kubernetes.io/projected/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-kube-api-access-nqt9k\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:49.101397 master-0 kubenswrapper[4169]: I0219 03:04:49.101362 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:49.103544 master-0 kubenswrapper[4169]: I0219 03:04:49.103510 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjwbx\" (UniqueName: \"kubernetes.io/projected/2b9d54aa-5f71-4a82-8e71-401ed3083a13-kube-api-access-vjwbx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:49.103609 master-0 kubenswrapper[4169]: I0219 03:04:49.103537 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:49.103609 master-0 kubenswrapper[4169]: I0219 03:04:49.103592 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrksf\" (UniqueName: \"kubernetes.io/projected/05c9cb4a-5249-4116-a2e5-caa7859e2075-kube-api-access-qrksf\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:49.103723 master-0 kubenswrapper[4169]: I0219 03:04:49.103659 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-bound-sa-token\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:49.103723 master-0 kubenswrapper[4169]: I0219 03:04:49.103678 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txq5k\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-kube-api-access-txq5k\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:49.103787 master-0 kubenswrapper[4169]: I0219 03:04:49.103742 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq27v\" (UniqueName: \"kubernetes.io/projected/98ac5423-b231-44e5-9545-424d635ed6ee-kube-api-access-bq27v\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:49.103924 master-0 kubenswrapper[4169]: I0219 03:04:49.103890 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhdv\" (UniqueName: \"kubernetes.io/projected/58c6f5a2-c0a8-4636-a057-cedbe0151579-kube-api-access-grhdv\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:49.104400 master-0 kubenswrapper[4169]: I0219 03:04:49.104377 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c9ed390-3b62-4b81-8c03-0c579a4a686a-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:49.105206 master-0 kubenswrapper[4169]: I0219 03:04:49.105172 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpdqx\" (UniqueName: 
\"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-kube-api-access-cpdqx\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:49.168744 master-0 kubenswrapper[4169]: I0219 03:04:49.168703 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:49.168837 master-0 kubenswrapper[4169]: I0219 03:04:49.168751 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tqm5\" (UniqueName: \"kubernetes.io/projected/decd8c56-e0f0-4119-917f-56652c8f8372-kube-api-access-8tqm5\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:49.168837 master-0 kubenswrapper[4169]: I0219 03:04:49.168771 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:49.168837 master-0 kubenswrapper[4169]: I0219 03:04:49.168788 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn9d8\" (UniqueName: \"kubernetes.io/projected/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-kube-api-access-rn9d8\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:49.168976 master-0 kubenswrapper[4169]: E0219 03:04:49.168936 4169 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:49.169036 master-0 kubenswrapper[4169]: E0219 03:04:49.169022 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls podName:67f4e002-26fb-41e3-abdb-f4928b6c561f nodeName:}" failed. No retries permitted until 2026-02-19 03:04:49.668997833 +0000 UTC m=+113.415189568 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls") pod "dns-operator-8c7d49845-jlnvw" (UID: "67f4e002-26fb-41e3-abdb-f4928b6c561f") : secret "metrics-tls" not found Feb 19 03:04:49.169245 master-0 kubenswrapper[4169]: I0219 03:04:49.169203 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-serving-cert\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.169317 master-0 kubenswrapper[4169]: I0219 03:04:49.169271 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-serving-cert\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.169317 master-0 kubenswrapper[4169]: I0219 03:04:49.169307 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl7k7\" (UniqueName: \"kubernetes.io/projected/947faa21-7f67-4c7e-abb0-443432f38961-kube-api-access-jl7k7\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:49.169401 master-0 kubenswrapper[4169]: I0219 03:04:49.169319 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:49.169401 master-0 kubenswrapper[4169]: I0219 03:04:49.169334 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76css\" (UniqueName: \"kubernetes.io/projected/b283bd8e-3339-4701-ae3c-f009e498b7d4-kube-api-access-76css\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:49.169478 master-0 kubenswrapper[4169]: I0219 03:04:49.169406 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.169478 master-0 kubenswrapper[4169]: I0219 03:04:49.169430 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj4rq\" (UniqueName: \"kubernetes.io/projected/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-kube-api-access-mj4rq\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.169478 master-0 kubenswrapper[4169]: I0219 03:04:49.169458 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/3edc7410-417a-4e55-9276-ac271fd52297-serving-cert\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:49.169478 master-0 kubenswrapper[4169]: I0219 03:04:49.169479 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/decd8c56-e0f0-4119-917f-56652c8f8372-host-slash\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:49.169612 master-0 kubenswrapper[4169]: I0219 03:04:49.169501 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.169612 master-0 kubenswrapper[4169]: I0219 03:04:49.169525 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-config\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:49.169826 master-0 kubenswrapper[4169]: I0219 03:04:49.169773 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/decd8c56-e0f0-4119-917f-56652c8f8372-host-slash\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:49.170022 master-0 kubenswrapper[4169]: I0219 03:04:49.169993 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.170095 master-0 kubenswrapper[4169]: E0219 03:04:49.170052 4169 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:49.170180 master-0 kubenswrapper[4169]: E0219 03:04:49.170162 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:49.670117093 +0000 UTC m=+113.416308838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:49.170228 master-0 kubenswrapper[4169]: I0219 03:04:49.170209 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/80c48134-cb22-4cf9-b076-ce39af2f4113-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:49.170360 master-0 kubenswrapper[4169]: I0219 03:04:49.170341 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-config\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.170429 master-0 kubenswrapper[4169]: I0219 03:04:49.170410 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-client\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.170464 master-0 kubenswrapper[4169]: I0219 03:04:49.170444 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.170516 master-0 kubenswrapper[4169]: I0219 03:04:49.170498 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:49.170548 master-0 kubenswrapper[4169]: I0219 03:04:49.170531 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzpth\" (UniqueName: \"kubernetes.io/projected/3edc7410-417a-4e55-9276-ac271fd52297-kube-api-access-vzpth\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:49.170620 master-0 kubenswrapper[4169]: I0219 03:04:49.170585 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:49.170651 master-0 kubenswrapper[4169]: I0219 03:04:49.170637 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-vdxnk\" (UniqueName: \"kubernetes.io/projected/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-kube-api-access-vdxnk\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.170686 master-0 kubenswrapper[4169]: I0219 03:04:49.170664 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhmpd\" (UniqueName: \"kubernetes.io/projected/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2-kube-api-access-dhmpd\") pod \"csi-snapshot-controller-operator-6fb4df594f-mtqxj\" (UID: \"d6fae256-6a2e-45e7-8f2f-d471f46ad3b2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" Feb 19 03:04:49.170716 master-0 kubenswrapper[4169]: I0219 03:04:49.170688 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:49.170745 master-0 kubenswrapper[4169]: I0219 03:04:49.170716 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-config\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:49.170745 master-0 kubenswrapper[4169]: I0219 03:04:49.170730 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dlvj\" (UniqueName: \"kubernetes.io/projected/80c48134-cb22-4cf9-b076-ce39af2f4113-kube-api-access-2dlvj\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.170776 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: E0219 03:04:49.170785 4169 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: E0219 03:04:49.170823 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs podName:947faa21-7f67-4c7e-abb0-443432f38961 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:49.670810201 +0000 UTC m=+113.417001936 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-q8pfv" (UID: "947faa21-7f67-4c7e-abb0-443432f38961") : secret "multus-admission-controller-secret" not found Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.170826 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edc7410-417a-4e55-9276-ac271fd52297-config\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.170884 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.170904 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p8qd\" (UniqueName: \"kubernetes.io/projected/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-kube-api-access-8p8qd\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.170921 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.170938 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqsbq\" (UniqueName: \"kubernetes.io/projected/67f4e002-26fb-41e3-abdb-f4928b6c561f-kube-api-access-wqsbq\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.170957 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4714ef51-2d24-4938-8c58-80c1485a368b-config\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.170974 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4714ef51-2d24-4938-8c58-80c1485a368b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.170999 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6j8c\" 
(UniqueName: \"kubernetes.io/projected/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-kube-api-access-k6j8c\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.171017 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4714ef51-2d24-4938-8c58-80c1485a368b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.171033 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.171051 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.171182 master-0 kubenswrapper[4169]: I0219 03:04:49.171083 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-config\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.171880 master-0 kubenswrapper[4169]: I0219 03:04:49.171184 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:49.171880 master-0 kubenswrapper[4169]: I0219 03:04:49.171208 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/decd8c56-e0f0-4119-917f-56652c8f8372-iptables-alerter-script\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:49.171880 master-0 kubenswrapper[4169]: I0219 03:04:49.171289 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-serving-cert\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:49.171880 master-0 kubenswrapper[4169]: I0219 03:04:49.171778 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edc7410-417a-4e55-9276-ac271fd52297-config\") pod 
\"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:49.172112 master-0 kubenswrapper[4169]: I0219 03:04:49.172082 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.172327 master-0 kubenswrapper[4169]: E0219 03:04:49.172302 4169 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 19 03:04:49.172379 master-0 kubenswrapper[4169]: E0219 03:04:49.172349 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert podName:b283bd8e-3339-4701-ae3c-f009e498b7d4 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:49.672336762 +0000 UTC m=+113.418528507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert") pod "olm-operator-5499d7f7bb-kk77t" (UID: "b283bd8e-3339-4701-ae3c-f009e498b7d4") : secret "olm-operator-serving-cert" not found Feb 19 03:04:49.172436 master-0 kubenswrapper[4169]: E0219 03:04:49.172399 4169 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 19 03:04:49.172436 master-0 kubenswrapper[4169]: I0219 03:04:49.172410 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-config\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.172436 master-0 kubenswrapper[4169]: E0219 03:04:49.172426 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:49.672417755 +0000 UTC m=+113.418609500 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "node-tuning-operator-tls" not found Feb 19 03:04:49.172842 master-0 kubenswrapper[4169]: I0219 03:04:49.172804 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/80c48134-cb22-4cf9-b076-ce39af2f4113-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:49.172902 master-0 kubenswrapper[4169]: I0219 03:04:49.172883 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-serving-cert\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.173068 master-0 kubenswrapper[4169]: I0219 03:04:49.173034 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-serving-cert\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.173204 master-0 kubenswrapper[4169]: I0219 03:04:49.173144 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-config\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.173358 master-0 kubenswrapper[4169]: E0219 03:04:49.173329 4169 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:49.173410 master-0 kubenswrapper[4169]: E0219 03:04:49.173390 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls podName:80c48134-cb22-4cf9-b076-ce39af2f4113 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:49.67337117 +0000 UTC m=+113.419563125 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-2vmxq" (UID: "80c48134-cb22-4cf9-b076-ce39af2f4113") : secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:49.173817 master-0 kubenswrapper[4169]: I0219 03:04:49.173786 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.174024 master-0 kubenswrapper[4169]: I0219 03:04:49.173992 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4714ef51-2d24-4938-8c58-80c1485a368b-config\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:49.174084 master-0 kubenswrapper[4169]: I0219 03:04:49.174021 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.174513 master-0 kubenswrapper[4169]: I0219 03:04:49.174482 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.174580 master-0 kubenswrapper[4169]: I0219 03:04:49.174546 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/decd8c56-e0f0-4119-917f-56652c8f8372-iptables-alerter-script\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:49.175203 master-0 kubenswrapper[4169]: I0219 03:04:49.175164 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-client\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.175825 master-0 kubenswrapper[4169]: I0219 03:04:49.175793 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3edc7410-417a-4e55-9276-ac271fd52297-serving-cert\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:49.176036 master-0 kubenswrapper[4169]: I0219 03:04:49.176007 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-serving-cert\") 
pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:49.176085 master-0 kubenswrapper[4169]: I0219 03:04:49.176063 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:49.176940 master-0 kubenswrapper[4169]: I0219 03:04:49.176710 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:49.177277 master-0 kubenswrapper[4169]: I0219 03:04:49.177220 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4714ef51-2d24-4938-8c58-80c1485a368b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:49.213301 master-0 kubenswrapper[4169]: I0219 03:04:49.210523 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn9d8\" (UniqueName: \"kubernetes.io/projected/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-kube-api-access-rn9d8\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:49.219673 master-0 kubenswrapper[4169]: I0219 03:04:49.219639 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tqm5\" (UniqueName: \"kubernetes.io/projected/decd8c56-e0f0-4119-917f-56652c8f8372-kube-api-access-8tqm5\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:49.243656 master-0 kubenswrapper[4169]: I0219 03:04:49.243625 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76css\" (UniqueName: \"kubernetes.io/projected/b283bd8e-3339-4701-ae3c-f009e498b7d4-kube-api-access-76css\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:49.261650 master-0 kubenswrapper[4169]: I0219 03:04:49.261605 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:49.262867 master-0 kubenswrapper[4169]: I0219 03:04:49.262822 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj4rq\" (UniqueName: \"kubernetes.io/projected/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-kube-api-access-mj4rq\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.279872 master-0 kubenswrapper[4169]: I0219 03:04:49.279828 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:49.282162 master-0 kubenswrapper[4169]: I0219 03:04:49.282112 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl7k7\" (UniqueName: \"kubernetes.io/projected/947faa21-7f67-4c7e-abb0-443432f38961-kube-api-access-jl7k7\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:49.306573 master-0 kubenswrapper[4169]: I0219 03:04:49.306511 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxnk\" (UniqueName: \"kubernetes.io/projected/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-kube-api-access-vdxnk\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.322070 master-0 kubenswrapper[4169]: I0219 03:04:49.322009 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dlvj\" (UniqueName: \"kubernetes.io/projected/80c48134-cb22-4cf9-b076-ce39af2f4113-kube-api-access-2dlvj\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:49.324694 master-0 kubenswrapper[4169]: I0219 03:04:49.324656 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:49.341523 master-0 kubenswrapper[4169]: I0219 03:04:49.341468 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4714ef51-2d24-4938-8c58-80c1485a368b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:49.359103 master-0 kubenswrapper[4169]: I0219 03:04:49.359033 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:49.365850 master-0 kubenswrapper[4169]: I0219 03:04:49.365777 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p8qd\" (UniqueName: \"kubernetes.io/projected/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-kube-api-access-8p8qd\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:49.369030 master-0 kubenswrapper[4169]: I0219 03:04:49.368597 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:49.389671 master-0 kubenswrapper[4169]: I0219 03:04:49.389639 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzpth\" (UniqueName: \"kubernetes.io/projected/3edc7410-417a-4e55-9276-ac271fd52297-kube-api-access-vzpth\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:49.408758 master-0 kubenswrapper[4169]: I0219 03:04:49.408707 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6j8c\" (UniqueName: \"kubernetes.io/projected/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-kube-api-access-k6j8c\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.433090 master-0 kubenswrapper[4169]: I0219 03:04:49.430426 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqsbq\" (UniqueName: \"kubernetes.io/projected/67f4e002-26fb-41e3-abdb-f4928b6c561f-kube-api-access-wqsbq\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:49.446161 master-0 kubenswrapper[4169]: I0219 03:04:49.443474 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:49.448004 master-0 kubenswrapper[4169]: I0219 03:04:49.447329 4169 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhmpd\" (UniqueName: \"kubernetes.io/projected/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2-kube-api-access-dhmpd\") pod \"csi-snapshot-controller-operator-6fb4df594f-mtqxj\" (UID: \"d6fae256-6a2e-45e7-8f2f-d471f46ad3b2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" Feb 19 03:04:49.448337 master-0 kubenswrapper[4169]: I0219 03:04:49.448308 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:49.459617 master-0 kubenswrapper[4169]: I0219 03:04:49.459573 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:49.474524 master-0 kubenswrapper[4169]: I0219 03:04:49.474464 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc"] Feb 19 03:04:49.486882 master-0 kubenswrapper[4169]: I0219 03:04:49.483942 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:49.488634 master-0 kubenswrapper[4169]: I0219 03:04:49.488593 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v"] Feb 19 03:04:49.492866 master-0 kubenswrapper[4169]: I0219 03:04:49.491349 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:49.503079 master-0 kubenswrapper[4169]: W0219 03:04:49.500913 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05c9cb4a_5249_4116_a2e5_caa7859e2075.slice/crio-40c5200e9b9335dc4fde8e4b8c2702394db4fe9784008c565be0de314808268d WatchSource:0}: Error finding container 40c5200e9b9335dc4fde8e4b8c2702394db4fe9784008c565be0de314808268d: Status 404 returned error can't find the container with id 40c5200e9b9335dc4fde8e4b8c2702394db4fe9784008c565be0de314808268d Feb 19 03:04:49.504065 master-0 kubenswrapper[4169]: I0219 03:04:49.504027 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:49.516500 master-0 kubenswrapper[4169]: I0219 03:04:49.516444 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:49.557557 master-0 kubenswrapper[4169]: I0219 03:04:49.554673 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8"] Feb 19 03:04:49.576619 master-0 kubenswrapper[4169]: I0219 03:04:49.576447 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:49.576619 master-0 kubenswrapper[4169]: I0219 03:04:49.576499 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:49.576619 master-0 kubenswrapper[4169]: I0219 03:04:49.576534 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:49.576619 master-0 kubenswrapper[4169]: I0219 03:04:49.576610 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:49.576843 master-0 kubenswrapper[4169]: I0219 03:04:49.576638 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:49.577097 master-0 kubenswrapper[4169]: E0219 03:04:49.577054 4169 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:49.577416 master-0 kubenswrapper[4169]: E0219 03:04:49.577135 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls podName:9ff96ce8-6427-4a42-afa6-8b8bc778f094 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:50.577098327 +0000 UTC m=+114.323290062 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls") pod "ingress-operator-6569778c84-qcd49" (UID: "9ff96ce8-6427-4a42-afa6-8b8bc778f094") : secret "metrics-tls" not found Feb 19 03:04:49.581585 master-0 kubenswrapper[4169]: E0219 03:04:49.581280 4169 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 19 03:04:49.581585 master-0 kubenswrapper[4169]: E0219 03:04:49.581423 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert podName:c50a2aec-7ed0-4114-8b25-19579fe931cb nodeName:}" failed. No retries permitted until 2026-02-19 03:04:50.58133981 +0000 UTC m=+114.327531545 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert") pod "catalog-operator-596f79dd6f-sbzsk" (UID: "c50a2aec-7ed0-4114-8b25-19579fe931cb") : secret "catalog-operator-serving-cert" not found Feb 19 03:04:49.581656 master-0 kubenswrapper[4169]: E0219 03:04:49.581571 4169 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 19 03:04:49.581656 master-0 kubenswrapper[4169]: E0219 03:04:49.581625 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert podName:98ac5423-b231-44e5-9545-424d635ed6ee nodeName:}" failed. No retries permitted until 2026-02-19 03:04:50.581613188 +0000 UTC m=+114.327804923 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tbg8" (UID: "98ac5423-b231-44e5-9545-424d635ed6ee") : secret "package-server-manager-serving-cert" not found Feb 19 03:04:49.581743 master-0 kubenswrapper[4169]: E0219 03:04:49.581718 4169 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 19 03:04:49.581794 master-0 kubenswrapper[4169]: E0219 03:04:49.581768 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics podName:58c6f5a2-c0a8-4636-a057-cedbe0151579 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:50.581758442 +0000 UTC m=+114.327950177 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-xxdh5" (UID: "58c6f5a2-c0a8-4636-a057-cedbe0151579") : secret "marketplace-operator-metrics" not found Feb 19 03:04:49.581853 master-0 kubenswrapper[4169]: E0219 03:04:49.581835 4169 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 19 03:04:49.581893 master-0 kubenswrapper[4169]: E0219 03:04:49.581867 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls podName:a59746bb-7d76-4fd7-8323-5b92be63afb9 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:50.581858084 +0000 UTC m=+114.328049819 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-cfdqh" (UID: "a59746bb-7d76-4fd7-8323-5b92be63afb9") : secret "image-registry-operator-tls" not found Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: I0219 03:04:49.686099 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: I0219 03:04:49.686155 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: I0219 03:04:49.686205 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: I0219 03:04:49.686250 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: I0219 03:04:49.686309 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: I0219 03:04:49.686371 4169 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: E0219 03:04:49.686486 4169 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: E0219 03:04:49.686531 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:50.686515831 +0000 UTC m=+114.432707566 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "node-tuning-operator-tls" not found Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: E0219 03:04:49.686834 4169 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: E0219 03:04:49.686861 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs podName:947faa21-7f67-4c7e-abb0-443432f38961 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:50.68685196 +0000 UTC m=+114.433043695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-q8pfv" (UID: "947faa21-7f67-4c7e-abb0-443432f38961") : secret "multus-admission-controller-secret" not found Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: E0219 03:04:49.686904 4169 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: E0219 03:04:49.686928 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls podName:80c48134-cb22-4cf9-b076-ce39af2f4113 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:50.686919442 +0000 UTC m=+114.433111177 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-2vmxq" (UID: "80c48134-cb22-4cf9-b076-ce39af2f4113") : secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: E0219 03:04:49.686977 4169 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: E0219 03:04:49.686996 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert podName:b283bd8e-3339-4701-ae3c-f009e498b7d4 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:50.686989324 +0000 UTC m=+114.433181059 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert") pod "olm-operator-5499d7f7bb-kk77t" (UID: "b283bd8e-3339-4701-ae3c-f009e498b7d4") : secret "olm-operator-serving-cert" not found Feb 19 03:04:49.688424 master-0 kubenswrapper[4169]: E0219 03:04:49.687034 4169 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:49.689053 master-0 kubenswrapper[4169]: E0219 03:04:49.687055 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls podName:67f4e002-26fb-41e3-abdb-f4928b6c561f nodeName:}" failed. No retries permitted until 2026-02-19 03:04:50.687047385 +0000 UTC m=+114.433239120 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls") pod "dns-operator-8c7d49845-jlnvw" (UID: "67f4e002-26fb-41e3-abdb-f4928b6c561f") : secret "metrics-tls" not found Feb 19 03:04:49.689053 master-0 kubenswrapper[4169]: E0219 03:04:49.687093 4169 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:49.689053 master-0 kubenswrapper[4169]: E0219 03:04:49.687112 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:50.687105897 +0000 UTC m=+114.433297632 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:49.697906 master-0 kubenswrapper[4169]: I0219 03:04:49.697856 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc"] Feb 19 03:04:49.723925 master-0 kubenswrapper[4169]: I0219 03:04:49.723832 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" Feb 19 03:04:49.739230 master-0 kubenswrapper[4169]: I0219 03:04:49.739188 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7"] Feb 19 03:04:49.751851 master-0 kubenswrapper[4169]: I0219 03:04:49.751797 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l"] Feb 19 03:04:49.762482 master-0 kubenswrapper[4169]: W0219 03:04:49.762426 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbc2f7d0_4bae_4d4a_b041_a624ec2b9333.slice/crio-a28c1fb386c96884c0fa554c8dd9df374181814fab6413b91a2304727463f391 WatchSource:0}: Error finding container a28c1fb386c96884c0fa554c8dd9df374181814fab6413b91a2304727463f391: Status 404 returned error can't find the container with id a28c1fb386c96884c0fa554c8dd9df374181814fab6413b91a2304727463f391 Feb 19 03:04:49.798351 master-0 kubenswrapper[4169]: I0219 03:04:49.798299 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p"] Feb 19 03:04:49.818101 master-0 kubenswrapper[4169]: I0219 03:04:49.818067 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs"] Feb 19 03:04:49.819705 master-0 kubenswrapper[4169]: I0219 03:04:49.819663 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9"] Feb 19 03:04:49.824656 master-0 kubenswrapper[4169]: W0219 03:04:49.824594 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f9e07d3_d157_4948_84a6_04b8aa7eef4c.slice/crio-81ed4699f10fea30224a5472efb9432589611c0502019a2f9ffb24815fcdafb9 WatchSource:0}: Error finding container 81ed4699f10fea30224a5472efb9432589611c0502019a2f9ffb24815fcdafb9: Status 404 returned error can't find the container with id 81ed4699f10fea30224a5472efb9432589611c0502019a2f9ffb24815fcdafb9 Feb 19 03:04:49.828978 master-0 kubenswrapper[4169]: W0219 03:04:49.828864 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b9d54aa_5f71_4a82_8e71_401ed3083a13.slice/crio-62011c22e1ac970c8b8da7b0bdd419d5d816510d4051805a82fcedbbc65b8c3c WatchSource:0}: Error finding container 62011c22e1ac970c8b8da7b0bdd419d5d816510d4051805a82fcedbbc65b8c3c: Status 404 returned error can't find the container with id 62011c22e1ac970c8b8da7b0bdd419d5d816510d4051805a82fcedbbc65b8c3c Feb 19 03:04:49.885037 master-0 kubenswrapper[4169]: I0219 03:04:49.884986 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj"] Feb 19 03:04:49.891030 master-0 kubenswrapper[4169]: W0219 03:04:49.891004 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6fae256_6a2e_45e7_8f2f_d471f46ad3b2.slice/crio-bfb8eb142f502ea7593a0533e3254ede9b8f9f56754df54ad25f7a0adb710480 WatchSource:0}: Error finding container bfb8eb142f502ea7593a0533e3254ede9b8f9f56754df54ad25f7a0adb710480: Status 404 returned error can't find the container with id 
bfb8eb142f502ea7593a0533e3254ede9b8f9f56754df54ad25f7a0adb710480 Feb 19 03:04:49.921288 master-0 kubenswrapper[4169]: I0219 03:04:49.921213 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-kvvll" event={"ID":"decd8c56-e0f0-4119-917f-56652c8f8372","Type":"ContainerStarted","Data":"7c18b07966702439a57f42490f57b89c995ec81c7db0d363c2168675a894d498"} Feb 19 03:04:49.922176 master-0 kubenswrapper[4169]: I0219 03:04:49.922136 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerStarted","Data":"8fedd22b9da118be6af452faa704499daf6539b968c5fd646de69afe85423626"} Feb 19 03:04:49.923100 master-0 kubenswrapper[4169]: I0219 03:04:49.923067 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerStarted","Data":"1bf12b7aaff989dde65f3016c4b888d0b3e38d175867b33d7c6f63dd79bf7d2c"} Feb 19 03:04:49.924158 master-0 kubenswrapper[4169]: I0219 03:04:49.924132 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerStarted","Data":"f366572292d05f4ad2d57a2dd6026d019460bb016409712b7a89b5deefa6fc1b"} Feb 19 03:04:49.925513 master-0 kubenswrapper[4169]: I0219 03:04:49.925357 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" event={"ID":"2b9d54aa-5f71-4a82-8e71-401ed3083a13","Type":"ContainerStarted","Data":"62011c22e1ac970c8b8da7b0bdd419d5d816510d4051805a82fcedbbc65b8c3c"} Feb 19 03:04:49.926702 master-0 kubenswrapper[4169]: I0219 03:04:49.926172 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" event={"ID":"05c9cb4a-5249-4116-a2e5-caa7859e2075","Type":"ContainerStarted","Data":"40c5200e9b9335dc4fde8e4b8c2702394db4fe9784008c565be0de314808268d"} Feb 19 03:04:49.927026 master-0 kubenswrapper[4169]: I0219 03:04:49.927003 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" event={"ID":"5301cbc9-b3f3-4b2d-a114-1ba0752462f1","Type":"ContainerStarted","Data":"87e7bba244435f8f2d510f4160bfbce671f2f502e5bbb65c6fef9f33ed868be9"} Feb 19 03:04:49.928103 master-0 kubenswrapper[4169]: I0219 03:04:49.927963 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" event={"ID":"d6fae256-6a2e-45e7-8f2f-d471f46ad3b2","Type":"ContainerStarted","Data":"bfb8eb142f502ea7593a0533e3254ede9b8f9f56754df54ad25f7a0adb710480"} Feb 19 03:04:49.928725 master-0 kubenswrapper[4169]: I0219 03:04:49.928692 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" event={"ID":"6c9ed390-3b62-4b81-8c03-0c579a4a686a","Type":"ContainerStarted","Data":"91f1c7bcd88e0a3be2b4b31028823b921a4268810f70c73edd3e94760f9af545"} Feb 19 03:04:49.929957 master-0 kubenswrapper[4169]: I0219 03:04:49.929897 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" event={"ID":"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333","Type":"ContainerStarted","Data":"a28c1fb386c96884c0fa554c8dd9df374181814fab6413b91a2304727463f391"} Feb 19 03:04:49.930813 master-0 kubenswrapper[4169]: I0219 03:04:49.930751 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" event={"ID":"1f9e07d3-d157-4948-84a6-04b8aa7eef4c","Type":"ContainerStarted","Data":"81ed4699f10fea30224a5472efb9432589611c0502019a2f9ffb24815fcdafb9"} Feb 19 03:04:50.006436 master-0 kubenswrapper[4169]: I0219 03:04:50.004686 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq"] Feb 19 03:04:50.007468 master-0 kubenswrapper[4169]: I0219 03:04:50.007422 4169 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l"] Feb 19 03:04:50.009107 master-0 kubenswrapper[4169]: W0219 03:04:50.009044 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4714ef51_2d24_4938_8c58_80c1485a368b.slice/crio-7201246ec91870addf10a9f35436bf3abda03d1a2eefd6894425648ac015fdbf WatchSource:0}: Error finding container 7201246ec91870addf10a9f35436bf3abda03d1a2eefd6894425648ac015fdbf: Status 404 returned error can't find the container with id 7201246ec91870addf10a9f35436bf3abda03d1a2eefd6894425648ac015fdbf Feb 19 03:04:50.018167 master-0 kubenswrapper[4169]: W0219 03:04:50.018100 4169 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4c6dc8c_32c7_4c29_9ee8_a231d0bc2651.slice/crio-1661a18dd33340919d8a88e5f91b59d5c684dbe01a019f25562e9696f9314f09 WatchSource:0}: Error finding container 1661a18dd33340919d8a88e5f91b59d5c684dbe01a019f25562e9696f9314f09: Status 404 returned error can't find the container with id 1661a18dd33340919d8a88e5f91b59d5c684dbe01a019f25562e9696f9314f09 Feb 19 03:04:50.226079 master-0 kubenswrapper[4169]: I0219 03:04:50.226042 4169 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:50.226343 master-0 kubenswrapper[4169]: I0219 03:04:50.226309 4169 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:50.229677 master-0 kubenswrapper[4169]: I0219 03:04:50.228328 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 03:04:50.229677 master-0 kubenswrapper[4169]: I0219 03:04:50.228412 4169 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 03:04:50.229677 master-0 kubenswrapper[4169]: I0219 03:04:50.229226 4169 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 03:04:50.603681 master-0 kubenswrapper[4169]: I0219 03:04:50.603522 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:50.603871 master-0 kubenswrapper[4169]: I0219 03:04:50.603748 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:50.603871 master-0 kubenswrapper[4169]: E0219 03:04:50.603785 4169 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 19 03:04:50.603946 master-0 kubenswrapper[4169]: E0219 03:04:50.603873 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls podName:a59746bb-7d76-4fd7-8323-5b92be63afb9 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:52.603848423 +0000 UTC m=+116.350040328 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-cfdqh" (UID: "a59746bb-7d76-4fd7-8323-5b92be63afb9") : secret "image-registry-operator-tls" not found Feb 19 03:04:50.603946 master-0 kubenswrapper[4169]: E0219 03:04:50.603917 4169 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:50.607539 master-0 kubenswrapper[4169]: I0219 03:04:50.603961 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:50.607539 master-0 kubenswrapper[4169]: E0219 03:04:50.603995 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls podName:9ff96ce8-6427-4a42-afa6-8b8bc778f094 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:52.603974636 +0000 UTC m=+116.350166371 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls") pod "ingress-operator-6569778c84-qcd49" (UID: "9ff96ce8-6427-4a42-afa6-8b8bc778f094") : secret "metrics-tls" not found Feb 19 03:04:50.607539 master-0 kubenswrapper[4169]: I0219 03:04:50.604025 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:50.607539 master-0 kubenswrapper[4169]: I0219 03:04:50.604066 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:50.607539 master-0 kubenswrapper[4169]: E0219 03:04:50.604074 4169 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 19 03:04:50.607539 master-0 kubenswrapper[4169]: E0219 03:04:50.604145 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert podName:c50a2aec-7ed0-4114-8b25-19579fe931cb nodeName:}" failed. No retries permitted until 2026-02-19 03:04:52.60411119 +0000 UTC m=+116.350303095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert") pod "catalog-operator-596f79dd6f-sbzsk" (UID: "c50a2aec-7ed0-4114-8b25-19579fe931cb") : secret "catalog-operator-serving-cert" not found Feb 19 03:04:50.607539 master-0 kubenswrapper[4169]: E0219 03:04:50.604264 4169 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 19 03:04:50.607539 master-0 kubenswrapper[4169]: E0219 03:04:50.604290 4169 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 19 03:04:50.607539 master-0 kubenswrapper[4169]: E0219 03:04:50.604321 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics podName:58c6f5a2-c0a8-4636-a057-cedbe0151579 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:52.604313105 +0000 UTC m=+116.350504840 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-xxdh5" (UID: "58c6f5a2-c0a8-4636-a057-cedbe0151579") : secret "marketplace-operator-metrics" not found Feb 19 03:04:50.607539 master-0 kubenswrapper[4169]: E0219 03:04:50.604357 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert podName:98ac5423-b231-44e5-9545-424d635ed6ee nodeName:}" failed. No retries permitted until 2026-02-19 03:04:52.604328906 +0000 UTC m=+116.350520641 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tbg8" (UID: "98ac5423-b231-44e5-9545-424d635ed6ee") : secret "package-server-manager-serving-cert" not found Feb 19 03:04:50.705200 master-0 kubenswrapper[4169]: I0219 03:04:50.705134 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:50.705432 master-0 kubenswrapper[4169]: I0219 03:04:50.705221 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:50.705432 master-0 kubenswrapper[4169]: I0219 03:04:50.705305 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:50.705432 master-0 kubenswrapper[4169]: I0219 03:04:50.705335 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:50.705432 master-0 kubenswrapper[4169]: I0219 03:04:50.705364 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:50.705432 master-0 kubenswrapper[4169]: I0219 03:04:50.705395 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:50.705606 master-0 kubenswrapper[4169]: E0219 03:04:50.705539 4169 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 19 03:04:50.705606 master-0 kubenswrapper[4169]: E0219 03:04:50.705597 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert podName:b283bd8e-3339-4701-ae3c-f009e498b7d4 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:52.705578971 +0000 UTC m=+116.451770706 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert") pod "olm-operator-5499d7f7bb-kk77t" (UID: "b283bd8e-3339-4701-ae3c-f009e498b7d4") : secret "olm-operator-serving-cert" not found Feb 19 03:04:50.706018 master-0 kubenswrapper[4169]: E0219 03:04:50.705998 4169 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:50.706065 master-0 kubenswrapper[4169]: E0219 03:04:50.706034 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls podName:67f4e002-26fb-41e3-abdb-f4928b6c561f nodeName:}" failed. No retries permitted until 2026-02-19 03:04:52.706024633 +0000 UTC m=+116.452216368 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls") pod "dns-operator-8c7d49845-jlnvw" (UID: "67f4e002-26fb-41e3-abdb-f4928b6c561f") : secret "metrics-tls" not found Feb 19 03:04:50.706115 master-0 kubenswrapper[4169]: E0219 03:04:50.706080 4169 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:50.706115 master-0 kubenswrapper[4169]: E0219 03:04:50.706108 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:52.706099375 +0000 UTC m=+116.452291110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:50.706191 master-0 kubenswrapper[4169]: E0219 03:04:50.706153 4169 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 19 03:04:50.706191 master-0 kubenswrapper[4169]: E0219 03:04:50.706179 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:52.706171667 +0000 UTC m=+116.452363402 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "node-tuning-operator-tls" not found Feb 19 03:04:50.706615 master-0 kubenswrapper[4169]: E0219 03:04:50.706428 4169 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 19 03:04:50.706669 master-0 kubenswrapper[4169]: E0219 03:04:50.706652 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs podName:947faa21-7f67-4c7e-abb0-443432f38961 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:52.7066409 +0000 UTC m=+116.452832635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-q8pfv" (UID: "947faa21-7f67-4c7e-abb0-443432f38961") : secret "multus-admission-controller-secret" not found Feb 19 03:04:50.706751 master-0 kubenswrapper[4169]: E0219 03:04:50.706731 4169 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:50.706806 master-0 kubenswrapper[4169]: E0219 03:04:50.706766 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls podName:80c48134-cb22-4cf9-b076-ce39af2f4113 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:52.706756863 +0000 UTC m=+116.452948598 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-2vmxq" (UID: "80c48134-cb22-4cf9-b076-ce39af2f4113") : secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:50.940688 master-0 kubenswrapper[4169]: I0219 03:04:50.940637 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" event={"ID":"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651","Type":"ContainerStarted","Data":"1661a18dd33340919d8a88e5f91b59d5c684dbe01a019f25562e9696f9314f09"} Feb 19 03:04:50.941796 master-0 kubenswrapper[4169]: I0219 03:04:50.941774 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerStarted","Data":"336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb"} Feb 19 03:04:50.941796 master-0 kubenswrapper[4169]: I0219 03:04:50.941797 4169 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerStarted","Data":"7201246ec91870addf10a9f35436bf3abda03d1a2eefd6894425648ac015fdbf"} Feb 19 03:04:50.980286 master-0 kubenswrapper[4169]: I0219 03:04:50.970719 4169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" podStartSLOduration=77.970701981 podStartE2EDuration="1m17.970701981s" podCreationTimestamp="2026-02-19 03:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:04:50.970116576 +0000 UTC m=+114.716308331" watchObservedRunningTime="2026-02-19 03:04:50.970701981 +0000 UTC m=+114.716893716" Feb 19 03:04:52.631527 master-0 kubenswrapper[4169]: I0219 03:04:52.631074 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:52.631527 master-0 kubenswrapper[4169]: I0219 03:04:52.631463 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:52.631527 master-0 kubenswrapper[4169]: I0219 03:04:52.631498 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:52.631527 master-0 kubenswrapper[4169]: E0219 03:04:52.631292 4169 secret.go:189] Couldn't get secret 
openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 19 03:04:52.633106 master-0 kubenswrapper[4169]: E0219 03:04:52.631593 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert podName:c50a2aec-7ed0-4114-8b25-19579fe931cb nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.631573894 +0000 UTC m=+120.377765629 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert") pod "catalog-operator-596f79dd6f-sbzsk" (UID: "c50a2aec-7ed0-4114-8b25-19579fe931cb") : secret "catalog-operator-serving-cert" not found Feb 19 03:04:52.633106 master-0 kubenswrapper[4169]: E0219 03:04:52.631683 4169 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 19 03:04:52.633106 master-0 kubenswrapper[4169]: I0219 03:04:52.631754 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:52.633106 master-0 kubenswrapper[4169]: E0219 03:04:52.631784 4169 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 19 03:04:52.633106 master-0 kubenswrapper[4169]: I0219 03:04:52.631793 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:52.633106 master-0 kubenswrapper[4169]: E0219 03:04:52.631822 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics podName:58c6f5a2-c0a8-4636-a057-cedbe0151579 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.63180443 +0000 UTC m=+120.377996165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-xxdh5" (UID: "58c6f5a2-c0a8-4636-a057-cedbe0151579") : secret "marketplace-operator-metrics" not found Feb 19 03:04:52.633106 master-0 kubenswrapper[4169]: E0219 03:04:52.631875 4169 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:52.633106 master-0 kubenswrapper[4169]: E0219 03:04:52.631909 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls podName:9ff96ce8-6427-4a42-afa6-8b8bc778f094 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.631898413 +0000 UTC m=+120.378090438 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls") pod "ingress-operator-6569778c84-qcd49" (UID: "9ff96ce8-6427-4a42-afa6-8b8bc778f094") : secret "metrics-tls" not found Feb 19 03:04:52.633106 master-0 kubenswrapper[4169]: E0219 03:04:52.631914 4169 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 19 03:04:52.633106 master-0 kubenswrapper[4169]: E0219 03:04:52.631932 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert podName:98ac5423-b231-44e5-9545-424d635ed6ee nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.631923034 +0000 UTC m=+120.378115039 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tbg8" (UID: "98ac5423-b231-44e5-9545-424d635ed6ee") : secret "package-server-manager-serving-cert" not found Feb 19 03:04:52.633106 master-0 kubenswrapper[4169]: E0219 03:04:52.631950 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls podName:a59746bb-7d76-4fd7-8323-5b92be63afb9 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.631941944 +0000 UTC m=+120.378133959 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-cfdqh" (UID: "a59746bb-7d76-4fd7-8323-5b92be63afb9") : secret "image-registry-operator-tls" not found Feb 19 03:04:52.733023 master-0 kubenswrapper[4169]: I0219 03:04:52.732938 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:52.733406 master-0 kubenswrapper[4169]: I0219 03:04:52.733036 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:52.733406 master-0 kubenswrapper[4169]: I0219 03:04:52.733114 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:52.733406 master-0 kubenswrapper[4169]: E0219 03:04:52.733183 4169 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:52.733406 master-0 kubenswrapper[4169]: E0219 03:04:52.733298 4169 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls podName:80c48134-cb22-4cf9-b076-ce39af2f4113 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.733272872 +0000 UTC m=+120.479464617 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-2vmxq" (UID: "80c48134-cb22-4cf9-b076-ce39af2f4113") : secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:52.733406 master-0 kubenswrapper[4169]: E0219 03:04:52.733350 4169 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 19 03:04:52.733406 master-0 kubenswrapper[4169]: E0219 03:04:52.733394 4169 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:52.733658 master-0 kubenswrapper[4169]: E0219 03:04:52.733446 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert podName:b283bd8e-3339-4701-ae3c-f009e498b7d4 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.733421126 +0000 UTC m=+120.479613101 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert") pod "olm-operator-5499d7f7bb-kk77t" (UID: "b283bd8e-3339-4701-ae3c-f009e498b7d4") : secret "olm-operator-serving-cert" not found Feb 19 03:04:52.733658 master-0 kubenswrapper[4169]: I0219 03:04:52.733434 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:52.733658 master-0 kubenswrapper[4169]: E0219 03:04:52.733482 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls podName:67f4e002-26fb-41e3-abdb-f4928b6c561f nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.733458147 +0000 UTC m=+120.479649912 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls") pod "dns-operator-8c7d49845-jlnvw" (UID: "67f4e002-26fb-41e3-abdb-f4928b6c561f") : secret "metrics-tls" not found Feb 19 03:04:52.733658 master-0 kubenswrapper[4169]: E0219 03:04:52.733570 4169 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:52.733658 master-0 kubenswrapper[4169]: I0219 03:04:52.733613 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:52.733863 master-0 kubenswrapper[4169]: E0219 03:04:52.733658 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.733630541 +0000 UTC m=+120.479822456 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:52.733863 master-0 kubenswrapper[4169]: I0219 03:04:52.733726 4169 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:52.733863 master-0 kubenswrapper[4169]: E0219 03:04:52.733741 4169 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 19 03:04:52.733984 master-0 kubenswrapper[4169]: E0219 03:04:52.733881 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.733857617 +0000 UTC m=+120.480049542 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "node-tuning-operator-tls" not found Feb 19 03:04:52.733984 master-0 kubenswrapper[4169]: E0219 03:04:52.733905 4169 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 19 03:04:52.734070 master-0 kubenswrapper[4169]: E0219 03:04:52.734014 4169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs podName:947faa21-7f67-4c7e-abb0-443432f38961 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:04:56.733984641 +0000 UTC m=+120.480176416 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-q8pfv" (UID: "947faa21-7f67-4c7e-abb0-443432f38961") : secret "multus-admission-controller-secret" not found
Feb 19 03:04:53.543123 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Feb 19 03:04:53.582162 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Feb 19 03:04:53.582449 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Feb 19 03:04:53.584293 master-0 systemd[1]: kubelet.service: Consumed 8.780s CPU time.
Feb 19 03:04:53.604771 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 19 03:04:53.716291 master-0 kubenswrapper[7776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 03:04:53.716291 master-0 kubenswrapper[7776]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 19 03:04:53.716291 master-0 kubenswrapper[7776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 03:04:53.716883 master-0 kubenswrapper[7776]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 03:04:53.716883 master-0 kubenswrapper[7776]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 19 03:04:53.716883 master-0 kubenswrapper[7776]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 03:04:53.716883 master-0 kubenswrapper[7776]: I0219 03:04:53.716470 7776 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 03:04:53.720797 master-0 kubenswrapper[7776]: W0219 03:04:53.720758 7776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 03:04:53.720797 master-0 kubenswrapper[7776]: W0219 03:04:53.720788 7776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 03:04:53.720797 master-0 kubenswrapper[7776]: W0219 03:04:53.720798 7776 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720807 7776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720815 7776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720823 7776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720830 7776 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720837 7776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720843 7776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720864 7776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720871 7776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720878 7776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720888 7776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720897 7776 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720905 7776 feature_gate.go:330] unrecognized feature gate: Example Feb 19 03:04:53.720901 master-0 kubenswrapper[7776]: W0219 03:04:53.720913 7776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.720922 7776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.720932 7776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.720940 7776 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.720948 7776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.720956 7776 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.720963 7776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.720971 7776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.720978 7776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.720985 7776 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.720991 7776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.720998 7776 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.721004 7776 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.721022 7776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.721029 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.721035 7776 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.721042 7776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.721048 7776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.721056 7776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 03:04:53.721211 master-0 kubenswrapper[7776]: W0219 03:04:53.721062 7776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721068 7776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721077 7776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721084 7776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721091 7776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721097 7776 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721104 7776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721110 7776 feature_gate.go:330] 
unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721116 7776 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721124 7776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721130 7776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721137 7776 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721143 7776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721150 7776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721156 7776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721165 7776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721172 7776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721178 7776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721184 7776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721191 7776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 03:04:53.721728 master-0 kubenswrapper[7776]: W0219 03:04:53.721198 7776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721207 7776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721218 7776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721225 7776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721231 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721239 7776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721245 7776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721274 7776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721282 7776 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721288 7776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721306 7776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721313 7776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721319 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721329 7776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
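Editor's note on the feature_gate.go warnings above: the kubelet checks each requested gate name against its own registry of upstream Kubernetes gates, so OpenShift-only names such as MachineConfigNodes or NewOLM are reported as "unrecognized feature gate", while gates that have already gone GA or been deprecated upstream (CloudDualStackNodeIPs, ValidatingAdmissionPolicy, DisableKubeletCloudCredentialProviders, KMSv1) are accepted but flagged for future removal. The following is a minimal Go sketch of that behaviour only; the known table, the set helper, and the tiny gate subset are illustrative assumptions, not the actual k8s.io/component-base/featuregate implementation.

// Toy gate registry mirroring the three behaviours visible in the log:
// unknown names warn and are ignored, GA/deprecated gates warn but are set,
// everything else is recorded silently.
package main

import "fmt"

type stability int

const (
	alpha stability = iota
	ga
	deprecated
)

// known is a hypothetical subset of the kubelet's upstream gate table.
var known = map[string]stability{
	"CloudDualStackNodeIPs":                  ga,
	"DisableKubeletCloudCredentialProviders": ga,
	"ValidatingAdmissionPolicy":              ga,
	"KMSv1":                                  deprecated,
	"DynamicResourceAllocation":              alpha,
}

func set(gates map[string]bool, name string, value bool) {
	st, ok := known[name]
	if !ok {
		// OpenShift-specific names (MachineConfigNodes, NewOLM, ...) land here.
		fmt.Printf("W unrecognized feature gate: %s\n", name)
		return
	}
	switch st {
	case ga:
		fmt.Printf("W Setting GA feature gate %s=%t. It will be removed in a future release.\n", name, value)
	case deprecated:
		fmt.Printf("W Setting deprecated feature gate %s=%t. It will be removed in a future release.\n", name, value)
	}
	gates[name] = value
}

func main() {
	gates := map[string]bool{}
	set(gates, "MachineConfigNodes", true)    // unrecognized, ignored
	set(gates, "KMSv1", true)                 // deprecated, set with a warning
	set(gates, "CloudDualStackNodeIPs", true) // GA, set with a warning
	fmt.Printf("feature gates: %v\n", gates)
}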
Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721337 7776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721346 7776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721355 7776 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: W0219 03:04:53.721363 7776 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: I0219 03:04:53.721526 7776 flags.go:64] FLAG: --address="0.0.0.0" Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: I0219 03:04:53.721542 7776 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: I0219 03:04:53.721577 7776 flags.go:64] FLAG: --anonymous-auth="true" Feb 19 03:04:53.722331 master-0 kubenswrapper[7776]: I0219 03:04:53.721587 7776 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721596 7776 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721604 7776 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721614 7776 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721623 7776 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721632 7776 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721641 7776 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721649 7776 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721657 7776 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721665 7776 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721673 7776 flags.go:64] FLAG: --cgroup-root="" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721680 7776 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721687 7776 flags.go:64] FLAG: --client-ca-file="" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721695 7776 flags.go:64] FLAG: --cloud-config="" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721702 7776 flags.go:64] FLAG: --cloud-provider="" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721710 7776 flags.go:64] FLAG: --cluster-dns="[]" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721727 7776 flags.go:64] FLAG: --cluster-domain="" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721735 7776 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721744 7776 flags.go:64] FLAG: --config-dir="" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721751 7776 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721760 7776 flags.go:64] FLAG: --container-log-max-files="5" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721770 7776 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721778 7776 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721786 7776 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 19 03:04:53.722857 master-0 kubenswrapper[7776]: I0219 03:04:53.721794 7776 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721812 7776 flags.go:64] FLAG: --contention-profiling="false" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721821 7776 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721830 7776 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721837 7776 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721846 7776 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721856 7776 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721864 7776 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721871 7776 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721879 7776 flags.go:64] FLAG: --enable-load-reader="false" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721886 7776 flags.go:64] FLAG: --enable-server="true" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721894 7776 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721908 7776 flags.go:64] FLAG: --event-burst="100" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721915 7776 flags.go:64] FLAG: --event-qps="50" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721922 7776 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721929 7776 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721937 7776 flags.go:64] FLAG: --eviction-hard="" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721947 7776 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721954 7776 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721962 7776 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721970 7776 flags.go:64] FLAG: --eviction-soft="" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721977 7776 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721984 7776 
flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721992 7776 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.721999 7776 flags.go:64] FLAG: --experimental-mounter-path="" Feb 19 03:04:53.723420 master-0 kubenswrapper[7776]: I0219 03:04:53.722006 7776 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722012 7776 flags.go:64] FLAG: --fail-swap-on="true" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722019 7776 flags.go:64] FLAG: --feature-gates="" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722028 7776 flags.go:64] FLAG: --file-check-frequency="20s" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722035 7776 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722043 7776 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722050 7776 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722057 7776 flags.go:64] FLAG: --healthz-port="10248" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722065 7776 flags.go:64] FLAG: --help="false" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722073 7776 flags.go:64] FLAG: --hostname-override="" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722081 7776 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722089 7776 flags.go:64] FLAG: --http-check-frequency="20s" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722113 7776 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722122 7776 flags.go:64] FLAG: --image-credential-provider-config="" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722130 7776 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722137 7776 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722146 7776 flags.go:64] FLAG: --image-service-endpoint="" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722153 7776 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722161 7776 flags.go:64] FLAG: --kube-api-burst="100" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722170 7776 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722178 7776 flags.go:64] FLAG: --kube-api-qps="50" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722185 7776 flags.go:64] FLAG: --kube-reserved="" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722193 7776 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722200 7776 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722207 7776 flags.go:64] FLAG: --kubelet-cgroups="" Feb 19 
03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722215 7776 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 19 03:04:53.724065 master-0 kubenswrapper[7776]: I0219 03:04:53.722223 7776 flags.go:64] FLAG: --lock-file="" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722230 7776 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722237 7776 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722245 7776 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722282 7776 flags.go:64] FLAG: --log-json-split-stream="false" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722291 7776 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722298 7776 flags.go:64] FLAG: --log-text-split-stream="false" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722305 7776 flags.go:64] FLAG: --logging-format="text" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722313 7776 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722321 7776 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722328 7776 flags.go:64] FLAG: --manifest-url="" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722336 7776 flags.go:64] FLAG: --manifest-url-header="" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722346 7776 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722355 7776 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722364 7776 flags.go:64] FLAG: --max-pods="110" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722373 7776 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722381 7776 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722389 7776 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722396 7776 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722405 7776 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722412 7776 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722419 7776 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722462 7776 flags.go:64] FLAG: --node-status-max-images="50" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722470 7776 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 03:04:53.724765 master-0 kubenswrapper[7776]: I0219 03:04:53.722479 7776 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722486 7776 
flags.go:64] FLAG: --pod-cidr="" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722494 7776 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722506 7776 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722513 7776 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722522 7776 flags.go:64] FLAG: --pods-per-core="0" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722529 7776 flags.go:64] FLAG: --port="10250" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722537 7776 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722545 7776 flags.go:64] FLAG: --provider-id="" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722555 7776 flags.go:64] FLAG: --qos-reserved="" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722563 7776 flags.go:64] FLAG: --read-only-port="10255" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722571 7776 flags.go:64] FLAG: --register-node="true" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722579 7776 flags.go:64] FLAG: --register-schedulable="true" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722586 7776 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722600 7776 flags.go:64] FLAG: --registry-burst="10" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722607 7776 flags.go:64] FLAG: --registry-qps="5" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722615 7776 flags.go:64] FLAG: --reserved-cpus="" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722622 7776 flags.go:64] FLAG: --reserved-memory="" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722632 7776 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722639 7776 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722658 7776 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722665 7776 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722672 7776 flags.go:64] FLAG: --runonce="false" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722680 7776 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722687 7776 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 03:04:53.725597 master-0 kubenswrapper[7776]: I0219 03:04:53.722696 7776 flags.go:64] FLAG: --seccomp-default="false" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722703 7776 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722710 7776 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722718 7776 flags.go:64] FLAG: 
--storage-driver-db="cadvisor" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722725 7776 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722733 7776 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722741 7776 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722748 7776 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722756 7776 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722771 7776 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722779 7776 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722787 7776 flags.go:64] FLAG: --system-cgroups="" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722795 7776 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722808 7776 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722815 7776 flags.go:64] FLAG: --tls-cert-file="" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722831 7776 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722851 7776 flags.go:64] FLAG: --tls-min-version="" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722858 7776 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722866 7776 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722874 7776 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722882 7776 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722890 7776 flags.go:64] FLAG: --v="2" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722900 7776 flags.go:64] FLAG: --version="false" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722910 7776 flags.go:64] FLAG: --vmodule="" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722919 7776 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 03:04:53.726161 master-0 kubenswrapper[7776]: I0219 03:04:53.722927 7776 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723177 7776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723188 7776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723195 7776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723202 7776 feature_gate.go:330] unrecognized feature gate: Example Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723208 
7776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723215 7776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723222 7776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723228 7776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723236 7776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723242 7776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723249 7776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723279 7776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723286 7776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723292 7776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723302 7776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723310 7776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723318 7776 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723325 7776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723332 7776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 03:04:53.726876 master-0 kubenswrapper[7776]: W0219 03:04:53.723349 7776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723356 7776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723368 7776 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723379 7776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723385 7776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723392 7776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723398 7776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723405 7776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723411 7776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 03:04:53.727604 
master-0 kubenswrapper[7776]: W0219 03:04:53.723421 7776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723429 7776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723436 7776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723443 7776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723451 7776 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723458 7776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723465 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723471 7776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723480 7776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723488 7776 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723496 7776 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 03:04:53.727604 master-0 kubenswrapper[7776]: W0219 03:04:53.723504 7776 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723511 7776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723518 7776 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723525 7776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723532 7776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723538 7776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723545 7776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723551 7776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723557 7776 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723564 7776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723571 7776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723578 7776 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 03:04:53.728091 master-0 
kubenswrapper[7776]: W0219 03:04:53.723584 7776 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723590 7776 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723600 7776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723610 7776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723626 7776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723633 7776 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723641 7776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 03:04:53.728091 master-0 kubenswrapper[7776]: W0219 03:04:53.723647 7776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723654 7776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723660 7776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723667 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723673 7776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723680 7776 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723687 7776 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723693 7776 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723700 7776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723707 7776 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723714 7776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723795 7776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723803 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: W0219 03:04:53.723813 7776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 03:04:53.728667 master-0 kubenswrapper[7776]: I0219 03:04:53.723826 7776 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 03:04:53.735055 master-0 kubenswrapper[7776]: I0219 03:04:53.735013 7776 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 19 03:04:53.735055 master-0 kubenswrapper[7776]: I0219 03:04:53.735047 7776 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 19 03:04:53.735156 master-0 kubenswrapper[7776]: W0219 03:04:53.735144 7776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 03:04:53.735156 master-0 kubenswrapper[7776]: W0219 03:04:53.735152 7776 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 03:04:53.735156 master-0 kubenswrapper[7776]: W0219 03:04:53.735157 7776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735162 7776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735166 7776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735170 7776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735174 7776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735177 7776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735181 7776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735185 7776 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735188 7776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735192 7776 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735196 7776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735200 7776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735204 7776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735208 7776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735212 7776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735215 7776 
feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735220 7776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735224 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735229 7776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 03:04:53.735235 master-0 kubenswrapper[7776]: W0219 03:04:53.735233 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735240 7776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735249 7776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735275 7776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735282 7776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735287 7776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735292 7776 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735297 7776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735305 7776 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735310 7776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735314 7776 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735319 7776 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735324 7776 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735329 7776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735334 7776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735338 7776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735343 7776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735348 7776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735352 7776 feature_gate.go:330] unrecognized feature gate: Example Feb 19 03:04:53.735764 master-0 kubenswrapper[7776]: W0219 03:04:53.735355 7776 feature_gate.go:330] unrecognized feature 
gate: AWSEFSDriverVolumeMetrics Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735360 7776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735364 7776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735368 7776 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735372 7776 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735376 7776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735379 7776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735383 7776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735387 7776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735391 7776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735395 7776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735399 7776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735404 7776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735408 7776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735412 7776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735417 7776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735421 7776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735425 7776 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735429 7776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 03:04:53.736322 master-0 kubenswrapper[7776]: W0219 03:04:53.735433 7776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735437 7776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735441 7776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735445 7776 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735449 7776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 
19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735454 7776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735458 7776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735462 7776 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735467 7776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735471 7776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735475 7776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735478 7776 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735482 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: I0219 03:04:53.735489 7776 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735610 7776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 03:04:53.736766 master-0 kubenswrapper[7776]: W0219 03:04:53.735618 7776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735623 7776 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735628 7776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735633 7776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735637 7776 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735642 7776 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735647 7776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735654 7776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735659 7776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735663 7776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 03:04:53.737124 master-0 
kubenswrapper[7776]: W0219 03:04:53.735668 7776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735673 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735677 7776 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735684 7776 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735689 7776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735693 7776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735698 7776 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735703 7776 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735708 7776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 03:04:53.737124 master-0 kubenswrapper[7776]: W0219 03:04:53.735712 7776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735718 7776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735724 7776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735728 7776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735733 7776 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735737 7776 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735741 7776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735745 7776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735749 7776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735753 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735756 7776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735760 7776 feature_gate.go:330] unrecognized feature gate: Example Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735764 7776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735768 7776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 
03:04:53.735772 7776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735776 7776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735781 7776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735786 7776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735792 7776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 03:04:53.737575 master-0 kubenswrapper[7776]: W0219 03:04:53.735797 7776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735801 7776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735805 7776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735811 7776 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735816 7776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735821 7776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735826 7776 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735830 7776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735835 7776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735840 7776 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735845 7776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735850 7776 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735855 7776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735860 7776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735864 7776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735869 7776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735874 7776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735879 7776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735884 7776 feature_gate.go:330] unrecognized 
feature gate: OpenShiftPodSecurityAdmission Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735889 7776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 03:04:53.738016 master-0 kubenswrapper[7776]: W0219 03:04:53.735894 7776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735899 7776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735904 7776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735909 7776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735915 7776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735921 7776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735926 7776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735932 7776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735938 7776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735943 7776 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735948 7776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735953 7776 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: W0219 03:04:53.735960 7776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: I0219 03:04:53.735968 7776 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 03:04:53.738770 master-0 kubenswrapper[7776]: I0219 03:04:53.736238 7776 server.go:940] "Client rotation is on, will bootstrap in background" Feb 19 03:04:53.739193 master-0 kubenswrapper[7776]: I0219 03:04:53.738011 7776 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 19 03:04:53.739193 master-0 kubenswrapper[7776]: I0219 03:04:53.738109 7776 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
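Editor's note on the flags.go:64 block above: it is a dump of every command-line flag value the kubelet started with, one FLAG: --name="value" entry per journal record. When debugging, it can be handy to turn that dump back into a lookup table. Below is a small Go sketch that does so; the regular expression and the embedded sample lines are assumptions for illustration, not part of the kubelet or journald, and in practice the input would be the node's full kubelet journal rather than the three hard-coded lines.

// Extract "FLAG: --name=\"value\"" pairs from kubelet journal text.
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

var flagLine = regexp.MustCompile(`FLAG: (--[a-z0-9-]+)="(.*?)"`)

func main() {
	// Sample lines copied from the dump above; normally read from the journal.
	sample := `I0219 03:04:53.721744 7776 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
I0219 03:04:53.722419 7776 flags.go:64] FLAG: --node-ip="192.168.32.10"
I0219 03:04:53.722586 7776 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"`

	flags := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		if m := flagLine.FindStringSubmatch(sc.Text()); m != nil {
			flags[m[1]] = m[2] // m[1] is the flag name, m[2] its quoted value
		}
	}
	for name, value := range flags {
		fmt.Printf("%s = %q\n", name, value)
	}
}

Run against the full journal, this recovers the effective settings shown above, such as --node-ip=192.168.32.10 and the master NoSchedule taint in --register-with-taints.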
Feb 19 03:04:53.739193 master-0 kubenswrapper[7776]: I0219 03:04:53.738390 7776 server.go:997] "Starting client certificate rotation" Feb 19 03:04:53.739193 master-0 kubenswrapper[7776]: I0219 03:04:53.738401 7776 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 19 03:04:53.739193 master-0 kubenswrapper[7776]: I0219 03:04:53.738628 7776 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-20 02:55:16 +0000 UTC, rotation deadline is 2026-02-19 21:31:04.467175587 +0000 UTC Feb 19 03:04:53.739193 master-0 kubenswrapper[7776]: I0219 03:04:53.738773 7776 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h26m10.72840566s for next certificate rotation Feb 19 03:04:53.739193 master-0 kubenswrapper[7776]: I0219 03:04:53.739041 7776 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 03:04:53.741719 master-0 kubenswrapper[7776]: I0219 03:04:53.741433 7776 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 03:04:53.745565 master-0 kubenswrapper[7776]: I0219 03:04:53.745532 7776 log.go:25] "Validated CRI v1 runtime API" Feb 19 03:04:53.748346 master-0 kubenswrapper[7776]: I0219 03:04:53.748300 7776 log.go:25] "Validated CRI v1 image API" Feb 19 03:04:53.749303 master-0 kubenswrapper[7776]: I0219 03:04:53.749283 7776 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 19 03:04:53.753291 master-0 kubenswrapper[7776]: I0219 03:04:53.753240 7776 fs.go:135] Filesystem UUIDs: map[4837cee5-4017-4a37-b994-9fb38a99ee26:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 19 03:04:53.753594 master-0 kubenswrapper[7776]: I0219 03:04:53.753284 7776 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/05f5dd54ba8bf6eb7c86554d066ae4a9cf207bcf69ebdccd0c79c526a47c6239/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/05f5dd54ba8bf6eb7c86554d066ae4a9cf207bcf69ebdccd0c79c526a47c6239/userdata/shm major:0 minor:141 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/10880e65f8f1292bea461c369196b5d5099f3abb559d63f3afe6c53ad3ae1a5f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/10880e65f8f1292bea461c369196b5d5099f3abb559d63f3afe6c53ad3ae1a5f/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/13103220887a41b425edd349c524421eaa06bddd41c4d0276cf0be744cde8eaf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/13103220887a41b425edd349c524421eaa06bddd41c4d0276cf0be744cde8eaf/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1661a18dd33340919d8a88e5f91b59d5c684dbe01a019f25562e9696f9314f09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1661a18dd33340919d8a88e5f91b59d5c684dbe01a019f25562e9696f9314f09/userdata/shm major:0 minor:297 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1bcf44075958c0ed97fdf56576e694d0a80dc968641ca6c609aa09a703fa5b8a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1bcf44075958c0ed97fdf56576e694d0a80dc968641ca6c609aa09a703fa5b8a/userdata/shm major:0 minor:140 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1bf12b7aaff989dde65f3016c4b888d0b3e38d175867b33d7c6f63dd79bf7d2c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1bf12b7aaff989dde65f3016c4b888d0b3e38d175867b33d7c6f63dd79bf7d2c/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/270ee55e27188738f11e238739f68e6ee4947520aca0c90df01eaa05dc4ab81c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/270ee55e27188738f11e238739f68e6ee4947520aca0c90df01eaa05dc4ab81c/userdata/shm major:0 minor:105 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/288c3a57623280dd907a240618bbdd493e84db9c6fc6a9b8ebbd7c2959445df1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/288c3a57623280dd907a240618bbdd493e84db9c6fc6a9b8ebbd7c2959445df1/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/40c5200e9b9335dc4fde8e4b8c2702394db4fe9784008c565be0de314808268d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/40c5200e9b9335dc4fde8e4b8c2702394db4fe9784008c565be0de314808268d/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/62011c22e1ac970c8b8da7b0bdd419d5d816510d4051805a82fcedbbc65b8c3c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/62011c22e1ac970c8b8da7b0bdd419d5d816510d4051805a82fcedbbc65b8c3c/userdata/shm major:0 minor:279 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7201246ec91870addf10a9f35436bf3abda03d1a2eefd6894425648ac015fdbf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7201246ec91870addf10a9f35436bf3abda03d1a2eefd6894425648ac015fdbf/userdata/shm major:0 minor:294 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7c18b07966702439a57f42490f57b89c995ec81c7db0d363c2168675a894d498/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7c18b07966702439a57f42490f57b89c995ec81c7db0d363c2168675a894d498/userdata/shm major:0 minor:300 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/81ed4699f10fea30224a5472efb9432589611c0502019a2f9ffb24815fcdafb9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/81ed4699f10fea30224a5472efb9432589611c0502019a2f9ffb24815fcdafb9/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/87e7bba244435f8f2d510f4160bfbce671f2f502e5bbb65c6fef9f33ed868be9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/87e7bba244435f8f2d510f4160bfbce671f2f502e5bbb65c6fef9f33ed868be9/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8fedd22b9da118be6af452faa704499daf6539b968c5fd646de69afe85423626/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8fedd22b9da118be6af452faa704499daf6539b968c5fd646de69afe85423626/userdata/shm major:0 minor:298 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/91f1c7bcd88e0a3be2b4b31028823b921a4268810f70c73edd3e94760f9af545/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/91f1c7bcd88e0a3be2b4b31028823b921a4268810f70c73edd3e94760f9af545/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a28c1fb386c96884c0fa554c8dd9df374181814fab6413b91a2304727463f391/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a28c1fb386c96884c0fa554c8dd9df374181814fab6413b91a2304727463f391/userdata/shm major:0 minor:285 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/acb5de46f3e25ef76d6a8af08f2a213b03e16ebf52f46ac28fa38e4361f6b5d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/acb5de46f3e25ef76d6a8af08f2a213b03e16ebf52f46ac28fa38e4361f6b5d6/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/adefbbde4867112d23ee79a46cdbf443364c4401d65d3a59d065817251804bf8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/adefbbde4867112d23ee79a46cdbf443364c4401d65d3a59d065817251804bf8/userdata/shm major:0 minor:120 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b1a4a1b2ee116e9b33918fc922709316e70b8330853b6fcb741a4accb5e6b8be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b1a4a1b2ee116e9b33918fc922709316e70b8330853b6fcb741a4accb5e6b8be/userdata/shm major:0 minor:164 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bfb8eb142f502ea7593a0533e3254ede9b8f9f56754df54ad25f7a0adb710480/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bfb8eb142f502ea7593a0533e3254ede9b8f9f56754df54ad25f7a0adb710480/userdata/shm major:0 minor:309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c741144c76ccb27ab8a3627dd9a2beb2d675b354f4a6e2cb399b5a08240ea149/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c741144c76ccb27ab8a3627dd9a2beb2d675b354f4a6e2cb399b5a08240ea149/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f366572292d05f4ad2d57a2dd6026d019460bb016409712b7a89b5deefa6fc1b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f366572292d05f4ad2d57a2dd6026d019460bb016409712b7a89b5deefa6fc1b/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~projected/kube-api-access-qrksf:{mountpoint:/var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~projected/kube-api-access-qrksf major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~projected/kube-api-access-crz8x:{mountpoint:/var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~projected/kube-api-access-crz8x major:0 minor:137 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 
minor:136 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~projected/kube-api-access-nqt9k:{mountpoint:/var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~projected/kube-api-access-nqt9k major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~projected/kube-api-access-vdxnk:{mountpoint:/var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~projected/kube-api-access-vdxnk major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~projected/kube-api-access-vjwbx:{mountpoint:/var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~projected/kube-api-access-vjwbx major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~secret/serving-cert major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~projected/kube-api-access-vzpth:{mountpoint:/var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~projected/kube-api-access-vzpth major:0 minor:281 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~secret/serving-cert major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~projected/kube-api-access major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~secret/serving-cert major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~projected/kube-api-access-k6j8c:{mountpoint:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~projected/kube-api-access-k6j8c major:0 minor:282 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/etcd-client major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/serving-cert major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~projected/kube-api-access major:0 minor:244 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58c6f5a2-c0a8-4636-a057-cedbe0151579/volumes/kubernetes.io~projected/kube-api-access-grhdv:{mountpoint:/var/lib/kubelet/pods/58c6f5a2-c0a8-4636-a057-cedbe0151579/volumes/kubernetes.io~projected/kube-api-access-grhdv major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/67f4e002-26fb-41e3-abdb-f4928b6c561f/volumes/kubernetes.io~projected/kube-api-access-wqsbq:{mountpoint:/var/lib/kubelet/pods/67f4e002-26fb-41e3-abdb-f4928b6c561f/volumes/kubernetes.io~projected/kube-api-access-wqsbq major:0 minor:283 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ae2cbe0-aa0a-4f26-994b-660fb962d995/volumes/kubernetes.io~projected/kube-api-access-46zzd:{mountpoint:/var/lib/kubelet/pods/6ae2cbe0-aa0a-4f26-994b-660fb962d995/volumes/kubernetes.io~projected/kube-api-access-46zzd major:0 minor:133 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~projected/kube-api-access-rn9d8:{mountpoint:/var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~projected/kube-api-access-rn9d8 major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~secret/serving-cert major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fde19c2-64b1-409c-ad9c-2bb213a1cc74/volumes/kubernetes.io~projected/kube-api-access-64lwt:{mountpoint:/var/lib/kubelet/pods/7fde19c2-64b1-409c-ad9c-2bb213a1cc74/volumes/kubernetes.io~projected/kube-api-access-64lwt major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80c48134-cb22-4cf9-b076-ce39af2f4113/volumes/kubernetes.io~projected/kube-api-access-2dlvj:{mountpoint:/var/lib/kubelet/pods/80c48134-cb22-4cf9-b076-ce39af2f4113/volumes/kubernetes.io~projected/kube-api-access-2dlvj major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/947faa21-7f67-4c7e-abb0-443432f38961/volumes/kubernetes.io~projected/kube-api-access-jl7k7:{mountpoint:/var/lib/kubelet/pods/947faa21-7f67-4c7e-abb0-443432f38961/volumes/kubernetes.io~projected/kube-api-access-jl7k7 major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/98ac5423-b231-44e5-9545-424d635ed6ee/volumes/kubernetes.io~projected/kube-api-access-bq27v:{mountpoint:/var/lib/kubelet/pods/98ac5423-b231-44e5-9545-424d635ed6ee/volumes/kubernetes.io~projected/kube-api-access-bq27v major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/bound-sa-token 
major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/kube-api-access-cpdqx:{mountpoint:/var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/kube-api-access-cpdqx major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~projected/kube-api-access-8cm45:{mountpoint:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~projected/kube-api-access-8cm45 major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~projected/kube-api-access-kv24m:{mountpoint:/var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~projected/kube-api-access-kv24m major:0 minor:162 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~secret/webhook-cert major:0 minor:163 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/kube-api-access-txq5k:{mountpoint:/var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/kube-api-access-txq5k major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~projected/kube-api-access-76css:{mountpoint:/var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~projected/kube-api-access-76css major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~projected/kube-api-access-mj4rq:{mountpoint:/var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~projected/kube-api-access-mj4rq major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~secret/serving-cert major:0 minor:254 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae/volumes/kubernetes.io~projected/kube-api-access major:0 minor:67 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~projected/kube-api-access-7n9vm:{mountpoint:/var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~projected/kube-api-access-7n9vm major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~projected/kube-api-access-gbffz:{mountpoint:/var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~projected/kube-api-access-gbffz major:0 minor:66 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7/volumes/kubernetes.io~projected/kube-api-access-r5wsp:{mountpoint:/var/lib/kubelet/pods/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7/volumes/kubernetes.io~projected/kube-api-access-r5wsp major:0 minor:128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2/volumes/kubernetes.io~projected/kube-api-access-dhmpd:{mountpoint:/var/lib/kubelet/pods/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2/volumes/kubernetes.io~projected/kube-api-access-dhmpd major:0 minor:284 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/decd8c56-e0f0-4119-917f-56652c8f8372/volumes/kubernetes.io~projected/kube-api-access-8tqm5:{mountpoint:/var/lib/kubelet/pods/decd8c56-e0f0-4119-917f-56652c8f8372/volumes/kubernetes.io~projected/kube-api-access-8tqm5 major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~projected/kube-api-access-8p8qd:{mountpoint:/var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~projected/kube-api-access-8p8qd major:0 minor:277 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~secret/serving-cert major:0 minor:258 fsType:tmpfs blockSize:0} overlay_0-107:{mountpoint:/var/lib/containers/storage/overlay/d51e405c52fda80fa839e713ee4a506d190140436239aea63864a21585834dfa/merged major:0 minor:107 fsType:overlay blockSize:0} overlay_0-109:{mountpoint:/var/lib/containers/storage/overlay/333695d44fb4dea66d3838323b6bae6e6e7cb9b63c79baabfc468291ab337fbc/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/39f15e219217072d46d32ccf193d5f7467207b291f23b4a41ac47ea1d4b5c8ab/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-122:{mountpoint:/var/lib/containers/storage/overlay/5c14756094a9fcd34e518f5182be622d4358fafe9a27c0c9212fa7b950cc98cb/merged major:0 minor:122 fsType:overlay blockSize:0} 
overlay_0-124:{mountpoint:/var/lib/containers/storage/overlay/53855fca11fd3abad9f292ed7f84427e4a61ad28619ebf18b0e503478504f862/merged major:0 minor:124 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/a22bb754ad083376920d52e7cfb71c6523cd50760666b877e7d8e5b609e766e4/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/8bbcf8e9747e07601fac3a0d8577b6ed7d47292febe7713bd79539434f4ced4b/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/424f116489c83431edbedcfeb227c73c73f9e0d1802e9d31a8b70525073f031b/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-144:{mountpoint:/var/lib/containers/storage/overlay/0161c8486bcf80360be5c9bef902213dee26f63bb4b1282030a2a34f3f103d1b/merged major:0 minor:144 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/f39a2df8a371a21d08fe8e36c1a250ab97280e7f79f00dd6e561cb756a113f1d/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/a8f50e24c5a448a8e996ea5fe9835b10b5c751113d6d989006c34d69654f08ac/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/3744f306e8217d38c58f8e2c6b3ad9d021ed687465465de8b9a91964a44c3f4f/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/c3c39c03fad679de58eea0e8e9004a2b9c6993349b0f794fab42c634ec7b031a/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/27393a1f62b351a1716f03ea8e1d5489d5660cc4ca9510dfbcdd0f7696168cc2/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-166:{mountpoint:/var/lib/containers/storage/overlay/1c7c0186b40fd534d46822d59bc963d3d262811bd124127e46e699ade23a213f/merged major:0 minor:166 fsType:overlay blockSize:0} overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/e57148e467ed26266d6e7a03aec4f08b79edb6f36460be6207eb0a1d66b7147d/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/e20a4ceecaafbc2c52109b905036c56a05efcf38eb7048fd1d3d59469cf849ff/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/dd7fa5f8104a8ade24da4f55de24f51b7ce145b31487caabf9a5f541b5dbe866/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/f681174d6681a67e5a0b72d0af25ed35ea7c304bc0a57a77bf6003fe3ab5ed1a/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/547b0a722317c5a7ebb20b72765471dbd064e60fafc4ed8df70f4cc1cbddaba8/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/57fd9daa22bccce96d15e2d8a7c6c647d29d1745672b159b7ac21fbb4bf6ce06/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-180:{mountpoint:/var/lib/containers/storage/overlay/f67cca23c24a684ea473fe7bad1dd1dbe8cad4793bf76cddde6dee1e2e221122/merged major:0 minor:180 fsType:overlay blockSize:0} overlay_0-188:{mountpoint:/var/lib/containers/storage/overlay/5c59acc8bc36ed95743d4a9fc0f8eae2eef13225dadf9161b8edae6f1beea5ae/merged major:0 minor:188 fsType:overlay blockSize:0} overlay_0-193:{mountpoint:/var/lib/containers/storage/overlay/e1d0ba90d3cbe5db051ffa4140b4e4ff8d72842942664b936bae4c040ee62bd9/merged 
major:0 minor:193 fsType:overlay blockSize:0} overlay_0-198:{mountpoint:/var/lib/containers/storage/overlay/08632b9b2de39cb0d6c6d5b04de38fafe7d2d85af0cb5514c6f81162ab7622ba/merged major:0 minor:198 fsType:overlay blockSize:0} overlay_0-203:{mountpoint:/var/lib/containers/storage/overlay/802c6b1270fd0d0d60536752380f155d53c0e5dd99196b11bc876e825ed1bc94/merged major:0 minor:203 fsType:overlay blockSize:0} overlay_0-208:{mountpoint:/var/lib/containers/storage/overlay/71a9428dfe3ed20faf3ce8680ddddc859960c7c2da5ec527406c069288dfba89/merged major:0 minor:208 fsType:overlay blockSize:0} overlay_0-209:{mountpoint:/var/lib/containers/storage/overlay/fb0540f32ec7a8e62b5b595cca457f43ab2564f41670b29fc397d9be259f27c4/merged major:0 minor:209 fsType:overlay blockSize:0} overlay_0-213:{mountpoint:/var/lib/containers/storage/overlay/b2962973fe30936b48678cd2cf74ef628bd2ad129c1cc495776d11e7b76874e2/merged major:0 minor:213 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/2fc2f733de9ee3188f11cec5d0cab74b23e4b8624a941de7bb6a55f6ffc98949/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-228:{mountpoint:/var/lib/containers/storage/overlay/1eec9bf698a1e4dc4171735de620ca5080aeff80cae15d022cfcb06364b45d42/merged major:0 minor:228 fsType:overlay blockSize:0} overlay_0-290:{mountpoint:/var/lib/containers/storage/overlay/7cce98422b9a31b3b93e51923970dd311e508c79195482d50db6fb13d13dc3c1/merged major:0 minor:290 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/384c79a742d91789f396741662f60f1579fa2580a59900eb0911fe0dd9b5b443/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/a594d7c5b03b8a24089c94896c3c19d5d26e4f949089abc07229d61a031d22bd/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/27ca8881c84efa411cb0045c4a948e7cb4b319a2ac5acda856536f8573a60114/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/2afd8eda6f14788fe4612f60ad4b8ddcbc91131bde772d26dd81cb56b8196574/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/8bacb1e59aec5e75fd655f6c8009faf3ba6c76de8e00232981402235f1d9e933/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/87e5a674a40721deca15b380102ef1e2b44694f94991bbcbfd6d0a841f7fb957/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/c99743a0ebcab41173f178800f0e8e3031df46fe499698b6eefb9e8aff1349a0/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/0ca12716289d5b12ebfea77d3accd1e123bda28c8e9ff3280b1b56aca13a67df/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/93acb8635a6310c14288290ad109f1a41cc9c151eec738b22ffbb9bfe12dcb09/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/9e94a49f76447bcb7c379f9d946434bc34ffce1666acf18bae6bda545e3cdb2a/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/b941aa6c3f10a913e54aa4ea12b57b60a69b84653d6eb1d9897a0221c63ea3d4/merged major:0 minor:323 fsType:overlay blockSize:0} 
overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/73b9c492423db7f25e4b2a7d59aaa8758ac02e81c589cc8a3c115425fedc0646/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/ae010d3e9378989eeb7c0bfc34234559b92ebac78ffd382089c5a1f046a02e8d/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/883c1be74bbcc214ea80340c32d0f6a9e07ca07ced318966d9fdc3679b84688f/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/6f271a1e8bab58cca67510f38cbf099939de4a3e1094ebcd486abdf7f89aba17/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/f0110e70ef66341a06b7ac5c668786439f141722d13f25b6a1c795b7d443c288/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/6712f806bf0c5deed4950cc61890848e9abb6e56d4debe8289e3ce5aaee36470/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/41466bab5b4b028d35d92d7bb27b9957a86abd046ca29e3fec626725f3a83b84/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/d8f9c2cf5f633ced78931b09a42698f6f7c3526d67202ceaba78ffc67105edf8/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/4ced844ed5d9ddfd2a193b5c15f8b73825fa6e0ffc56abcba259f97f153281f4/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/871c65f058ad7c0d1cb950c2e2a1204082a195b3d71c77f4baa77809935b4595/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/4a06bafbcaa8f22d92b82bd52f0d75b6ed22483b7c9018ac9c89c8634445ae00/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/97c356120e47d6a05edb1c8b7be4fe06d50f52a38632174fd8913920fe3286fc/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/336cb287d6fb96d91a9db08f76dd5e4f85ed8029633271de53e542ab819e68a0/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/f7e545f5328a873abd403602631a98d19cf79b56a31a8fcb1f125c5756145679/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-91:{mountpoint:/var/lib/containers/storage/overlay/15c145ebe34a68b7c1ceac53d37180976d617660ec4b1636fd2bec6c6a012f5a/merged major:0 minor:91 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/40d58cf6aff137f45a529b21b5c43df1e269773340daa792b10230b9770f5d18/merged major:0 minor:95 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/37c7fd2e7eb33762e9003752554ff97da37c30f8650e34494ce0121b99169feb/merged major:0 minor:97 fsType:overlay blockSize:0}] Feb 19 03:04:53.778880 master-0 kubenswrapper[7776]: I0219 03:04:53.778344 7776 manager.go:217] Machine: {Timestamp:2026-02-19 03:04:53.777195438 +0000 UTC m=+0.116879976 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:e4d28ab4c6c14d45b3b826d1d7d6a246 
SystemUUID:e4d28ab4-c6c1-4d45-b3b8-26d1d7d6a246 BootID:81756ef7-a125-45a3-9659-4adc79f47dc0 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7/volumes/kubernetes.io~projected/kube-api-access-r5wsp DeviceMajor:0 DeviceMinor:128 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/acb5de46f3e25ef76d6a8af08f2a213b03e16ebf52f46ac28fa38e4361f6b5d6/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b1a4a1b2ee116e9b33918fc922709316e70b8330853b6fcb741a4accb5e6b8be/userdata/shm DeviceMajor:0 DeviceMinor:164 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~projected/kube-api-access-nqt9k DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/kube-api-access-txq5k DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~projected/kube-api-access-crz8x DeviceMajor:0 DeviceMinor:137 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/91f1c7bcd88e0a3be2b4b31028823b921a4268810f70c73edd3e94760f9af545/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/81ed4699f10fea30224a5472efb9432589611c0502019a2f9ffb24815fcdafb9/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-203 DeviceMajor:0 DeviceMinor:203 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1bf12b7aaff989dde65f3016c4b888d0b3e38d175867b33d7c6f63dd79bf7d2c/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~projected/kube-api-access-vjwbx DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a28c1fb386c96884c0fa554c8dd9df374181814fab6413b91a2304727463f391/userdata/shm DeviceMajor:0 DeviceMinor:285 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-228 DeviceMajor:0 DeviceMinor:228 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/decd8c56-e0f0-4119-917f-56652c8f8372/volumes/kubernetes.io~projected/kube-api-access-8tqm5 DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~projected/kube-api-access-k6j8c DeviceMajor:0 DeviceMinor:282 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-188 DeviceMajor:0 DeviceMinor:188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-198 DeviceMajor:0 DeviceMinor:198 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6ae2cbe0-aa0a-4f26-994b-660fb962d995/volumes/kubernetes.io~projected/kube-api-access-46zzd DeviceMajor:0 DeviceMinor:133 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8fedd22b9da118be6af452faa704499daf6539b968c5fd646de69afe85423626/userdata/shm DeviceMajor:0 DeviceMinor:298 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/10880e65f8f1292bea461c369196b5d5099f3abb559d63f3afe6c53ad3ae1a5f/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~projected/kube-api-access-rn9d8 DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~projected/kube-api-access-vzpth DeviceMajor:0 DeviceMinor:281 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/05f5dd54ba8bf6eb7c86554d066ae4a9cf207bcf69ebdccd0c79c526a47c6239/userdata/shm DeviceMajor:0 DeviceMinor:141 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:233 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1bcf44075958c0ed97fdf56576e694d0a80dc968641ca6c609aa09a703fa5b8a/userdata/shm DeviceMajor:0 DeviceMinor:140 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~projected/kube-api-access-kv24m DeviceMajor:0 DeviceMinor:162 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/58c6f5a2-c0a8-4636-a057-cedbe0151579/volumes/kubernetes.io~projected/kube-api-access-grhdv DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/80c48134-cb22-4cf9-b076-ce39af2f4113/volumes/kubernetes.io~projected/kube-api-access-2dlvj DeviceMajor:0 DeviceMinor:272 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~projected/kube-api-access-mj4rq DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:67 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~projected/kube-api-access-76css DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f366572292d05f4ad2d57a2dd6026d019460bb016409712b7a89b5deefa6fc1b/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~projected/kube-api-access-qrksf DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:275 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1661a18dd33340919d8a88e5f91b59d5c684dbe01a019f25562e9696f9314f09/userdata/shm DeviceMajor:0 DeviceMinor:297 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/270ee55e27188738f11e238739f68e6ee4947520aca0c90df01eaa05dc4ab81c/userdata/shm DeviceMajor:0 DeviceMinor:105 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/947faa21-7f67-4c7e-abb0-443432f38961/volumes/kubernetes.io~projected/kube-api-access-jl7k7 DeviceMajor:0 DeviceMinor:268 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/62011c22e1ac970c8b8da7b0bdd419d5d816510d4051805a82fcedbbc65b8c3c/userdata/shm DeviceMajor:0 DeviceMinor:279 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/67f4e002-26fb-41e3-abdb-f4928b6c561f/volumes/kubernetes.io~projected/kube-api-access-wqsbq DeviceMajor:0 DeviceMinor:283 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7c18b07966702439a57f42490f57b89c995ec81c7db0d363c2168675a894d498/userdata/shm DeviceMajor:0 DeviceMinor:300 Capacity:67108864 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/98ac5423-b231-44e5-9545-424d635ed6ee/volumes/kubernetes.io~projected/kube-api-access-bq27v DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/40c5200e9b9335dc4fde8e4b8c2702394db4fe9784008c565be0de314808268d/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7fde19c2-64b1-409c-ad9c-2bb213a1cc74/volumes/kubernetes.io~projected/kube-api-access-64lwt DeviceMajor:0 DeviceMinor:111 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-144 DeviceMajor:0 DeviceMinor:144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-208 DeviceMajor:0 DeviceMinor:208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~projected/kube-api-access-vdxnk DeviceMajor:0 DeviceMinor:271 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-91 DeviceMajor:0 DeviceMinor:91 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~projected/kube-api-access-gbffz DeviceMajor:0 DeviceMinor:66 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:237 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:238 Capacity:49335554048 
Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2/volumes/kubernetes.io~projected/kube-api-access-dhmpd DeviceMajor:0 DeviceMinor:284 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~projected/kube-api-access-7n9vm DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:163 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c741144c76ccb27ab8a3627dd9a2beb2d675b354f4a6e2cb399b5a08240ea149/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/13103220887a41b425edd349c524421eaa06bddd41c4d0276cf0be744cde8eaf/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:136 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~projected/kube-api-access-8p8qd DeviceMajor:0 DeviceMinor:277 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/87e7bba244435f8f2d510f4160bfbce671f2f502e5bbb65c6fef9f33ed868be9/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-124 DeviceMajor:0 DeviceMinor:124 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/adefbbde4867112d23ee79a46cdbf443364c4401d65d3a59d065817251804bf8/userdata/shm DeviceMajor:0 DeviceMinor:120 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-122 DeviceMajor:0 DeviceMinor:122 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~projected/kube-api-access-8cm45 DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-290 DeviceMajor:0 DeviceMinor:290 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-213 DeviceMajor:0 DeviceMinor:213 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-180 DeviceMajor:0 DeviceMinor:180 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-193 DeviceMajor:0 DeviceMinor:193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7201246ec91870addf10a9f35436bf3abda03d1a2eefd6894425648ac015fdbf/userdata/shm DeviceMajor:0 DeviceMinor:294 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/288c3a57623280dd907a240618bbdd493e84db9c6fc6a9b8ebbd7c2959445df1/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-209 DeviceMajor:0 DeviceMinor:209 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/kube-api-access-cpdqx DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-107 DeviceMajor:0 DeviceMinor:107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-166 DeviceMajor:0 DeviceMinor:166 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bfb8eb142f502ea7593a0533e3254ede9b8f9f56754df54ad25f7a0adb710480/userdata/shm DeviceMajor:0 DeviceMinor:309 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:1661a18dd333409 MacAddress:aa:0c:6b:07:20:fe Speed:10000 Mtu:8900} {Name:1bf12b7aaff989d MacAddress:52:00:bb:23:e4:69 Speed:10000 Mtu:8900} {Name:40c5200e9b9335d MacAddress:ca:ed:fc:15:d7:01 Speed:10000 Mtu:8900} {Name:62011c22e1ac970 MacAddress:72:aa:fa:ed:5b:3e Speed:10000 Mtu:8900} {Name:7201246ec91870a MacAddress:6e:eb:94:de:7e:1d Speed:10000 Mtu:8900} {Name:81ed4699f10fea3 MacAddress:b2:9b:5f:0b:ff:55 Speed:10000 Mtu:8900} {Name:87e7bba244435f8 MacAddress:c6:b7:00:98:51:84 Speed:10000 Mtu:8900} 
{Name:8fedd22b9da118b MacAddress:e2:ac:3e:a8:51:38 Speed:10000 Mtu:8900} {Name:91f1c7bcd88e0a3 MacAddress:1e:bb:d7:f9:6c:47 Speed:10000 Mtu:8900} {Name:a28c1fb386c9688 MacAddress:32:6f:89:6a:a8:eb Speed:10000 Mtu:8900} {Name:bfb8eb142f502ea MacAddress:ee:42:e1:fc:3a:4e Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:7e:a3:96:7e:42:f6 Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:80:8b:c0 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:bd:d1:82 Speed:-1 Mtu:9000} {Name:f366572292d05f4 MacAddress:02:38:28:0e:56:28 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:7e:bd:f6:a4:63:b0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data 
Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 19 03:04:53.779298 master-0 kubenswrapper[7776]: I0219 03:04:53.779282 7776 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 19 03:04:53.779538 master-0 kubenswrapper[7776]: I0219 03:04:53.779523 7776 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 19 03:04:53.779921 master-0 kubenswrapper[7776]: I0219 03:04:53.779906 7776 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 19 03:04:53.780204 master-0 kubenswrapper[7776]: I0219 03:04:53.780168 7776 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 19 03:04:53.780572 master-0 kubenswrapper[7776]: I0219 03:04:53.780380 7776 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 19 03:04:53.780693 master-0 kubenswrapper[7776]: I0219 03:04:53.780683 7776 topology_manager.go:138] "Creating topology manager with none policy" Feb 19 03:04:53.780747 master-0 kubenswrapper[7776]: I0219 03:04:53.780739 7776 container_manager_linux.go:303] "Creating device plugin manager" Feb 19 03:04:53.780801 master-0 kubenswrapper[7776]: I0219 03:04:53.780792 7776 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 03:04:53.780866 master-0 kubenswrapper[7776]: I0219 03:04:53.780856 7776 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 03:04:53.781062 master-0 kubenswrapper[7776]: I0219 03:04:53.781052 7776 state_mem.go:36] "Initialized new in-memory state store" Feb 19 03:04:53.781192 master-0 kubenswrapper[7776]: I0219 03:04:53.781183 7776 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 19 03:04:53.781328 master-0 kubenswrapper[7776]: I0219 03:04:53.781317 7776 kubelet.go:418] "Attempting to sync node with API server" Feb 19 03:04:53.782466 master-0 kubenswrapper[7776]: I0219 03:04:53.782455 7776 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 19 03:04:53.782541 master-0 kubenswrapper[7776]: I0219 03:04:53.782531 7776 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 19 03:04:53.782608 master-0 kubenswrapper[7776]: I0219 03:04:53.782599 7776 kubelet.go:324] "Adding apiserver pod source" Feb 19 03:04:53.782663 master-0 kubenswrapper[7776]: I0219 03:04:53.782653 7776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 19 03:04:53.784014 master-0 kubenswrapper[7776]: I0219 03:04:53.783990 7776 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 19 03:04:53.784381 master-0 
kubenswrapper[7776]: I0219 03:04:53.784366 7776 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 19 03:04:53.784711 master-0 kubenswrapper[7776]: I0219 03:04:53.784700 7776 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 19 03:04:53.784884 master-0 kubenswrapper[7776]: I0219 03:04:53.784865 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 19 03:04:53.784975 master-0 kubenswrapper[7776]: I0219 03:04:53.784963 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 19 03:04:53.785059 master-0 kubenswrapper[7776]: I0219 03:04:53.785048 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 19 03:04:53.785132 master-0 kubenswrapper[7776]: I0219 03:04:53.785120 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 19 03:04:53.785214 master-0 kubenswrapper[7776]: I0219 03:04:53.785202 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 19 03:04:53.785318 master-0 kubenswrapper[7776]: I0219 03:04:53.785306 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 19 03:04:53.785400 master-0 kubenswrapper[7776]: I0219 03:04:53.785389 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 19 03:04:53.785480 master-0 kubenswrapper[7776]: I0219 03:04:53.785468 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 19 03:04:53.785557 master-0 kubenswrapper[7776]: I0219 03:04:53.785544 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 19 03:04:53.785627 master-0 kubenswrapper[7776]: I0219 03:04:53.785615 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 19 03:04:53.785700 master-0 kubenswrapper[7776]: I0219 03:04:53.785689 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 19 03:04:53.785857 master-0 kubenswrapper[7776]: I0219 03:04:53.785845 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 19 03:04:53.785936 master-0 kubenswrapper[7776]: I0219 03:04:53.785926 7776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 19 03:04:53.786333 master-0 kubenswrapper[7776]: I0219 03:04:53.786321 7776 server.go:1280] "Started kubelet" Feb 19 03:04:53.787445 master-0 kubenswrapper[7776]: I0219 03:04:53.787393 7776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 19 03:04:53.787541 master-0 kubenswrapper[7776]: I0219 03:04:53.787527 7776 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 19 03:04:53.787682 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 19 03:04:53.788132 master-0 kubenswrapper[7776]: I0219 03:04:53.788119 7776 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 19 03:04:53.788226 master-0 kubenswrapper[7776]: I0219 03:04:53.787392 7776 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 19 03:04:53.789657 master-0 kubenswrapper[7776]: I0219 03:04:53.789646 7776 server.go:449] "Adding debug handlers to kubelet server" Feb 19 03:04:53.793637 master-0 kubenswrapper[7776]: I0219 03:04:53.792611 7776 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 03:04:53.796444 master-0 kubenswrapper[7776]: I0219 03:04:53.796400 7776 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 03:04:53.797247 master-0 kubenswrapper[7776]: I0219 03:04:53.797220 7776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 19 03:04:53.797247 master-0 kubenswrapper[7776]: I0219 03:04:53.797268 7776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 19 03:04:53.797396 master-0 kubenswrapper[7776]: I0219 03:04:53.797297 7776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-20 02:55:16 +0000 UTC, rotation deadline is 2026-02-19 23:28:45.43314099 +0000 UTC Feb 19 03:04:53.797396 master-0 kubenswrapper[7776]: I0219 03:04:53.797348 7776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h23m51.635796832s for next certificate rotation Feb 19 03:04:53.797982 master-0 kubenswrapper[7776]: I0219 03:04:53.797481 7776 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 19 03:04:53.797982 master-0 kubenswrapper[7776]: I0219 03:04:53.797502 7776 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 19 03:04:53.797982 master-0 kubenswrapper[7776]: I0219 03:04:53.797539 7776 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 19 03:04:53.801534 master-0 kubenswrapper[7776]: I0219 03:04:53.801470 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fde19c2-64b1-409c-ad9c-2bb213a1cc74" volumeName="kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-daemon-config" seLinuxMountContext="" Feb 19 03:04:53.801607 master-0 kubenswrapper[7776]: I0219 03:04:53.801538 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ff96ce8-6427-4a42-afa6-8b8bc778f094" volumeName="kubernetes.io/configmap/9ff96ce8-6427-4a42-afa6-8b8bc778f094-trusted-ca" seLinuxMountContext="" Feb 19 03:04:53.801607 master-0 kubenswrapper[7776]: I0219 03:04:53.801557 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ff96ce8-6427-4a42-afa6-8b8bc778f094" volumeName="kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-kube-api-access-cpdqx" seLinuxMountContext="" Feb 19 03:04:53.801607 master-0 kubenswrapper[7776]: I0219 03:04:53.801570 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" volumeName="kubernetes.io/projected/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-kube-api-access-8cm45" seLinuxMountContext="" Feb 19 03:04:53.801607 master-0 kubenswrapper[7776]: I0219 03:04:53.801584 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b283bd8e-3339-4701-ae3c-f009e498b7d4" volumeName="kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-profile-collector-cert" seLinuxMountContext="" Feb 19 03:04:53.801607 master-0 kubenswrapper[7776]: I0219 03:04:53.801598 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4714ef51-2d24-4938-8c58-80c1485a368b" volumeName="kubernetes.io/secret/4714ef51-2d24-4938-8c58-80c1485a368b-serving-cert" seLinuxMountContext="" Feb 19 03:04:53.801734 master-0 kubenswrapper[7776]: I0219 03:04:53.801612 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" volumeName="kubernetes.io/projected/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-kube-api-access" seLinuxMountContext="" Feb 19 03:04:53.801734 master-0 kubenswrapper[7776]: I0219 03:04:53.801627 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58c6f5a2-c0a8-4636-a057-cedbe0151579" volumeName="kubernetes.io/configmap/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-trusted-ca" seLinuxMountContext="" Feb 19 03:04:53.801734 master-0 kubenswrapper[7776]: I0219 03:04:53.801643 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" volumeName="kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-service-ca-bundle" seLinuxMountContext="" Feb 19 03:04:53.801734 master-0 kubenswrapper[7776]: I0219 03:04:53.801656 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c50a2aec-7ed0-4114-8b25-19579fe931cb" volumeName="kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-profile-collector-cert" seLinuxMountContext="" Feb 19 03:04:53.801734 master-0 kubenswrapper[7776]: I0219 03:04:53.801670 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" volumeName="kubernetes.io/secret/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-serving-cert" seLinuxMountContext="" Feb 19 03:04:53.801734 master-0 kubenswrapper[7776]: I0219 03:04:53.801682 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fde19c2-64b1-409c-ad9c-2bb213a1cc74" volumeName="kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cni-binary-copy" seLinuxMountContext="" Feb 19 03:04:53.801734 master-0 kubenswrapper[7776]: I0219 03:04:53.801696 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" volumeName="kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-trusted-ca-bundle" seLinuxMountContext="" Feb 19 03:04:53.801734 master-0 kubenswrapper[7776]: I0219 03:04:53.801713 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="decd8c56-e0f0-4119-917f-56652c8f8372" volumeName="kubernetes.io/configmap/decd8c56-e0f0-4119-917f-56652c8f8372-iptables-alerter-script" seLinuxMountContext="" Feb 19 03:04:53.801734 master-0 kubenswrapper[7776]: I0219 03:04:53.801727 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a571c6-7c47-4b57-bc5b-e46544a114c8" volumeName="kubernetes.io/projected/15a571c6-7c47-4b57-bc5b-e46544a114c8-kube-api-access-crz8x" seLinuxMountContext="" Feb 19 03:04:53.801734 master-0 kubenswrapper[7776]: I0219 
03:04:53.801739 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5" volumeName="kubernetes.io/projected/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-kube-api-access-vdxnk" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801752 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4714ef51-2d24-4938-8c58-80c1485a368b" volumeName="kubernetes.io/projected/4714ef51-2d24-4938-8c58-80c1485a368b-kube-api-access" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801779 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58c6f5a2-c0a8-4636-a057-cedbe0151579" volumeName="kubernetes.io/projected/58c6f5a2-c0a8-4636-a057-cedbe0151579-kube-api-access-grhdv" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801793 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80c48134-cb22-4cf9-b076-ce39af2f4113" volumeName="kubernetes.io/configmap/80c48134-cb22-4cf9-b076-ce39af2f4113-telemetry-config" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801807 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" volumeName="kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-config" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801843 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" volumeName="kubernetes.io/secret/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-serving-cert" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801858 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" volumeName="kubernetes.io/projected/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-kube-api-access-r5wsp" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801872 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" volumeName="kubernetes.io/empty-dir/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-operand-assets" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801883 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b9d54aa-5f71-4a82-8e71-401ed3083a13" volumeName="kubernetes.io/secret/2b9d54aa-5f71-4a82-8e71-401ed3083a13-serving-cert" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801897 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/projected/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-kube-api-access-k6j8c" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801910 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="decd8c56-e0f0-4119-917f-56652c8f8372" 
volumeName="kubernetes.io/projected/decd8c56-e0f0-4119-917f-56652c8f8372-kube-api-access-8tqm5" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801925 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbc2f7d0-4bae-4d4a-b041-a624ec2b9333" volumeName="kubernetes.io/configmap/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-config" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801938 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" volumeName="kubernetes.io/configmap/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-config" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801950 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c9ed390-3b62-4b81-8c03-0c579a4a686a" volumeName="kubernetes.io/projected/6c9ed390-3b62-4b81-8c03-0c579a4a686a-kube-api-access" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801962 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" volumeName="kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801976 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbc2f7d0-4bae-4d4a-b041-a624ec2b9333" volumeName="kubernetes.io/secret/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-serving-cert" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.801990 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="05c9cb4a-5249-4116-a2e5-caa7859e2075" volumeName="kubernetes.io/projected/05c9cb4a-5249-4116-a2e5-caa7859e2075-kube-api-access-qrksf" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.802009 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4714ef51-2d24-4938-8c58-80c1485a368b" volumeName="kubernetes.io/configmap/4714ef51-2d24-4938-8c58-80c1485a368b-config" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.802023 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-ca" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.802037 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" volumeName="kubernetes.io/secret/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.802050 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52be87c-e707-4269-96da-537708d52b64" volumeName="kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-ovnkube-identity-cm" seLinuxMountContext="" Feb 19 03:04:53.802048 master-0 kubenswrapper[7776]: I0219 03:04:53.802065 7776 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" volumeName="kubernetes.io/configmap/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-service-ca" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802078 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a571c6-7c47-4b57-bc5b-e46544a114c8" volumeName="kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-env-overrides" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802098 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3edc7410-417a-4e55-9276-ac271fd52297" volumeName="kubernetes.io/configmap/3edc7410-417a-4e55-9276-ac271fd52297-config" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802111 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-config" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802123 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a59746bb-7d76-4fd7-8323-5b92be63afb9" volumeName="kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-bound-sa-token" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802137 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" volumeName="kubernetes.io/projected/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-kube-api-access-mj4rq" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802152 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b9d54aa-5f71-4a82-8e71-401ed3083a13" volumeName="kubernetes.io/projected/2b9d54aa-5f71-4a82-8e71-401ed3083a13-kube-api-access-vjwbx" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802165 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-serving-cert" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802178 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a59746bb-7d76-4fd7-8323-5b92be63afb9" volumeName="kubernetes.io/configmap/a59746bb-7d76-4fd7-8323-5b92be63afb9-trusted-ca" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802190 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="05c9cb4a-5249-4116-a2e5-caa7859e2075" volumeName="kubernetes.io/secret/05c9cb4a-5249-4116-a2e5-caa7859e2075-serving-cert" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802203 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a571c6-7c47-4b57-bc5b-e46544a114c8" volumeName="kubernetes.io/secret/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802218 
7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" volumeName="kubernetes.io/secret/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802232 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" volumeName="kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-whereabouts-configmap" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802268 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbc2f7d0-4bae-4d4a-b041-a624ec2b9333" volumeName="kubernetes.io/projected/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-kube-api-access-8p8qd" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802283 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b283bd8e-3339-4701-ae3c-f009e498b7d4" volumeName="kubernetes.io/projected/b283bd8e-3339-4701-ae3c-f009e498b7d4-kube-api-access-76css" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802296 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c50a2aec-7ed0-4114-8b25-19579fe931cb" volumeName="kubernetes.io/projected/c50a2aec-7ed0-4114-8b25-19579fe931cb-kube-api-access-7n9vm" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802314 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-service-ca" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802330 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80c48134-cb22-4cf9-b076-ce39af2f4113" volumeName="kubernetes.io/projected/80c48134-cb22-4cf9-b076-ce39af2f4113-kube-api-access-2dlvj" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802347 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ff96ce8-6427-4a42-afa6-8b8bc778f094" volumeName="kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-bound-sa-token" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802364 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98ac5423-b231-44e5-9545-424d635ed6ee" volumeName="kubernetes.io/projected/98ac5423-b231-44e5-9545-424d635ed6ee-kube-api-access-bq27v" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802382 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" volumeName="kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-config" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802396 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52be87c-e707-4269-96da-537708d52b64" 
volumeName="kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802409 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c791d8d0-6d78-4cdc-bac2-aa39bd3aae21" volumeName="kubernetes.io/projected/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-kube-api-access-gbffz" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802422 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" volumeName="kubernetes.io/projected/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-kube-api-access-nqt9k" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802434 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3edc7410-417a-4e55-9276-ac271fd52297" volumeName="kubernetes.io/secret/3edc7410-417a-4e55-9276-ac271fd52297-serving-cert" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802449 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ae2cbe0-aa0a-4f26-994b-660fb962d995" volumeName="kubernetes.io/projected/6ae2cbe0-aa0a-4f26-994b-660fb962d995-kube-api-access-46zzd" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802462 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="05c9cb4a-5249-4116-a2e5-caa7859e2075" volumeName="kubernetes.io/configmap/05c9cb4a-5249-4116-a2e5-caa7859e2075-config" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802474 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5" volumeName="kubernetes.io/configmap/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-trusted-ca" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802485 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fde19c2-64b1-409c-ad9c-2bb213a1cc74" volumeName="kubernetes.io/projected/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-kube-api-access-64lwt" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802498 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52be87c-e707-4269-96da-537708d52b64" volumeName="kubernetes.io/projected/a52be87c-e707-4269-96da-537708d52b64-kube-api-access-kv24m" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802510 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c791d8d0-6d78-4cdc-bac2-aa39bd3aae21" volumeName="kubernetes.io/secret/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-metrics-tls" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802524 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" volumeName="kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-binary-copy" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802537 7776 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="d6fae256-6a2e-45e7-8f2f-d471f46ad3b2" volumeName="kubernetes.io/projected/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2-kube-api-access-dhmpd" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802551 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3edc7410-417a-4e55-9276-ac271fd52297" volumeName="kubernetes.io/projected/3edc7410-417a-4e55-9276-ac271fd52297-kube-api-access-vzpth" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802563 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" volumeName="kubernetes.io/secret/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-serving-cert" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802575 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" volumeName="kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-script-lib" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802588 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" volumeName="kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-env-overrides" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802601 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52be87c-e707-4269-96da-537708d52b64" volumeName="kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-env-overrides" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802615 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b9d54aa-5f71-4a82-8e71-401ed3083a13" volumeName="kubernetes.io/configmap/2b9d54aa-5f71-4a82-8e71-401ed3083a13-config" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802628 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-client" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802641 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" volumeName="kubernetes.io/projected/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-kube-api-access-rn9d8" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802654 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" volumeName="kubernetes.io/empty-dir/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-available-featuregates" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802667 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="947faa21-7f67-4c7e-abb0-443432f38961" volumeName="kubernetes.io/projected/947faa21-7f67-4c7e-abb0-443432f38961-kube-api-access-jl7k7" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 
kubenswrapper[7776]: I0219 03:04:53.802680 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a59746bb-7d76-4fd7-8323-5b92be63afb9" volumeName="kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-kube-api-access-txq5k" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802694 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a571c6-7c47-4b57-bc5b-e46544a114c8" volumeName="kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovnkube-config" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802708 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67f4e002-26fb-41e3-abdb-f4928b6c561f" volumeName="kubernetes.io/projected/67f4e002-26fb-41e3-abdb-f4928b6c561f-kube-api-access-wqsbq" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802720 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c9ed390-3b62-4b81-8c03-0c579a4a686a" volumeName="kubernetes.io/secret/6c9ed390-3b62-4b81-8c03-0c579a4a686a-serving-cert" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802733 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c9ed390-3b62-4b81-8c03-0c579a4a686a" volumeName="kubernetes.io/configmap/6c9ed390-3b62-4b81-8c03-0c579a4a686a-config" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802746 7776 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" volumeName="kubernetes.io/projected/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-kube-api-access" seLinuxMountContext="" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802766 7776 reconstruct.go:97] "Volume reconstruction finished" Feb 19 03:04:53.802717 master-0 kubenswrapper[7776]: I0219 03:04:53.802774 7776 reconciler.go:26] "Reconciler: start to sync state" Feb 19 03:04:53.804653 master-0 kubenswrapper[7776]: I0219 03:04:53.803198 7776 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 03:04:53.804653 master-0 kubenswrapper[7776]: I0219 03:04:53.803202 7776 factory.go:55] Registering systemd factory Feb 19 03:04:53.804653 master-0 kubenswrapper[7776]: I0219 03:04:53.803289 7776 factory.go:221] Registration of the systemd container factory successfully Feb 19 03:04:53.804653 master-0 kubenswrapper[7776]: I0219 03:04:53.803648 7776 factory.go:153] Registering CRI-O factory Feb 19 03:04:53.804653 master-0 kubenswrapper[7776]: I0219 03:04:53.803667 7776 factory.go:221] Registration of the crio container factory successfully Feb 19 03:04:53.804653 master-0 kubenswrapper[7776]: I0219 03:04:53.803867 7776 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 19 03:04:53.804653 master-0 kubenswrapper[7776]: I0219 03:04:53.803915 7776 factory.go:103] Registering Raw factory Feb 19 03:04:53.804653 master-0 kubenswrapper[7776]: I0219 03:04:53.803935 7776 manager.go:1196] Started watching for new ooms in manager Feb 19 03:04:53.804653 
master-0 kubenswrapper[7776]: I0219 03:04:53.804425 7776 manager.go:319] Starting recovery of all containers Feb 19 03:04:53.835095 master-0 kubenswrapper[7776]: I0219 03:04:53.835040 7776 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 03:04:53.838896 master-0 kubenswrapper[7776]: I0219 03:04:53.838828 7776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 19 03:04:53.841231 master-0 kubenswrapper[7776]: I0219 03:04:53.841178 7776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 19 03:04:53.841321 master-0 kubenswrapper[7776]: I0219 03:04:53.841243 7776 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 03:04:53.841370 master-0 kubenswrapper[7776]: I0219 03:04:53.841321 7776 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 03:04:53.841460 master-0 kubenswrapper[7776]: E0219 03:04:53.841398 7776 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 03:04:53.842911 master-0 kubenswrapper[7776]: I0219 03:04:53.842879 7776 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 03:04:53.857440 master-0 kubenswrapper[7776]: I0219 03:04:53.857349 7776 generic.go:334] "Generic (PLEG): container finished" podID="bd7240e7-9923-4485-a055-0e1364954af9" containerID="ea7babb48d9acc19a51058d43972a14b4a1ed0d3f15fadbbc95a57a23953a57e" exitCode=0 Feb 19 03:04:53.866739 master-0 kubenswrapper[7776]: I0219 03:04:53.866685 7776 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="10ad446c5ae8d63affc8eb0bacbb20232d6d1b38bc9bc64c6e6df2fe6d1b6cfd" exitCode=0 Feb 19 03:04:53.874423 master-0 kubenswrapper[7776]: I0219 03:04:53.874369 7776 generic.go:334] "Generic (PLEG): container finished" podID="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" containerID="e1fdaebfc69e9354cdd956d93bd8b91f87df452473c04d8a78f864f320d237fa" exitCode=0 Feb 19 03:04:53.885246 master-0 kubenswrapper[7776]: I0219 03:04:53.885204 7776 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="df79d74c2fc5980bfc6e9850c3ffca3b314448c7df3cef006d2546392b263b4e" exitCode=0 Feb 19 03:04:53.885376 master-0 kubenswrapper[7776]: I0219 03:04:53.885245 7776 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="a18d99c878639b9d3805f870752927c3437cf7b6b29a033142fd63915d0b18e8" exitCode=0 Feb 19 03:04:53.885376 master-0 kubenswrapper[7776]: I0219 03:04:53.885295 7776 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="a1adfe00d9aa195d9236868bc3cdaa7708f6f91c8e97bcc9dc23bf44a824c667" exitCode=0 Feb 19 03:04:53.885376 master-0 kubenswrapper[7776]: I0219 03:04:53.885314 7776 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="d07c6f7253d4f5bf400e52d3abf09e67dc06d685b2053d96aa22769fe9305dd6" exitCode=0 Feb 19 03:04:53.885376 master-0 kubenswrapper[7776]: I0219 03:04:53.885332 7776 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="87ced28296b6205caeec80cb40be9541d7f81c97bea9198b50ce4babeda1daa1" exitCode=0 Feb 19 03:04:53.885376 master-0 kubenswrapper[7776]: I0219 03:04:53.885350 7776 generic.go:334] "Generic (PLEG): container finished" 
podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="d7038f953677e8d7419f5a2fddb13ce55d744e0baf108c01044bd406543eeae9" exitCode=0 Feb 19 03:04:53.886869 master-0 kubenswrapper[7776]: I0219 03:04:53.886826 7776 generic.go:334] "Generic (PLEG): container finished" podID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerID="23060c94450b0089de5446d5e52f8e87d35f8af868d80c88ad4e43f6b97218f6" exitCode=0 Feb 19 03:04:53.888855 master-0 kubenswrapper[7776]: I0219 03:04:53.888832 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 19 03:04:53.889300 master-0 kubenswrapper[7776]: I0219 03:04:53.889273 7776 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="53d32d6e913448c501ea08b87db55bb0233a108aad73fab0d0903446a3305ceb" exitCode=1 Feb 19 03:04:53.889300 master-0 kubenswrapper[7776]: I0219 03:04:53.889300 7776 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="057cad626bcfaec41c462ca1ec27ee5d9cbc1905800d5d8b5f0df0e891b48ec8" exitCode=0 Feb 19 03:04:53.941618 master-0 kubenswrapper[7776]: E0219 03:04:53.941573 7776 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 03:04:53.959367 master-0 kubenswrapper[7776]: I0219 03:04:53.959308 7776 manager.go:324] Recovery completed Feb 19 03:04:54.004050 master-0 kubenswrapper[7776]: I0219 03:04:54.003972 7776 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 03:04:54.004050 master-0 kubenswrapper[7776]: I0219 03:04:54.004011 7776 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 19 03:04:54.004050 master-0 kubenswrapper[7776]: I0219 03:04:54.004038 7776 state_mem.go:36] "Initialized new in-memory state store" Feb 19 03:04:54.004396 master-0 kubenswrapper[7776]: I0219 03:04:54.004245 7776 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 19 03:04:54.004396 master-0 kubenswrapper[7776]: I0219 03:04:54.004280 7776 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 19 03:04:54.004396 master-0 kubenswrapper[7776]: I0219 03:04:54.004309 7776 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 19 03:04:54.004396 master-0 kubenswrapper[7776]: I0219 03:04:54.004319 7776 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 19 03:04:54.004396 master-0 kubenswrapper[7776]: I0219 03:04:54.004328 7776 policy_none.go:49] "None policy: Start" Feb 19 03:04:54.005976 master-0 kubenswrapper[7776]: I0219 03:04:54.005848 7776 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 19 03:04:54.005976 master-0 kubenswrapper[7776]: I0219 03:04:54.005907 7776 state_mem.go:35] "Initializing new in-memory state store" Feb 19 03:04:54.006241 master-0 kubenswrapper[7776]: I0219 03:04:54.006209 7776 state_mem.go:75] "Updated machine memory state" Feb 19 03:04:54.006241 master-0 kubenswrapper[7776]: I0219 03:04:54.006233 7776 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 19 03:04:54.017219 master-0 kubenswrapper[7776]: I0219 03:04:54.017179 7776 manager.go:334] "Starting Device Plugin manager" Feb 19 03:04:54.017410 master-0 kubenswrapper[7776]: I0219 03:04:54.017235 7776 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 03:04:54.017410 master-0 
kubenswrapper[7776]: I0219 03:04:54.017270 7776 server.go:79] "Starting device plugin registration server" Feb 19 03:04:54.017753 master-0 kubenswrapper[7776]: I0219 03:04:54.017725 7776 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 03:04:54.017817 master-0 kubenswrapper[7776]: I0219 03:04:54.017750 7776 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 03:04:54.017958 master-0 kubenswrapper[7776]: I0219 03:04:54.017929 7776 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 03:04:54.018050 master-0 kubenswrapper[7776]: I0219 03:04:54.018034 7776 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 03:04:54.018050 master-0 kubenswrapper[7776]: I0219 03:04:54.018044 7776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 03:04:54.118613 master-0 kubenswrapper[7776]: I0219 03:04:54.118555 7776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:04:54.119907 master-0 kubenswrapper[7776]: I0219 03:04:54.119864 7776 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:04:54.119969 master-0 kubenswrapper[7776]: I0219 03:04:54.119915 7776 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:04:54.119969 master-0 kubenswrapper[7776]: I0219 03:04:54.119925 7776 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:04:54.120060 master-0 kubenswrapper[7776]: I0219 03:04:54.119994 7776 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:04:54.142503 master-0 kubenswrapper[7776]: I0219 03:04:54.142338 7776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Feb 19 03:04:54.143493 master-0 kubenswrapper[7776]: I0219 03:04:54.143435 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="283bb664d05497a1a2860aa4ed09016f970c031a28a0d52e1f75f9e5c4763c8d" Feb 19 03:04:54.143638 master-0 kubenswrapper[7776]: I0219 03:04:54.143481 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65"} Feb 19 03:04:54.143638 master-0 kubenswrapper[7776]: I0219 03:04:54.143549 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908"} Feb 19 03:04:54.143638 master-0 kubenswrapper[7776]: I0219 03:04:54.143561 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"10880e65f8f1292bea461c369196b5d5099f3abb559d63f3afe6c53ad3ae1a5f"} Feb 19 03:04:54.143638 master-0 kubenswrapper[7776]: I0219 03:04:54.143573 7776 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="e4146cefc32a1cf1a141a5a634ddc772fb63d10e2b446299bbca1aa5f88fa1c7" Feb 19 03:04:54.143638 master-0 kubenswrapper[7776]: I0219 03:04:54.143585 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"82a40f80e34c4f63706840b48b0aa48486b2ad68c13d50974f11a3442433c7ea"} Feb 19 03:04:54.143638 master-0 kubenswrapper[7776]: I0219 03:04:54.143597 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"d18413342a722838be3aeba368600d701226af1bb0655a2558eb4a099c9c2796"} Feb 19 03:04:54.143638 master-0 kubenswrapper[7776]: I0219 03:04:54.143608 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"10ad446c5ae8d63affc8eb0bacbb20232d6d1b38bc9bc64c6e6df2fe6d1b6cfd"} Feb 19 03:04:54.143638 master-0 kubenswrapper[7776]: I0219 03:04:54.143621 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"288c3a57623280dd907a240618bbdd493e84db9c6fc6a9b8ebbd7c2959445df1"} Feb 19 03:04:54.143865 master-0 kubenswrapper[7776]: I0219 03:04:54.143646 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a"} Feb 19 03:04:54.143865 master-0 kubenswrapper[7776]: I0219 03:04:54.143658 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6"} Feb 19 03:04:54.143865 master-0 kubenswrapper[7776]: I0219 03:04:54.143668 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"c741144c76ccb27ab8a3627dd9a2beb2d675b354f4a6e2cb399b5a08240ea149"} Feb 19 03:04:54.143865 master-0 kubenswrapper[7776]: I0219 03:04:54.143678 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff"} Feb 19 03:04:54.143865 master-0 kubenswrapper[7776]: I0219 03:04:54.143689 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"13103220887a41b425edd349c524421eaa06bddd41c4d0276cf0be744cde8eaf"} Feb 19 03:04:54.143865 master-0 kubenswrapper[7776]: I0219 03:04:54.143724 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cd5bff57449ca5fcd515236a8abe6e347dc3b6ea4ab8480dc9821e2c6351f26" Feb 19 03:04:54.143865 master-0 kubenswrapper[7776]: I0219 03:04:54.143735 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"5063a55beab9e17c44bf467460af64eb399204406812c9ae4e396f59fae30a15"} Feb 19 03:04:54.143865 master-0 kubenswrapper[7776]: I0219 03:04:54.143747 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"53d32d6e913448c501ea08b87db55bb0233a108aad73fab0d0903446a3305ceb"} Feb 19 03:04:54.143865 master-0 kubenswrapper[7776]: I0219 03:04:54.143770 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"057cad626bcfaec41c462ca1ec27ee5d9cbc1905800d5d8b5f0df0e891b48ec8"} Feb 19 03:04:54.143865 master-0 kubenswrapper[7776]: I0219 03:04:54.143783 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55"} Feb 19 03:04:54.163440 master-0 kubenswrapper[7776]: E0219 03:04:54.163356 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.164720 master-0 kubenswrapper[7776]: W0219 03:04:54.164481 7776 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 19 03:04:54.164720 master-0 kubenswrapper[7776]: E0219 03:04:54.164623 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:04:54.165035 master-0 kubenswrapper[7776]: E0219 03:04:54.164620 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:04:54.165220 master-0 kubenswrapper[7776]: E0219 03:04:54.165169 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.172054 master-0 kubenswrapper[7776]: I0219 03:04:54.172004 7776 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 19 03:04:54.172224 master-0 kubenswrapper[7776]: I0219 03:04:54.172153 7776 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 19 03:04:54.178650 master-0 kubenswrapper[7776]: E0219 03:04:54.178599 7776 kubelet.go:1929] 
"Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:04:54.236249 master-0 kubenswrapper[7776]: I0219 03:04:54.236206 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.236249 master-0 kubenswrapper[7776]: I0219 03:04:54.236265 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.236512 master-0 kubenswrapper[7776]: I0219 03:04:54.236290 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.236512 master-0 kubenswrapper[7776]: I0219 03:04:54.236309 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:04:54.236512 master-0 kubenswrapper[7776]: I0219 03:04:54.236396 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:04:54.236512 master-0 kubenswrapper[7776]: I0219 03:04:54.236449 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.236512 master-0 kubenswrapper[7776]: I0219 03:04:54.236480 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.236512 master-0 kubenswrapper[7776]: I0219 03:04:54.236501 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" 
Feb 19 03:04:54.236680 master-0 kubenswrapper[7776]: I0219 03:04:54.236523 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.236680 master-0 kubenswrapper[7776]: I0219 03:04:54.236550 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.236680 master-0 kubenswrapper[7776]: I0219 03:04:54.236572 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.236680 master-0 kubenswrapper[7776]: I0219 03:04:54.236591 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.236680 master-0 kubenswrapper[7776]: I0219 03:04:54.236611 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:04:54.236802 master-0 kubenswrapper[7776]: I0219 03:04:54.236678 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:04:54.236802 master-0 kubenswrapper[7776]: I0219 03:04:54.236715 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.236802 master-0 kubenswrapper[7776]: I0219 03:04:54.236757 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:04:54.236802 master-0 kubenswrapper[7776]: I0219 03:04:54.236797 7776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:04:54.337484 master-0 kubenswrapper[7776]: I0219 03:04:54.337368 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.337484 master-0 kubenswrapper[7776]: I0219 03:04:54.337431 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:04:54.337484 master-0 kubenswrapper[7776]: I0219 03:04:54.337462 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:04:54.337740 master-0 kubenswrapper[7776]: I0219 03:04:54.337542 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.337740 master-0 kubenswrapper[7776]: I0219 03:04:54.337620 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.337740 master-0 kubenswrapper[7776]: I0219 03:04:54.337648 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.337872 master-0 kubenswrapper[7776]: I0219 03:04:54.337756 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:04:54.337872 master-0 kubenswrapper[7776]: I0219 03:04:54.337836 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:04:54.337953 master-0 kubenswrapper[7776]: I0219 03:04:54.337871 
7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:04:54.337953 master-0 kubenswrapper[7776]: I0219 03:04:54.337894 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:04:54.337953 master-0 kubenswrapper[7776]: I0219 03:04:54.337908 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.337953 master-0 kubenswrapper[7776]: I0219 03:04:54.337920 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.337952 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.337958 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.337972 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.337985 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.337993 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " 
pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.338016 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.338017 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.337995 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.338040 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.338055 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.338063 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.338083 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.338096 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:04:54.338100 master-0 kubenswrapper[7776]: I0219 03:04:54.338106 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.338642 master-0 kubenswrapper[7776]: I0219 03:04:54.338131 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.338642 master-0 kubenswrapper[7776]: I0219 03:04:54.338137 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.338642 master-0 kubenswrapper[7776]: I0219 03:04:54.338151 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.338642 master-0 kubenswrapper[7776]: I0219 03:04:54.338333 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.338642 master-0 kubenswrapper[7776]: I0219 03:04:54.338335 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.338642 master-0 kubenswrapper[7776]: I0219 03:04:54.338414 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.338642 master-0 kubenswrapper[7776]: I0219 03:04:54.338375 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:54.338642 master-0 kubenswrapper[7776]: I0219 03:04:54.338393 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:54.784348 master-0 kubenswrapper[7776]: I0219 03:04:54.783924 7776 apiserver.go:52] "Watching apiserver" Feb 19 03:04:54.795182 master-0 kubenswrapper[7776]: I0219 03:04:54.795109 7776 reflector.go:368] Caches populated for *v1.Pod 
from pkg/kubelet/config/apiserver.go:66 Feb 19 03:04:54.796100 master-0 kubenswrapper[7776]: I0219 03:04:54.796051 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq","openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t","openshift-ovn-kubernetes/ovnkube-node-pw7dx","openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7","openshift-multus/multus-additional-cni-plugins-bs5qd","openshift-network-operator/network-operator-7d7db75979-jbztp","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq","openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v","openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9","openshift-marketplace/marketplace-operator-6f5488b997-xxdh5","openshift-network-diagnostics/network-check-target-c6c25","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8","openshift-network-operator/iptables-alerter-kvvll","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h","assisted-installer/assisted-installer-controller-tw8v2","openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l","openshift-network-node-identity/network-node-identity-rm5jg","kube-system/bootstrap-kube-controller-manager-master-0","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l","openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8","kube-system/bootstrap-kube-scheduler-master-0","openshift-dns-operator/dns-operator-8c7d49845-jlnvw","openshift-ingress-operator/ingress-operator-6569778c84-qcd49","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-multus/multus-4lzdj","openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv","openshift-multus/network-metrics-daemon-hspwc","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj"] Feb 19 03:04:54.796437 master-0 kubenswrapper[7776]: I0219 03:04:54.796396 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:54.796492 master-0 kubenswrapper[7776]: I0219 03:04:54.796443 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:04:54.796612 master-0 kubenswrapper[7776]: I0219 03:04:54.796565 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:54.801471 master-0 kubenswrapper[7776]: I0219 03:04:54.796774 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:54.801471 master-0 kubenswrapper[7776]: I0219 03:04:54.796811 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:54.801471 master-0 kubenswrapper[7776]: I0219 03:04:54.797143 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:54.801471 master-0 kubenswrapper[7776]: I0219 03:04:54.797477 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:54.801471 master-0 kubenswrapper[7776]: I0219 03:04:54.797589 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.801471 master-0 kubenswrapper[7776]: I0219 03:04:54.798198 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 03:04:54.801471 master-0 kubenswrapper[7776]: I0219 03:04:54.798533 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:54.801471 master-0 kubenswrapper[7776]: I0219 03:04:54.798566 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:54.801471 master-0 kubenswrapper[7776]: I0219 03:04:54.799042 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:54.807038 master-0 kubenswrapper[7776]: I0219 03:04:54.806934 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 03:04:54.807668 master-0 kubenswrapper[7776]: I0219 03:04:54.803233 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:54.807781 master-0 kubenswrapper[7776]: I0219 03:04:54.807761 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:54.808323 master-0 kubenswrapper[7776]: I0219 03:04:54.808273 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 03:04:54.808381 master-0 kubenswrapper[7776]: I0219 03:04:54.808357 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 03:04:54.808429 master-0 kubenswrapper[7776]: I0219 03:04:54.808420 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 03:04:54.808501 master-0 kubenswrapper[7776]: I0219 03:04:54.808486 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 19 03:04:54.808581 master-0 kubenswrapper[7776]: I0219 03:04:54.808565 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 03:04:54.808635 master-0 kubenswrapper[7776]: I0219 03:04:54.808617 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 03:04:54.808690 master-0 kubenswrapper[7776]: I0219 03:04:54.808675 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 03:04:54.809007 master-0 kubenswrapper[7776]: I0219 03:04:54.808960 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:54.809487 master-0 kubenswrapper[7776]: I0219 03:04:54.809461 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.809599 master-0 kubenswrapper[7776]: I0219 03:04:54.809580 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:04:54.809705 master-0 kubenswrapper[7776]: I0219 03:04:54.809677 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 03:04:54.809821 master-0 kubenswrapper[7776]: I0219 03:04:54.809800 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 03:04:54.809914 master-0 kubenswrapper[7776]: I0219 03:04:54.809892 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 03:04:54.809970 master-0 kubenswrapper[7776]: I0219 03:04:54.809802 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.810594 master-0 kubenswrapper[7776]: I0219 03:04:54.810560 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 03:04:54.810844 master-0 kubenswrapper[7776]: I0219 03:04:54.810787 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 03:04:54.810912 master-0 kubenswrapper[7776]: I0219 03:04:54.810886 7776 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 03:04:54.814427 master-0 kubenswrapper[7776]: I0219 03:04:54.813803 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 03:04:54.814427 master-0 kubenswrapper[7776]: I0219 03:04:54.813943 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 03:04:54.814427 master-0 kubenswrapper[7776]: I0219 03:04:54.813954 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 03:04:54.814848 master-0 kubenswrapper[7776]: I0219 03:04:54.814823 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 03:04:54.815200 master-0 kubenswrapper[7776]: I0219 03:04:54.815163 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 03:04:54.815274 master-0 kubenswrapper[7776]: I0219 03:04:54.815236 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 03:04:54.815384 master-0 kubenswrapper[7776]: I0219 03:04:54.815365 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 03:04:54.815445 master-0 kubenswrapper[7776]: I0219 03:04:54.815397 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.815538 master-0 kubenswrapper[7776]: I0219 03:04:54.815517 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 03:04:54.815681 master-0 kubenswrapper[7776]: I0219 03:04:54.815657 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 03:04:54.815770 master-0 kubenswrapper[7776]: I0219 03:04:54.815746 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 03:04:54.815770 master-0 kubenswrapper[7776]: I0219 03:04:54.815764 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 03:04:54.816922 master-0 kubenswrapper[7776]: I0219 03:04:54.815945 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 03:04:54.816922 master-0 kubenswrapper[7776]: I0219 03:04:54.816170 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 19 03:04:54.816922 master-0 kubenswrapper[7776]: I0219 03:04:54.816319 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.816922 master-0 kubenswrapper[7776]: I0219 03:04:54.816326 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 03:04:54.816922 master-0 kubenswrapper[7776]: I0219 03:04:54.816426 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.816922 master-0 kubenswrapper[7776]: I0219 03:04:54.816531 7776 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 03:04:54.816922 master-0 kubenswrapper[7776]: I0219 03:04:54.816640 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.816922 master-0 kubenswrapper[7776]: I0219 03:04:54.816744 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 19 03:04:54.817629 master-0 kubenswrapper[7776]: I0219 03:04:54.817245 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 03:04:54.817723 master-0 kubenswrapper[7776]: I0219 03:04:54.817308 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.817780 master-0 kubenswrapper[7776]: I0219 03:04:54.817524 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 03:04:54.817821 master-0 kubenswrapper[7776]: I0219 03:04:54.817564 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 19 03:04:54.817895 master-0 kubenswrapper[7776]: I0219 03:04:54.817593 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 03:04:54.817946 master-0 kubenswrapper[7776]: I0219 03:04:54.817934 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 03:04:54.817987 master-0 kubenswrapper[7776]: I0219 03:04:54.817954 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 19 03:04:54.818158 master-0 kubenswrapper[7776]: I0219 03:04:54.818132 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 03:04:54.818332 master-0 kubenswrapper[7776]: I0219 03:04:54.818312 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 03:04:54.818510 master-0 kubenswrapper[7776]: I0219 03:04:54.818490 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 03:04:54.819481 master-0 kubenswrapper[7776]: I0219 03:04:54.818139 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 19 03:04:54.819747 master-0 kubenswrapper[7776]: I0219 03:04:54.819660 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 03:04:54.820195 master-0 kubenswrapper[7776]: I0219 03:04:54.820162 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 03:04:54.821329 master-0 kubenswrapper[7776]: I0219 03:04:54.821233 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 03:04:54.821388 master-0 kubenswrapper[7776]: I0219 03:04:54.821351 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 03:04:54.821974 master-0 kubenswrapper[7776]: I0219 03:04:54.821948 7776 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 03:04:54.826780 master-0 kubenswrapper[7776]: I0219 03:04:54.826751 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 03:04:54.828333 master-0 kubenswrapper[7776]: I0219 03:04:54.827438 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 03:04:54.828333 master-0 kubenswrapper[7776]: I0219 03:04:54.827703 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 03:04:54.828333 master-0 kubenswrapper[7776]: I0219 03:04:54.827859 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 03:04:54.828333 master-0 kubenswrapper[7776]: I0219 03:04:54.827903 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 03:04:54.828333 master-0 kubenswrapper[7776]: I0219 03:04:54.827988 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 03:04:54.828333 master-0 kubenswrapper[7776]: I0219 03:04:54.828152 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 03:04:54.828333 master-0 kubenswrapper[7776]: I0219 03:04:54.828155 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 03:04:54.828333 master-0 kubenswrapper[7776]: I0219 03:04:54.828335 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.828697 master-0 kubenswrapper[7776]: I0219 03:04:54.828645 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 03:04:54.828987 master-0 kubenswrapper[7776]: I0219 03:04:54.828827 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 03:04:54.828987 master-0 kubenswrapper[7776]: I0219 03:04:54.828929 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.829248 master-0 kubenswrapper[7776]: I0219 03:04:54.829228 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 03:04:54.829421 master-0 kubenswrapper[7776]: I0219 03:04:54.829402 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 03:04:54.829512 master-0 kubenswrapper[7776]: I0219 03:04:54.829478 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 03:04:54.829646 master-0 kubenswrapper[7776]: I0219 03:04:54.829562 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 03:04:54.829752 master-0 kubenswrapper[7776]: I0219 03:04:54.829661 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.829909 7776 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.829960 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.829988 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.830034 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.829912 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.830172 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.830210 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.830267 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.830356 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.830387 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.830447 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.830463 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.830758 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.831039 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:04:54.831223 master-0 kubenswrapper[7776]: I0219 03:04:54.831149 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 03:04:54.831753 master-0 kubenswrapper[7776]: I0219 03:04:54.831299 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 19 03:04:54.831753 master-0 kubenswrapper[7776]: I0219 03:04:54.831371 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 03:04:54.831753 master-0 kubenswrapper[7776]: I0219 03:04:54.831585 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 
03:04:54.831753 master-0 kubenswrapper[7776]: I0219 03:04:54.831594 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 19 03:04:54.831753 master-0 kubenswrapper[7776]: I0219 03:04:54.831626 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 19 03:04:54.831753 master-0 kubenswrapper[7776]: I0219 03:04:54.831653 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 03:04:54.832315 master-0 kubenswrapper[7776]: I0219 03:04:54.832002 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 03:04:54.832315 master-0 kubenswrapper[7776]: I0219 03:04:54.832162 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 03:04:54.836087 master-0 kubenswrapper[7776]: I0219 03:04:54.835845 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.836087 master-0 kubenswrapper[7776]: I0219 03:04:54.835960 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 03:04:54.840573 master-0 kubenswrapper[7776]: I0219 03:04:54.840542 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrksf\" (UniqueName: \"kubernetes.io/projected/05c9cb4a-5249-4116-a2e5-caa7859e2075-kube-api-access-qrksf\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:54.840652 master-0 kubenswrapper[7776]: I0219 03:04:54.840587 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzpth\" (UniqueName: \"kubernetes.io/projected/3edc7410-417a-4e55-9276-ac271fd52297-kube-api-access-vzpth\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:54.840652 master-0 kubenswrapper[7776]: I0219 03:04:54.840613 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-service-ca\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.840652 master-0 kubenswrapper[7776]: I0219 03:04:54.840636 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76css\" (UniqueName: \"kubernetes.io/projected/b283bd8e-3339-4701-ae3c-f009e498b7d4-kube-api-access-76css\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:54.840736 master-0 kubenswrapper[7776]: I0219 03:04:54.840668 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: 
\"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:54.840736 master-0 kubenswrapper[7776]: I0219 03:04:54.840692 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:54.840736 master-0 kubenswrapper[7776]: I0219 03:04:54.840719 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:54.840817 master-0 kubenswrapper[7776]: I0219 03:04:54.840741 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6j8c\" (UniqueName: \"kubernetes.io/projected/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-kube-api-access-k6j8c\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:54.840817 master-0 kubenswrapper[7776]: I0219 03:04:54.840764 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4714ef51-2d24-4938-8c58-80c1485a368b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:54.840817 master-0 kubenswrapper[7776]: I0219 03:04:54.840785 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:54.840817 master-0 kubenswrapper[7776]: I0219 03:04:54.840806 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:54.840922 master-0 kubenswrapper[7776]: I0219 03:04:54.840829 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txq5k\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-kube-api-access-txq5k\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:54.840922 master-0 kubenswrapper[7776]: I0219 03:04:54.840854 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/05c9cb4a-5249-4116-a2e5-caa7859e2075-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:54.840922 master-0 kubenswrapper[7776]: I0219 03:04:54.840896 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-metrics-tls\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:04:54.841008 master-0 kubenswrapper[7776]: I0219 03:04:54.840922 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-config\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:54.841008 master-0 kubenswrapper[7776]: I0219 03:04:54.840946 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff96ce8-6427-4a42-afa6-8b8bc778f094-trusted-ca\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:54.841008 master-0 kubenswrapper[7776]: I0219 03:04:54.840969 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9ed390-3b62-4b81-8c03-0c579a4a686a-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:54.841008 master-0 kubenswrapper[7776]: I0219 03:04:54.840994 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b9d54aa-5f71-4a82-8e71-401ed3083a13-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:54.841107 master-0 kubenswrapper[7776]: I0219 03:04:54.841019 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqt9k\" (UniqueName: \"kubernetes.io/projected/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-kube-api-access-nqt9k\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:54.841107 master-0 kubenswrapper[7776]: I0219 03:04:54.841044 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:54.841107 master-0 kubenswrapper[7776]: I0219 03:04:54.841087 7776 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/80c48134-cb22-4cf9-b076-ce39af2f4113-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:54.841186 master-0 kubenswrapper[7776]: I0219 03:04:54.841117 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-serving-cert\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:54.841186 master-0 kubenswrapper[7776]: I0219 03:04:54.841142 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:54.841186 master-0 kubenswrapper[7776]: I0219 03:04:54.841182 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-bound-sa-token\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:54.841290 master-0 kubenswrapper[7776]: I0219 03:04:54.841208 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:54.841290 master-0 kubenswrapper[7776]: I0219 03:04:54.841233 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhmpd\" (UniqueName: \"kubernetes.io/projected/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2-kube-api-access-dhmpd\") pod \"csi-snapshot-controller-operator-6fb4df594f-mtqxj\" (UID: \"d6fae256-6a2e-45e7-8f2f-d471f46ad3b2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" Feb 19 03:04:54.841290 master-0 kubenswrapper[7776]: I0219 03:04:54.841282 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjwbx\" (UniqueName: \"kubernetes.io/projected/2b9d54aa-5f71-4a82-8e71-401ed3083a13-kube-api-access-vjwbx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:54.841376 master-0 kubenswrapper[7776]: I0219 03:04:54.841307 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:54.841376 master-0 kubenswrapper[7776]: I0219 03:04:54.841349 7776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:54.841376 master-0 kubenswrapper[7776]: I0219 03:04:54.841350 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:54.841487 master-0 kubenswrapper[7776]: I0219 03:04:54.841390 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:54.841487 master-0 kubenswrapper[7776]: I0219 03:04:54.841416 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.841487 master-0 kubenswrapper[7776]: I0219 03:04:54.841440 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:54.841487 master-0 kubenswrapper[7776]: I0219 03:04:54.841479 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-config\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:54.841590 master-0 kubenswrapper[7776]: I0219 03:04:54.841515 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:54.841590 master-0 kubenswrapper[7776]: I0219 03:04:54.841543 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4714ef51-2d24-4938-8c58-80c1485a368b-config\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:54.841590 master-0 kubenswrapper[7776]: 
I0219 03:04:54.841581 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.841669 master-0 kubenswrapper[7776]: I0219 03:04:54.841605 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpdqx\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-kube-api-access-cpdqx\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:54.841669 master-0 kubenswrapper[7776]: I0219 03:04:54.841630 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3edc7410-417a-4e55-9276-ac271fd52297-serving-cert\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:54.841669 master-0 kubenswrapper[7776]: I0219 03:04:54.841654 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj4rq\" (UniqueName: \"kubernetes.io/projected/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-kube-api-access-mj4rq\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:54.841746 master-0 kubenswrapper[7776]: I0219 03:04:54.841673 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff96ce8-6427-4a42-afa6-8b8bc778f094-trusted-ca\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:54.841746 master-0 kubenswrapper[7776]: I0219 03:04:54.841679 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:54.841801 master-0 kubenswrapper[7776]: I0219 03:04:54.841761 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-service-ca\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.841801 master-0 kubenswrapper[7776]: I0219 03:04:54.841763 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:54.841857 master-0 
kubenswrapper[7776]: I0219 03:04:54.841806 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c9cb4a-5249-4116-a2e5-caa7859e2075-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:54.841857 master-0 kubenswrapper[7776]: I0219 03:04:54.841836 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:54.841909 master-0 kubenswrapper[7776]: I0219 03:04:54.841862 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p8qd\" (UniqueName: \"kubernetes.io/projected/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-kube-api-access-8p8qd\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:54.841909 master-0 kubenswrapper[7776]: I0219 03:04:54.841889 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdxnk\" (UniqueName: \"kubernetes.io/projected/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-kube-api-access-vdxnk\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:54.841964 master-0 kubenswrapper[7776]: I0219 03:04:54.841913 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-serving-cert\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:54.841964 master-0 kubenswrapper[7776]: I0219 03:04:54.841938 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grhdv\" (UniqueName: \"kubernetes.io/projected/58c6f5a2-c0a8-4636-a057-cedbe0151579-kube-api-access-grhdv\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:54.842015 master-0 kubenswrapper[7776]: I0219 03:04:54.841964 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4714ef51-2d24-4938-8c58-80c1485a368b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:54.842015 master-0 kubenswrapper[7776]: I0219 03:04:54.841989 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9d54aa-5f71-4a82-8e71-401ed3083a13-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: 
\"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:54.842072 master-0 kubenswrapper[7776]: I0219 03:04:54.842014 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:54.842072 master-0 kubenswrapper[7776]: I0219 03:04:54.842040 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn9d8\" (UniqueName: \"kubernetes.io/projected/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-kube-api-access-rn9d8\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:54.842072 master-0 kubenswrapper[7776]: I0219 03:04:54.842064 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b9d54aa-5f71-4a82-8e71-401ed3083a13-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:54.842151 master-0 kubenswrapper[7776]: I0219 03:04:54.842066 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a59746bb-7d76-4fd7-8323-5b92be63afb9-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:54.842151 master-0 kubenswrapper[7776]: I0219 03:04:54.842070 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c9cb4a-5249-4116-a2e5-caa7859e2075-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.841939 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9ed390-3b62-4b81-8c03-0c579a4a686a-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842480 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a59746bb-7d76-4fd7-8323-5b92be63afb9-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842487 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-metrics-tls\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842493 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-config\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842540 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842604 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/80c48134-cb22-4cf9-b076-ce39af2f4113-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842649 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842669 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842676 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842735 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-config\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842738 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-serving-cert\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842763 7776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqsbq\" (UniqueName: \"kubernetes.io/projected/67f4e002-26fb-41e3-abdb-f4928b6c561f-kube-api-access-wqsbq\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842790 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842812 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842957 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842991 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843001 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-serving-cert\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843013 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dlvj\" (UniqueName: \"kubernetes.io/projected/80c48134-cb22-4cf9-b076-ce39af2f4113-kube-api-access-2dlvj\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.842987 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c9cb4a-5249-4116-a2e5-caa7859e2075-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843052 7776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edc7410-417a-4e55-9276-ac271fd52297-config\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843087 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843106 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843089 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c9ed390-3b62-4b81-8c03-0c579a4a686a-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843146 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843155 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4714ef51-2d24-4938-8c58-80c1485a368b-config\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843177 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843180 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843370 7776 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3edc7410-417a-4e55-9276-ac271fd52297-serving-cert\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843370 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843386 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843407 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9d54aa-5f71-4a82-8e71-401ed3083a13-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843422 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843497 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843516 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edc7410-417a-4e55-9276-ac271fd52297-config\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843496 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-config\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:54.843560 master-0 kubenswrapper[7776]: I0219 03:04:54.843519 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:54.844798 master-0 kubenswrapper[7776]: I0219 03:04:54.843857 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:54.844798 master-0 kubenswrapper[7776]: I0219 03:04:54.844366 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:54.844798 master-0 kubenswrapper[7776]: I0219 03:04:54.844425 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n9vm\" (UniqueName: \"kubernetes.io/projected/c50a2aec-7ed0-4114-8b25-19579fe931cb-kube-api-access-7n9vm\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:54.844798 master-0 kubenswrapper[7776]: I0219 03:04:54.844455 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-client\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:54.844798 master-0 kubenswrapper[7776]: I0219 03:04:54.844553 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-host-etc-kube\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:04:54.844798 master-0 kubenswrapper[7776]: I0219 03:04:54.844584 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbffz\" (UniqueName: \"kubernetes.io/projected/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-kube-api-access-gbffz\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:04:54.844798 master-0 kubenswrapper[7776]: I0219 03:04:54.844613 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.844798 master-0 kubenswrapper[7776]: I0219 03:04:54.844789 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 
03:04:54.845076 master-0 kubenswrapper[7776]: I0219 03:04:54.844818 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-client\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:54.845076 master-0 kubenswrapper[7776]: I0219 03:04:54.844957 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:54.845076 master-0 kubenswrapper[7776]: I0219 03:04:54.844998 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:54.845901 master-0 kubenswrapper[7776]: I0219 03:04:54.845866 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-serving-cert\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:54.846075 master-0 kubenswrapper[7776]: I0219 03:04:54.846041 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-serving-cert\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:54.846124 master-0 kubenswrapper[7776]: I0219 03:04:54.846101 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq27v\" (UniqueName: \"kubernetes.io/projected/98ac5423-b231-44e5-9545-424d635ed6ee-kube-api-access-bq27v\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:54.846182 master-0 kubenswrapper[7776]: I0219 03:04:54.846162 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9ed390-3b62-4b81-8c03-0c579a4a686a-config\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:54.846429 master-0 kubenswrapper[7776]: I0219 03:04:54.846404 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9ed390-3b62-4b81-8c03-0c579a4a686a-config\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:54.847238 master-0 kubenswrapper[7776]: I0219 03:04:54.847202 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 03:04:54.847443 master-0 kubenswrapper[7776]: I0219 03:04:54.847404 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 03:04:54.848442 master-0 kubenswrapper[7776]: I0219 03:04:54.848403 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:54.853215 master-0 kubenswrapper[7776]: I0219 03:04:54.853168 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4714ef51-2d24-4938-8c58-80c1485a368b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:54.853781 master-0 kubenswrapper[7776]: I0219 03:04:54.853687 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:54.866912 master-0 kubenswrapper[7776]: I0219 03:04:54.866856 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 03:04:54.874020 master-0 kubenswrapper[7776]: I0219 03:04:54.873971 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-config\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:54.886182 master-0 kubenswrapper[7776]: I0219 03:04:54.886131 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 03:04:54.899671 master-0 kubenswrapper[7776]: I0219 03:04:54.899635 7776 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 19 03:04:54.906555 master-0 kubenswrapper[7776]: I0219 03:04:54.906520 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 03:04:54.926534 master-0 kubenswrapper[7776]: I0219 03:04:54.926424 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 19 03:04:54.946499 master-0 kubenswrapper[7776]: I0219 03:04:54.946437 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 19 03:04:54.947000 master-0 kubenswrapper[7776]: I0219 03:04:54.946962 7776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-config\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.947052 master-0 kubenswrapper[7776]: I0219 03:04:54.946997 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cni-binary-copy\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.947052 master-0 kubenswrapper[7776]: I0219 03:04:54.947036 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-env-overrides\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:54.947135 master-0 kubenswrapper[7776]: I0219 03:04:54.947073 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:54.947135 master-0 kubenswrapper[7776]: I0219 03:04:54.947101 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-systemd-units\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.947135 master-0 kubenswrapper[7776]: I0219 03:04:54.947125 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64lwt\" (UniqueName: \"kubernetes.io/projected/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-kube-api-access-64lwt\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947229 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv24m\" (UniqueName: \"kubernetes.io/projected/a52be87c-e707-4269-96da-537708d52b64-kube-api-access-kv24m\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: E0219 03:04:54.947294 7776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947397 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-etc-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: E0219 03:04:54.947431 7776 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics podName:58c6f5a2-c0a8-4636-a057-cedbe0151579 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.447408251 +0000 UTC m=+1.787092779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-xxdh5" (UID: "58c6f5a2-c0a8-4636-a057-cedbe0151579") : secret "marketplace-operator-metrics" not found Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947459 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947571 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-env-overrides\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947573 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947702 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-log-socket\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947730 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947778 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crz8x\" (UniqueName: \"kubernetes.io/projected/15a571c6-7c47-4b57-bc5b-e46544a114c8-kube-api-access-crz8x\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947790 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cni-binary-copy\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: 
I0219 03:04:54.947804 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5wsp\" (UniqueName: \"kubernetes.io/projected/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-kube-api-access-r5wsp\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947893 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947909 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-config\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947930 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-system-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947950 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-os-release\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.947966 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-socket-dir-parent\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.948008 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl7k7\" (UniqueName: \"kubernetes.io/projected/947faa21-7f67-4c7e-abb0-443432f38961-kube-api-access-jl7k7\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.948028 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-netd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.948042 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-multus\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.948079 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-daemon-config\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.948112 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-kubelet\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.948139 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.948143 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:54.948346 master-0 kubenswrapper[7776]: I0219 03:04:54.948159 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-ovnkube-identity-cm\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.948446 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.948489 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.948590 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-daemon-config\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.949492 master-0 
kubenswrapper[7776]: E0219 03:04:54.948601 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: E0219 03:04:54.948657 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert podName:c50a2aec-7ed0-4114-8b25-19579fe931cb nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.448638914 +0000 UTC m=+1.788323442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert") pod "catalog-operator-596f79dd6f-sbzsk" (UID: "c50a2aec-7ed0-4114-8b25-19579fe931cb") : secret "catalog-operator-serving-cert" not found Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.948827 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.948834 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-ovnkube-identity-cm\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.948866 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.948935 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-systemd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.948961 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-bin\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.949006 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cm45\" (UniqueName: \"kubernetes.io/projected/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-kube-api-access-8cm45\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.949042 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-netns\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.949065 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: E0219 03:04:54.949125 7776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: E0219 03:04:54.949161 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.449149878 +0000 UTC m=+1.788834396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.949240 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.949300 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cnibin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.949335 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovn-node-metrics-cert\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.949473 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:54.949492 master-0 kubenswrapper[7776]: I0219 03:04:54.949494 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: I0219 03:04:54.949521 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: I0219 03:04:54.949523 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovn-node-metrics-cert\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: E0219 03:04:54.949619 7776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: E0219 03:04:54.949640 7776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: E0219 03:04:54.949675 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls podName:80c48134-cb22-4cf9-b076-ce39af2f4113 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.449659582 +0000 UTC m=+1.789344090 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-2vmxq" (UID: "80c48134-cb22-4cf9-b076-ce39af2f4113") : secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: E0219 03:04:54.949677 7776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: E0219 03:04:54.949688 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls podName:67f4e002-26fb-41e3-abdb-f4928b6c561f nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.449683272 +0000 UTC m=+1.789367790 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls") pod "dns-operator-8c7d49845-jlnvw" (UID: "67f4e002-26fb-41e3-abdb-f4928b6c561f") : secret "metrics-tls" not found Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: I0219 03:04:54.949712 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: E0219 03:04:54.949728 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.449709953 +0000 UTC m=+1.789394601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "node-tuning-operator-tls" not found Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: I0219 03:04:54.949742 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-k8s-cni-cncf-io\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: I0219 03:04:54.949858 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-env-overrides\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: I0219 03:04:54.949925 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-var-lib-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: I0219 03:04:54.949964 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46zzd\" (UniqueName: \"kubernetes.io/projected/6ae2cbe0-aa0a-4f26-994b-660fb962d995-kube-api-access-46zzd\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: I0219 03:04:54.950016 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-env-overrides\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: I0219 03:04:54.950046 7776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-netns\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: I0219 03:04:54.950085 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: E0219 03:04:54.950134 7776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:54.950140 master-0 kubenswrapper[7776]: E0219 03:04:54.950160 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls podName:9ff96ce8-6427-4a42-afa6-8b8bc778f094 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.450152465 +0000 UTC m=+1.789836983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls") pod "ingress-operator-6569778c84-qcd49" (UID: "9ff96ce8-6427-4a42-afa6-8b8bc778f094") : secret "metrics-tls" not found Feb 19 03:04:54.950637 master-0 kubenswrapper[7776]: I0219 03:04:54.950174 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-bin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.950637 master-0 kubenswrapper[7776]: I0219 03:04:54.950232 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-os-release\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:54.950637 master-0 kubenswrapper[7776]: I0219 03:04:54.950285 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-binary-copy\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:54.950637 master-0 kubenswrapper[7776]: I0219 03:04:54.950316 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.950637 master-0 kubenswrapper[7776]: I0219 03:04:54.950360 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tqm5\" (UniqueName: \"kubernetes.io/projected/decd8c56-e0f0-4119-917f-56652c8f8372-kube-api-access-8tqm5\") pod 
\"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:54.950637 master-0 kubenswrapper[7776]: I0219 03:04:54.950389 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-node-log\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.950637 master-0 kubenswrapper[7776]: I0219 03:04:54.950424 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/decd8c56-e0f0-4119-917f-56652c8f8372-host-slash\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:54.950637 master-0 kubenswrapper[7776]: I0219 03:04:54.950448 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:54.950637 master-0 kubenswrapper[7776]: I0219 03:04:54.950615 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-binary-copy\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:54.950637 master-0 kubenswrapper[7776]: I0219 03:04:54.950619 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:54.950877 master-0 kubenswrapper[7776]: I0219 03:04:54.950659 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-conf-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.950877 master-0 kubenswrapper[7776]: I0219 03:04:54.950687 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cnibin\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:54.950877 master-0 kubenswrapper[7776]: I0219 03:04:54.950714 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:54.950877 master-0 kubenswrapper[7776]: I0219 03:04:54.950722 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.950877 master-0 kubenswrapper[7776]: E0219 03:04:54.950782 7776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 19 03:04:54.950877 master-0 kubenswrapper[7776]: I0219 03:04:54.950785 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-system-cni-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:54.950877 master-0 kubenswrapper[7776]: E0219 03:04:54.950802 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 19 03:04:54.950877 master-0 kubenswrapper[7776]: E0219 03:04:54.950837 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert podName:98ac5423-b231-44e5-9545-424d635ed6ee nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.450827673 +0000 UTC m=+1.790512191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tbg8" (UID: "98ac5423-b231-44e5-9545-424d635ed6ee") : secret "package-server-manager-serving-cert" not found Feb 19 03:04:54.950877 master-0 kubenswrapper[7776]: E0219 03:04:54.950870 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls podName:a59746bb-7d76-4fd7-8323-5b92be63afb9 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.450863184 +0000 UTC m=+1.790547702 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-cfdqh" (UID: "a59746bb-7d76-4fd7-8323-5b92be63afb9") : secret "image-registry-operator-tls" not found Feb 19 03:04:54.951161 master-0 kubenswrapper[7776]: I0219 03:04:54.950934 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-kubelet\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.951161 master-0 kubenswrapper[7776]: I0219 03:04:54.951013 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-ovn\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.951161 master-0 kubenswrapper[7776]: I0219 03:04:54.951037 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-hostroot\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.951161 master-0 kubenswrapper[7776]: I0219 03:04:54.951056 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-multus-certs\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.951161 master-0 kubenswrapper[7776]: I0219 03:04:54.951125 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:54.951309 master-0 kubenswrapper[7776]: I0219 03:04:54.951215 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.951309 master-0 kubenswrapper[7776]: I0219 03:04:54.951244 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-etc-kubernetes\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:54.951403 master-0 kubenswrapper[7776]: I0219 03:04:54.951319 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bs5qd\" 
(UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:54.951435 master-0 kubenswrapper[7776]: I0219 03:04:54.951417 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:54.951467 master-0 kubenswrapper[7776]: I0219 03:04:54.951447 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/decd8c56-e0f0-4119-917f-56652c8f8372-iptables-alerter-script\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:54.951503 master-0 kubenswrapper[7776]: I0219 03:04:54.951473 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:54.951503 master-0 kubenswrapper[7776]: I0219 03:04:54.951497 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-host-etc-kube\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:04:54.951572 master-0 kubenswrapper[7776]: I0219 03:04:54.951519 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:54.951572 master-0 kubenswrapper[7776]: I0219 03:04:54.951560 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:54.951635 master-0 kubenswrapper[7776]: I0219 03:04:54.951587 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-slash\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.951635 master-0 kubenswrapper[7776]: I0219 03:04:54.951617 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:54.951746 master-0 kubenswrapper[7776]: I0219 03:04:54.951643 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-whereabouts-configmap\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:54.951775 master-0 kubenswrapper[7776]: I0219 03:04:54.951752 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-script-lib\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.952430 master-0 kubenswrapper[7776]: I0219 03:04:54.952401 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-script-lib\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:54.954613 master-0 kubenswrapper[7776]: I0219 03:04:54.954575 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/decd8c56-e0f0-4119-917f-56652c8f8372-iptables-alerter-script\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:54.954698 master-0 kubenswrapper[7776]: E0219 03:04:54.954616 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 19 03:04:54.954698 master-0 kubenswrapper[7776]: I0219 03:04:54.954652 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-whereabouts-configmap\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:54.954698 master-0 kubenswrapper[7776]: E0219 03:04:54.954664 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert podName:b283bd8e-3339-4701-ae3c-f009e498b7d4 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.454651125 +0000 UTC m=+1.794335643 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert") pod "olm-operator-5499d7f7bb-kk77t" (UID: "b283bd8e-3339-4701-ae3c-f009e498b7d4") : secret "olm-operator-serving-cert" not found Feb 19 03:04:54.954841 master-0 kubenswrapper[7776]: I0219 03:04:54.954716 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-host-etc-kube\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:04:54.954841 master-0 kubenswrapper[7776]: E0219 03:04:54.954793 7776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:54.954930 master-0 kubenswrapper[7776]: E0219 03:04:54.954841 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.45482868 +0000 UTC m=+1.794513208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:54.955046 master-0 kubenswrapper[7776]: I0219 03:04:54.955011 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:54.955089 master-0 kubenswrapper[7776]: I0219 03:04:54.955006 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:55.002229 master-0 kubenswrapper[7776]: I0219 03:04:55.002173 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrksf\" (UniqueName: \"kubernetes.io/projected/05c9cb4a-5249-4116-a2e5-caa7859e2075-kube-api-access-qrksf\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:04:55.016758 master-0 kubenswrapper[7776]: I0219 03:04:55.016731 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzpth\" (UniqueName: \"kubernetes.io/projected/3edc7410-417a-4e55-9276-ac271fd52297-kube-api-access-vzpth\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:04:55.016972 master-0 kubenswrapper[7776]: I0219 03:04:55.016911 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:55.021782 master-0 kubenswrapper[7776]: I0219 03:04:55.021743 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:55.037436 master-0 kubenswrapper[7776]: I0219 03:04:55.037337 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txq5k\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-kube-api-access-txq5k\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:55.053126 master-0 kubenswrapper[7776]: I0219 03:04:55.053073 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-kubelet\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.053273 master-0 kubenswrapper[7776]: I0219 03:04:55.053128 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-hostroot\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.053273 master-0 kubenswrapper[7776]: I0219 03:04:55.053157 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-multus-certs\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.053273 master-0 kubenswrapper[7776]: I0219 03:04:55.053184 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:55.053434 master-0 kubenswrapper[7776]: I0219 03:04:55.053282 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-hostroot\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.053434 master-0 kubenswrapper[7776]: I0219 03:04:55.053302 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-ovn\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.053434 master-0 kubenswrapper[7776]: I0219 03:04:55.053347 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:55.053434 master-0 kubenswrapper[7776]: I0219 03:04:55.053373 
7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-ovn\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.053434 master-0 kubenswrapper[7776]: I0219 03:04:55.053412 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-kubelet\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: I0219 03:04:55.053449 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: I0219 03:04:55.053463 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-multus-certs\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: I0219 03:04:55.053479 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-etc-kubernetes\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: I0219 03:04:55.053508 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: I0219 03:04:55.053527 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-etc-kubernetes\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: I0219 03:04:55.053509 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: I0219 03:04:55.053566 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-slash\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: I0219 
03:04:55.053592 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-slash\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: E0219 03:04:55.053604 7776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: I0219 03:04:55.053624 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: E0219 03:04:55.053648 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.55363248 +0000 UTC m=+1.893316998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : secret "metrics-daemon-secret" not found Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: I0219 03:04:55.053704 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-systemd-units\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.053711 master-0 kubenswrapper[7776]: E0219 03:04:55.053720 7776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.053746 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-etc-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: E0219 03:04:55.053768 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs podName:947faa21-7f67-4c7e-abb0-443432f38961 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:55.553753113 +0000 UTC m=+1.893437631 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-q8pfv" (UID: "947faa21-7f67-4c7e-abb0-443432f38961") : secret "multus-admission-controller-secret" not found Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.053775 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-etc-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.053787 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-log-socket\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.053799 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-systemd-units\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.053811 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.053851 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-os-release\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.053879 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-socket-dir-parent\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.053884 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-log-socket\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.053959 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-os-release\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.053987 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-system-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054174 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-multus\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054171 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-socket-dir-parent\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054200 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054211 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-netd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054222 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-system-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054246 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-netd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054297 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-kubelet\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054360 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-multus\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054377 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-systemd\") pod \"ovnkube-node-pw7dx\" 
(UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054405 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-systemd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054420 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-bin\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054448 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-kubelet\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054477 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-netns\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054534 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-netns\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054576 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-bin\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054631 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054664 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cnibin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054776 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cnibin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" 
Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054812 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054845 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054884 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054891 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-k8s-cni-cncf-io\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054948 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-var-lib-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054977 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-k8s-cni-cncf-io\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.054988 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-netns\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055020 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-var-lib-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055025 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-os-release\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " 
pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055051 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-netns\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055087 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-bin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055199 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-node-log\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055216 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-bin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055233 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-os-release\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055247 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/decd8c56-e0f0-4119-917f-56652c8f8372-host-slash\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055295 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-node-log\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055337 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055454 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/decd8c56-e0f0-4119-917f-56652c8f8372-host-slash\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " 
pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:55.055465 master-0 kubenswrapper[7776]: I0219 03:04:55.055536 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-conf-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.058134 master-0 kubenswrapper[7776]: I0219 03:04:55.055557 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cnibin\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:55.058134 master-0 kubenswrapper[7776]: I0219 03:04:55.055671 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-conf-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.058134 master-0 kubenswrapper[7776]: I0219 03:04:55.055744 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cnibin\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:55.058134 master-0 kubenswrapper[7776]: I0219 03:04:55.056196 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-system-cni-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:55.058134 master-0 kubenswrapper[7776]: I0219 03:04:55.056296 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-system-cni-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:55.058134 master-0 kubenswrapper[7776]: I0219 03:04:55.057625 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76css\" (UniqueName: \"kubernetes.io/projected/b283bd8e-3339-4701-ae3c-f009e498b7d4-kube-api-access-76css\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:55.077923 master-0 kubenswrapper[7776]: I0219 03:04:55.077853 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn9d8\" (UniqueName: \"kubernetes.io/projected/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-kube-api-access-rn9d8\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:04:55.097760 master-0 kubenswrapper[7776]: I0219 03:04:55.097705 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqt9k\" (UniqueName: 
\"kubernetes.io/projected/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-kube-api-access-nqt9k\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:04:55.102612 master-0 kubenswrapper[7776]: I0219 03:04:55.102161 7776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 03:04:55.118938 master-0 kubenswrapper[7776]: I0219 03:04:55.118884 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:55.139337 master-0 kubenswrapper[7776]: I0219 03:04:55.139249 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxnk\" (UniqueName: \"kubernetes.io/projected/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-kube-api-access-vdxnk\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:55.158362 master-0 kubenswrapper[7776]: I0219 03:04:55.158294 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj4rq\" (UniqueName: \"kubernetes.io/projected/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-kube-api-access-mj4rq\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:04:55.183807 master-0 kubenswrapper[7776]: I0219 03:04:55.183728 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqsbq\" (UniqueName: \"kubernetes.io/projected/67f4e002-26fb-41e3-abdb-f4928b6c561f-kube-api-access-wqsbq\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:55.205370 master-0 kubenswrapper[7776]: I0219 03:04:55.205309 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpdqx\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-kube-api-access-cpdqx\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:55.217648 master-0 kubenswrapper[7776]: I0219 03:04:55.217604 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:55.237475 master-0 kubenswrapper[7776]: I0219 03:04:55.237384 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6j8c\" (UniqueName: \"kubernetes.io/projected/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-kube-api-access-k6j8c\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:04:55.258852 master-0 
kubenswrapper[7776]: I0219 03:04:55.258803 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4714ef51-2d24-4938-8c58-80c1485a368b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:04:55.277527 master-0 kubenswrapper[7776]: I0219 03:04:55.277480 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dlvj\" (UniqueName: \"kubernetes.io/projected/80c48134-cb22-4cf9-b076-ce39af2f4113-kube-api-access-2dlvj\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:55.297753 master-0 kubenswrapper[7776]: I0219 03:04:55.297656 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjwbx\" (UniqueName: \"kubernetes.io/projected/2b9d54aa-5f71-4a82-8e71-401ed3083a13-kube-api-access-vjwbx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:04:55.318703 master-0 kubenswrapper[7776]: I0219 03:04:55.318653 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhdv\" (UniqueName: \"kubernetes.io/projected/58c6f5a2-c0a8-4636-a057-cedbe0151579-kube-api-access-grhdv\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:55.339314 master-0 kubenswrapper[7776]: I0219 03:04:55.339272 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-bound-sa-token\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:55.359474 master-0 kubenswrapper[7776]: I0219 03:04:55.359331 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhmpd\" (UniqueName: \"kubernetes.io/projected/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2-kube-api-access-dhmpd\") pod \"csi-snapshot-controller-operator-6fb4df594f-mtqxj\" (UID: \"d6fae256-6a2e-45e7-8f2f-d471f46ad3b2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" Feb 19 03:04:55.378928 master-0 kubenswrapper[7776]: I0219 03:04:55.378871 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c9ed390-3b62-4b81-8c03-0c579a4a686a-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:04:55.397578 master-0 kubenswrapper[7776]: I0219 03:04:55.397517 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p8qd\" (UniqueName: \"kubernetes.io/projected/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-kube-api-access-8p8qd\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:04:55.417212 master-0 kubenswrapper[7776]: I0219 03:04:55.417159 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbffz\" (UniqueName: \"kubernetes.io/projected/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-kube-api-access-gbffz\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:04:55.437627 master-0 kubenswrapper[7776]: I0219 03:04:55.437539 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n9vm\" (UniqueName: \"kubernetes.io/projected/c50a2aec-7ed0-4114-8b25-19579fe931cb-kube-api-access-7n9vm\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:55.457478 master-0 kubenswrapper[7776]: I0219 03:04:55.457402 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:04:55.462325 master-0 kubenswrapper[7776]: I0219 03:04:55.462277 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:55.462325 master-0 kubenswrapper[7776]: I0219 03:04:55.462320 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:55.462472 master-0 kubenswrapper[7776]: I0219 03:04:55.462340 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:55.462472 master-0 kubenswrapper[7776]: E0219 03:04:55.462443 7776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:55.462595 master-0 kubenswrapper[7776]: I0219 03:04:55.462545 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:55.462671 master-0 kubenswrapper[7776]: I0219 
03:04:55.462650 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:55.462714 master-0 kubenswrapper[7776]: E0219 03:04:55.462670 7776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 19 03:04:55.462714 master-0 kubenswrapper[7776]: E0219 03:04:55.462689 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.46266375 +0000 UTC m=+2.802348268 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:55.462804 master-0 kubenswrapper[7776]: E0219 03:04:55.462723 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.462713411 +0000 UTC m=+2.802397929 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "node-tuning-operator-tls" not found Feb 19 03:04:55.462804 master-0 kubenswrapper[7776]: I0219 03:04:55.462751 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:55.462804 master-0 kubenswrapper[7776]: I0219 03:04:55.462773 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:55.462804 master-0 kubenswrapper[7776]: E0219 03:04:55.462755 7776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:55.462804 master-0 kubenswrapper[7776]: E0219 03:04:55.462796 7776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:55.462804 master-0 kubenswrapper[7776]: E0219 03:04:55.462804 7776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 19 
03:04:55.463020 master-0 kubenswrapper[7776]: E0219 03:04:55.462812 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls podName:80c48134-cb22-4cf9-b076-ce39af2f4113 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.462805854 +0000 UTC m=+2.802490372 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-2vmxq" (UID: "80c48134-cb22-4cf9-b076-ce39af2f4113") : secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:55.463020 master-0 kubenswrapper[7776]: E0219 03:04:55.462833 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls podName:67f4e002-26fb-41e3-abdb-f4928b6c561f nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.462822884 +0000 UTC m=+2.802507512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls") pod "dns-operator-8c7d49845-jlnvw" (UID: "67f4e002-26fb-41e3-abdb-f4928b6c561f") : secret "metrics-tls" not found Feb 19 03:04:55.463020 master-0 kubenswrapper[7776]: E0219 03:04:55.462845 7776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 19 03:04:55.463020 master-0 kubenswrapper[7776]: E0219 03:04:55.462875 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls podName:9ff96ce8-6427-4a42-afa6-8b8bc778f094 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.462856245 +0000 UTC m=+2.802540843 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls") pod "ingress-operator-6569778c84-qcd49" (UID: "9ff96ce8-6427-4a42-afa6-8b8bc778f094") : secret "metrics-tls" not found Feb 19 03:04:55.463020 master-0 kubenswrapper[7776]: E0219 03:04:55.462898 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls podName:a59746bb-7d76-4fd7-8323-5b92be63afb9 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.462888356 +0000 UTC m=+2.802573024 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-cfdqh" (UID: "a59746bb-7d76-4fd7-8323-5b92be63afb9") : secret "image-registry-operator-tls" not found Feb 19 03:04:55.463020 master-0 kubenswrapper[7776]: E0219 03:04:55.462919 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 19 03:04:55.463020 master-0 kubenswrapper[7776]: I0219 03:04:55.462929 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:55.463020 master-0 kubenswrapper[7776]: E0219 03:04:55.462950 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert podName:98ac5423-b231-44e5-9545-424d635ed6ee nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.462939107 +0000 UTC m=+2.802623715 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tbg8" (UID: "98ac5423-b231-44e5-9545-424d635ed6ee") : secret "package-server-manager-serving-cert" not found Feb 19 03:04:55.463020 master-0 kubenswrapper[7776]: I0219 03:04:55.462985 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:55.463020 master-0 kubenswrapper[7776]: E0219 03:04:55.463003 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 19 03:04:55.464413 master-0 kubenswrapper[7776]: I0219 03:04:55.463034 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:55.464413 master-0 kubenswrapper[7776]: E0219 03:04:55.463064 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert podName:b283bd8e-3339-4701-ae3c-f009e498b7d4 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.4630538 +0000 UTC m=+2.802738318 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert") pod "olm-operator-5499d7f7bb-kk77t" (UID: "b283bd8e-3339-4701-ae3c-f009e498b7d4") : secret "olm-operator-serving-cert" not found Feb 19 03:04:55.464413 master-0 kubenswrapper[7776]: E0219 03:04:55.463108 7776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 19 03:04:55.464413 master-0 kubenswrapper[7776]: E0219 03:04:55.463153 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics podName:58c6f5a2-c0a8-4636-a057-cedbe0151579 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.463136572 +0000 UTC m=+2.802821110 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-xxdh5" (UID: "58c6f5a2-c0a8-4636-a057-cedbe0151579") : secret "marketplace-operator-metrics" not found Feb 19 03:04:55.464413 master-0 kubenswrapper[7776]: E0219 03:04:55.463200 7776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:55.464413 master-0 kubenswrapper[7776]: I0219 03:04:55.463222 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:55.464413 master-0 kubenswrapper[7776]: E0219 03:04:55.463232 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.463223345 +0000 UTC m=+2.802907953 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:55.464413 master-0 kubenswrapper[7776]: E0219 03:04:55.463295 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 19 03:04:55.464413 master-0 kubenswrapper[7776]: E0219 03:04:55.463333 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert podName:c50a2aec-7ed0-4114-8b25-19579fe931cb nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.463321887 +0000 UTC m=+2.803006405 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert") pod "catalog-operator-596f79dd6f-sbzsk" (UID: "c50a2aec-7ed0-4114-8b25-19579fe931cb") : secret "catalog-operator-serving-cert" not found Feb 19 03:04:55.480074 master-0 kubenswrapper[7776]: I0219 03:04:55.479907 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq27v\" (UniqueName: \"kubernetes.io/projected/98ac5423-b231-44e5-9545-424d635ed6ee-kube-api-access-bq27v\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:55.491750 master-0 kubenswrapper[7776]: E0219 03:04:55.491697 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:04:55.512365 master-0 kubenswrapper[7776]: E0219 03:04:55.512320 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:55.530120 master-0 kubenswrapper[7776]: E0219 03:04:55.530075 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:04:55.554990 master-0 kubenswrapper[7776]: E0219 03:04:55.553480 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:55.563922 master-0 kubenswrapper[7776]: I0219 03:04:55.563881 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:55.564215 master-0 kubenswrapper[7776]: E0219 03:04:55.564168 7776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 19 03:04:55.564291 master-0 kubenswrapper[7776]: E0219 03:04:55.564278 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs podName:947faa21-7f67-4c7e-abb0-443432f38961 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.564242864 +0000 UTC m=+2.903927452 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-q8pfv" (UID: "947faa21-7f67-4c7e-abb0-443432f38961") : secret "multus-admission-controller-secret" not found Feb 19 03:04:55.564341 master-0 kubenswrapper[7776]: I0219 03:04:55.564324 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:55.564539 master-0 kubenswrapper[7776]: E0219 03:04:55.564505 7776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 19 03:04:55.564595 master-0 kubenswrapper[7776]: E0219 03:04:55.564566 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:56.564550192 +0000 UTC m=+2.904234770 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : secret "metrics-daemon-secret" not found Feb 19 03:04:55.571845 master-0 kubenswrapper[7776]: W0219 03:04:55.571802 7776 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 19 03:04:55.571919 master-0 kubenswrapper[7776]: E0219 03:04:55.571859 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:04:55.595241 master-0 kubenswrapper[7776]: I0219 03:04:55.595201 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64lwt\" (UniqueName: \"kubernetes.io/projected/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-kube-api-access-64lwt\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:04:55.615993 master-0 kubenswrapper[7776]: I0219 03:04:55.615946 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv24m\" (UniqueName: \"kubernetes.io/projected/a52be87c-e707-4269-96da-537708d52b64-kube-api-access-kv24m\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:04:55.636457 master-0 kubenswrapper[7776]: I0219 03:04:55.636410 7776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl7k7\" (UniqueName: \"kubernetes.io/projected/947faa21-7f67-4c7e-abb0-443432f38961-kube-api-access-jl7k7\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:55.658844 master-0 kubenswrapper[7776]: I0219 03:04:55.658796 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crz8x\" (UniqueName: \"kubernetes.io/projected/15a571c6-7c47-4b57-bc5b-e46544a114c8-kube-api-access-crz8x\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:04:55.685555 master-0 kubenswrapper[7776]: I0219 03:04:55.685460 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5wsp\" (UniqueName: \"kubernetes.io/projected/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-kube-api-access-r5wsp\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:04:55.704274 master-0 kubenswrapper[7776]: I0219 03:04:55.704145 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cm45\" (UniqueName: \"kubernetes.io/projected/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-kube-api-access-8cm45\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:04:55.720698 master-0 kubenswrapper[7776]: I0219 03:04:55.720652 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46zzd\" (UniqueName: \"kubernetes.io/projected/6ae2cbe0-aa0a-4f26-994b-660fb962d995-kube-api-access-46zzd\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:55.738444 master-0 kubenswrapper[7776]: I0219 03:04:55.738385 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tqm5\" (UniqueName: \"kubernetes.io/projected/decd8c56-e0f0-4119-917f-56652c8f8372-kube-api-access-8tqm5\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:04:55.779798 master-0 kubenswrapper[7776]: I0219 03:04:55.779736 7776 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 03:04:55.785981 master-0 kubenswrapper[7776]: I0219 03:04:55.785932 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:55.869988 master-0 kubenswrapper[7776]: E0219 03:04:55.869918 7776 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19" Feb 19 03:04:55.870245 master-0 kubenswrapper[7776]: E0219 03:04:55.870192 7776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-apiserver-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19,Command:[cluster-openshift-apiserver-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:KUBE_APISERVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8p8qd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-apiserver-operator-8586dccc9b-mcz8l_openshift-apiserver-operator(fbc2f7d0-4bae-4d4a-b041-a624ec2b9333): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 03:04:55.871548 master-0 kubenswrapper[7776]: E0219 03:04:55.871490 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" podUID="fbc2f7d0-4bae-4d4a-b041-a624ec2b9333" Feb 19 03:04:56.009610 master-0 kubenswrapper[7776]: I0219 03:04:56.009557 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:04:56.474682 master-0 kubenswrapper[7776]: I0219 03:04:56.474620 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:56.474682 master-0 kubenswrapper[7776]: I0219 03:04:56.474682 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:56.474918 master-0 kubenswrapper[7776]: I0219 03:04:56.474733 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:56.474918 master-0 kubenswrapper[7776]: I0219 03:04:56.474761 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:56.474918 master-0 kubenswrapper[7776]: I0219 03:04:56.474786 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:56.474918 master-0 kubenswrapper[7776]: I0219 03:04:56.474809 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:56.474918 master-0 kubenswrapper[7776]: I0219 03:04:56.474860 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:56.474918 master-0 kubenswrapper[7776]: I0219 03:04:56.474884 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:56.474918 master-0 kubenswrapper[7776]: I0219 03:04:56.474911 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:56.475116 master-0 kubenswrapper[7776]: I0219 03:04:56.474945 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:56.475116 master-0 kubenswrapper[7776]: I0219 03:04:56.474975 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:56.475271 master-0 kubenswrapper[7776]: E0219 03:04:56.475222 7776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:56.475340 master-0 kubenswrapper[7776]: E0219 03:04:56.475315 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.475296176 +0000 UTC m=+4.814980694 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:56.475461 master-0 kubenswrapper[7776]: E0219 03:04:56.475408 7776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:56.475491 master-0 kubenswrapper[7776]: E0219 03:04:56.475468 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 19 03:04:56.475521 master-0 kubenswrapper[7776]: E0219 03:04:56.475437 7776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 19 03:04:56.475521 master-0 kubenswrapper[7776]: E0219 03:04:56.475507 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert podName:c50a2aec-7ed0-4114-8b25-19579fe931cb nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.475497742 +0000 UTC m=+4.815182270 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert") pod "catalog-operator-596f79dd6f-sbzsk" (UID: "c50a2aec-7ed0-4114-8b25-19579fe931cb") : secret "catalog-operator-serving-cert" not found Feb 19 03:04:56.475595 master-0 kubenswrapper[7776]: E0219 03:04:56.475538 7776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:56.475595 master-0 kubenswrapper[7776]: E0219 03:04:56.475547 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics podName:58c6f5a2-c0a8-4636-a057-cedbe0151579 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.475526013 +0000 UTC m=+4.815210601 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-xxdh5" (UID: "58c6f5a2-c0a8-4636-a057-cedbe0151579") : secret "marketplace-operator-metrics" not found Feb 19 03:04:56.475595 master-0 kubenswrapper[7776]: E0219 03:04:56.475567 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.475558553 +0000 UTC m=+4.815243181 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:56.475595 master-0 kubenswrapper[7776]: E0219 03:04:56.475580 7776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 19 03:04:56.475595 master-0 kubenswrapper[7776]: E0219 03:04:56.475582 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls podName:80c48134-cb22-4cf9-b076-ce39af2f4113 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.475575314 +0000 UTC m=+4.815259942 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-2vmxq" (UID: "80c48134-cb22-4cf9-b076-ce39af2f4113") : secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:56.475750 master-0 kubenswrapper[7776]: E0219 03:04:56.475601 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls podName:a59746bb-7d76-4fd7-8323-5b92be63afb9 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.475594814 +0000 UTC m=+4.815279332 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-cfdqh" (UID: "a59746bb-7d76-4fd7-8323-5b92be63afb9") : secret "image-registry-operator-tls" not found Feb 19 03:04:56.475750 master-0 kubenswrapper[7776]: E0219 03:04:56.475634 7776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 19 03:04:56.475750 master-0 kubenswrapper[7776]: E0219 03:04:56.475653 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.475646826 +0000 UTC m=+4.815331344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "node-tuning-operator-tls" not found Feb 19 03:04:56.475750 master-0 kubenswrapper[7776]: E0219 03:04:56.475686 7776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:56.475750 master-0 kubenswrapper[7776]: E0219 03:04:56.475702 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls podName:67f4e002-26fb-41e3-abdb-f4928b6c561f nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.475697177 +0000 UTC m=+4.815381695 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls") pod "dns-operator-8c7d49845-jlnvw" (UID: "67f4e002-26fb-41e3-abdb-f4928b6c561f") : secret "metrics-tls" not found Feb 19 03:04:56.475750 master-0 kubenswrapper[7776]: E0219 03:04:56.475730 7776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:56.475750 master-0 kubenswrapper[7776]: E0219 03:04:56.475745 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls podName:9ff96ce8-6427-4a42-afa6-8b8bc778f094 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.475740528 +0000 UTC m=+4.815425046 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls") pod "ingress-operator-6569778c84-qcd49" (UID: "9ff96ce8-6427-4a42-afa6-8b8bc778f094") : secret "metrics-tls" not found Feb 19 03:04:56.476022 master-0 kubenswrapper[7776]: E0219 03:04:56.475774 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 19 03:04:56.476022 master-0 kubenswrapper[7776]: E0219 03:04:56.475790 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert podName:b283bd8e-3339-4701-ae3c-f009e498b7d4 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.47578523 +0000 UTC m=+4.815469748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert") pod "olm-operator-5499d7f7bb-kk77t" (UID: "b283bd8e-3339-4701-ae3c-f009e498b7d4") : secret "olm-operator-serving-cert" not found Feb 19 03:04:56.476022 master-0 kubenswrapper[7776]: E0219 03:04:56.475844 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 19 03:04:56.476022 master-0 kubenswrapper[7776]: E0219 03:04:56.475883 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert podName:98ac5423-b231-44e5-9545-424d635ed6ee nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.475870982 +0000 UTC m=+4.815555570 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tbg8" (UID: "98ac5423-b231-44e5-9545-424d635ed6ee") : secret "package-server-manager-serving-cert" not found Feb 19 03:04:56.576012 master-0 kubenswrapper[7776]: I0219 03:04:56.575918 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:56.576231 master-0 kubenswrapper[7776]: E0219 03:04:56.576083 7776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 19 03:04:56.576231 master-0 kubenswrapper[7776]: I0219 03:04:56.576136 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:56.576231 master-0 kubenswrapper[7776]: E0219 03:04:56.576153 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.576135951 +0000 UTC m=+4.915820529 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : secret "metrics-daemon-secret" not found Feb 19 03:04:56.576389 master-0 kubenswrapper[7776]: E0219 03:04:56.576322 7776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 19 03:04:56.576583 master-0 kubenswrapper[7776]: E0219 03:04:56.576414 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs podName:947faa21-7f67-4c7e-abb0-443432f38961 nodeName:}" failed. No retries permitted until 2026-02-19 03:04:58.576385078 +0000 UTC m=+4.916069616 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-q8pfv" (UID: "947faa21-7f67-4c7e-abb0-443432f38961") : secret "multus-admission-controller-secret" not found Feb 19 03:04:56.811681 master-0 kubenswrapper[7776]: E0219 03:04:56.811551 7776 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" Feb 19 03:04:56.812120 master-0 kubenswrapper[7776]: E0219 03:04:56.811756 7776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tqm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-kvvll_openshift-network-operator(decd8c56-e0f0-4119-917f-56652c8f8372): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 03:04:56.812983 master-0 kubenswrapper[7776]: E0219 03:04:56.812943 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-kvvll" podUID="decd8c56-e0f0-4119-917f-56652c8f8372" Feb 19 03:04:57.324448 master-0 kubenswrapper[7776]: E0219 03:04:57.323630 7776 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc" Feb 
19 03:04:57.324448 master-0 kubenswrapper[7776]: E0219 03:04:57.323830 7776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-storage-version-migrator-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc,Command:[cluster-kube-storage-version-migrator-operator start],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-storage-version-migrator-operator-fc889cfd5-866f9_openshift-kube-storage-version-migrator-operator(2b9d54aa-5f71-4a82-8e71-401ed3083a13): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 03:04:57.325113 master-0 kubenswrapper[7776]: E0219 03:04:57.325059 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" podUID="2b9d54aa-5f71-4a82-8e71-401ed3083a13" Feb 19 03:04:57.848783 master-0 kubenswrapper[7776]: E0219 03:04:57.847686 7776 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" Feb 19 03:04:57.848783 master-0 kubenswrapper[7776]: E0219 03:04:57.848133 7776 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 03:04:57.848783 
master-0 kubenswrapper[7776]: container &Container{Name:authentication-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e,Command:[/bin/bash -ec],Args:[if [ -s /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then Feb 19 03:04:57.848783 master-0 kubenswrapper[7776]: echo "Copying system trust bundle" Feb 19 03:04:57.848783 master-0 kubenswrapper[7776]: cp -f /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem Feb 19 03:04:57.848783 master-0 kubenswrapper[7776]: fi Feb 19 03:04:57.848783 master-0 kubenswrapper[7776]: exec authentication-operator operator --config=/var/run/configmaps/config/operator-config.yaml --v=2 --terminate-on-files=/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt --terminate-on-files=/tmp/terminate Feb 19 03:04:57.848783 master-0 kubenswrapper[7776]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE_OAUTH_SERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346,ValueFrom:nil,},EnvVar{Name:IMAGE_OAUTH_APISERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_OAUTH_SERVER_IMAGE_VERSION,Value:4.18.33_openshift,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mj4rq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod authentication-operator-5bd7c86784-cjz9l_openshift-authentication-operator(b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Feb 19 03:04:57.848783 master-0 kubenswrapper[7776]: > logger="UnhandledError" Feb 19 03:04:57.851581 master-0 kubenswrapper[7776]: E0219 03:04:57.851401 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" podUID="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" Feb 19 03:04:57.905387 master-0 kubenswrapper[7776]: I0219 03:04:57.904931 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:57.911102 master-0 kubenswrapper[7776]: I0219 03:04:57.911055 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:58.180206 master-0 kubenswrapper[7776]: E0219 03:04:58.180132 7776 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2" Feb 19 03:04:58.180402 master-0 kubenswrapper[7776]: E0219 03:04:58.180332 7776 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:copy-catalogd-manifests,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2,Command:[/bin/sh],Args:[-c cp -a /openshift/manifests 
/operand-assets/catalogd],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:operand-assets,ReadOnly:false,MountPath:/operand-assets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqt9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000340000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-olm-operator-5bd7768f54-f8dfs_openshift-cluster-olm-operator(1f9e07d3-d157-4948-84a6-04b8aa7eef4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 03:04:58.181539 master-0 kubenswrapper[7776]: E0219 03:04:58.181495 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" podUID="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" Feb 19 03:04:58.515827 master-0 kubenswrapper[7776]: I0219 03:04:58.515682 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:04:58.515827 master-0 kubenswrapper[7776]: I0219 03:04:58.515749 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:04:58.515827 master-0 kubenswrapper[7776]: I0219 03:04:58.515773 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:04:58.515827 master-0 kubenswrapper[7776]: I0219 03:04:58.515800 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:04:58.515827 master-0 kubenswrapper[7776]: I0219 03:04:58.515827 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:04:58.516116 master-0 kubenswrapper[7776]: E0219 03:04:58.516024 7776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 19 03:04:58.516116 master-0 kubenswrapper[7776]: I0219 03:04:58.516091 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:04:58.516183 master-0 kubenswrapper[7776]: E0219 03:04:58.516145 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls podName:a59746bb-7d76-4fd7-8323-5b92be63afb9 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.516115089 +0000 UTC m=+8.855799647 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-cfdqh" (UID: "a59746bb-7d76-4fd7-8323-5b92be63afb9") : secret "image-registry-operator-tls" not found Feb 19 03:04:58.516220 master-0 kubenswrapper[7776]: I0219 03:04:58.516193 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:04:58.516248 master-0 kubenswrapper[7776]: E0219 03:04:58.516239 7776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 19 03:04:58.516354 master-0 kubenswrapper[7776]: E0219 03:04:58.516316 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics podName:58c6f5a2-c0a8-4636-a057-cedbe0151579 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.516293824 +0000 UTC m=+8.855978352 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-xxdh5" (UID: "58c6f5a2-c0a8-4636-a057-cedbe0151579") : secret "marketplace-operator-metrics" not found Feb 19 03:04:58.516354 master-0 kubenswrapper[7776]: I0219 03:04:58.516345 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:58.516435 master-0 kubenswrapper[7776]: I0219 03:04:58.516379 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:04:58.516435 master-0 kubenswrapper[7776]: E0219 03:04:58.516395 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 19 03:04:58.516435 master-0 kubenswrapper[7776]: E0219 03:04:58.516434 7776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:58.516507 master-0 kubenswrapper[7776]: E0219 03:04:58.516438 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert podName:c50a2aec-7ed0-4114-8b25-19579fe931cb nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.516424737 +0000 UTC m=+8.856109295 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert") pod "catalog-operator-596f79dd6f-sbzsk" (UID: "c50a2aec-7ed0-4114-8b25-19579fe931cb") : secret "catalog-operator-serving-cert" not found Feb 19 03:04:58.516507 master-0 kubenswrapper[7776]: E0219 03:04:58.516458 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls podName:80c48134-cb22-4cf9-b076-ce39af2f4113 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.516450738 +0000 UTC m=+8.856135246 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-2vmxq" (UID: "80c48134-cb22-4cf9-b076-ce39af2f4113") : secret "cluster-monitoring-operator-tls" not found Feb 19 03:04:58.516507 master-0 kubenswrapper[7776]: I0219 03:04:58.516398 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:04:58.516507 master-0 kubenswrapper[7776]: I0219 03:04:58.516484 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:04:58.516507 master-0 kubenswrapper[7776]: E0219 03:04:58.516488 7776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:58.516642 master-0 kubenswrapper[7776]: E0219 03:04:58.516517 7776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:58.516642 master-0 kubenswrapper[7776]: E0219 03:04:58.516541 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.51652693 +0000 UTC m=+8.856211448 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:04:58.516642 master-0 kubenswrapper[7776]: E0219 03:04:58.516564 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.516550291 +0000 UTC m=+8.856234839 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "performance-addon-operator-webhook-cert" not found Feb 19 03:04:58.516642 master-0 kubenswrapper[7776]: E0219 03:04:58.516598 7776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:58.516642 master-0 kubenswrapper[7776]: E0219 03:04:58.516621 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls podName:67f4e002-26fb-41e3-abdb-f4928b6c561f nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.516614112 +0000 UTC m=+8.856298630 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls") pod "dns-operator-8c7d49845-jlnvw" (UID: "67f4e002-26fb-41e3-abdb-f4928b6c561f") : secret "metrics-tls" not found Feb 19 03:04:58.516802 master-0 kubenswrapper[7776]: E0219 03:04:58.516664 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 19 03:04:58.516802 master-0 kubenswrapper[7776]: E0219 03:04:58.516698 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert podName:b283bd8e-3339-4701-ae3c-f009e498b7d4 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.516688184 +0000 UTC m=+8.856372702 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert") pod "olm-operator-5499d7f7bb-kk77t" (UID: "b283bd8e-3339-4701-ae3c-f009e498b7d4") : secret "olm-operator-serving-cert" not found Feb 19 03:04:58.516802 master-0 kubenswrapper[7776]: E0219 03:04:58.516775 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 19 03:04:58.516913 master-0 kubenswrapper[7776]: E0219 03:04:58.516816 7776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 19 03:04:58.516913 master-0 kubenswrapper[7776]: E0219 03:04:58.516822 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert podName:98ac5423-b231-44e5-9545-424d635ed6ee nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.516809097 +0000 UTC m=+8.856493645 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tbg8" (UID: "98ac5423-b231-44e5-9545-424d635ed6ee") : secret "package-server-manager-serving-cert" not found Feb 19 03:04:58.516913 master-0 kubenswrapper[7776]: E0219 03:04:58.516842 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:05:02.516835908 +0000 UTC m=+8.856520426 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "node-tuning-operator-tls" not found Feb 19 03:04:58.516913 master-0 kubenswrapper[7776]: E0219 03:04:58.516884 7776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:04:58.516913 master-0 kubenswrapper[7776]: E0219 03:04:58.516914 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls podName:9ff96ce8-6427-4a42-afa6-8b8bc778f094 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.51690369 +0000 UTC m=+8.856588328 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls") pod "ingress-operator-6569778c84-qcd49" (UID: "9ff96ce8-6427-4a42-afa6-8b8bc778f094") : secret "metrics-tls" not found Feb 19 03:04:58.617561 master-0 kubenswrapper[7776]: I0219 03:04:58.617479 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:04:58.617758 master-0 kubenswrapper[7776]: E0219 03:04:58.617681 7776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 19 03:04:58.617800 master-0 kubenswrapper[7776]: E0219 03:04:58.617778 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs podName:947faa21-7f67-4c7e-abb0-443432f38961 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.617759355 +0000 UTC m=+8.957443873 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-q8pfv" (UID: "947faa21-7f67-4c7e-abb0-443432f38961") : secret "multus-admission-controller-secret" not found Feb 19 03:04:58.618098 master-0 kubenswrapper[7776]: I0219 03:04:58.618048 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:04:58.619028 master-0 kubenswrapper[7776]: E0219 03:04:58.618236 7776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 19 03:04:58.619028 master-0 kubenswrapper[7776]: E0219 03:04:58.618322 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:05:02.61830452 +0000 UTC m=+8.957989128 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : secret "metrics-daemon-secret" not found Feb 19 03:04:58.668737 master-0 kubenswrapper[7776]: E0219 03:04:58.668679 7776 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e" Feb 19 03:04:58.668900 master-0 kubenswrapper[7776]: E0219 03:04:58.668867 7776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-snapshot-controller-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e,Command:[],Args:[start -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERAND_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9,ValueFrom:nil,},EnvVar{Name:WEBHOOK_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d953b34fe1ab03e9a57b3c91de4220683cf92e804edb5f5c230e5888e1c5a6d2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhmpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000150000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshot-controller-operator-6fb4df594f-mtqxj_openshift-cluster-storage-operator(d6fae256-6a2e-45e7-8f2f-d471f46ad3b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 03:04:58.670107 master-0 kubenswrapper[7776]: E0219 03:04:58.670071 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" podUID="d6fae256-6a2e-45e7-8f2f-d471f46ad3b2" Feb 19 03:04:58.912056 master-0 kubenswrapper[7776]: I0219 03:04:58.911283 7776 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-c6c25"] Feb 19 03:04:58.927065 master-0 kubenswrapper[7776]: I0219 03:04:58.927014 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" event={"ID":"05c9cb4a-5249-4116-a2e5-caa7859e2075","Type":"ContainerStarted","Data":"1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed"} Feb 19 03:04:58.928583 master-0 kubenswrapper[7776]: I0219 03:04:58.928553 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-c6c25" event={"ID":"4fd49d14-d513-4f68-8a87-3cef8a033c58","Type":"ContainerStarted","Data":"c34b9543f3e2068cde8c2b7bd9a04ad41c16f834956cffb18edf070cdda1c25d"} Feb 19 03:04:58.935237 master-0 kubenswrapper[7776]: I0219 03:04:58.935196 7776 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerID="c20b9e1e7e9550aa5bfbad939d9f66144cfef2538d416de2194bb171ea06814d" exitCode=0 Feb 19 03:04:58.935352 master-0 kubenswrapper[7776]: I0219 03:04:58.935330 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerDied","Data":"c20b9e1e7e9550aa5bfbad939d9f66144cfef2538d416de2194bb171ea06814d"} Feb 19 03:04:58.939175 master-0 kubenswrapper[7776]: I0219 03:04:58.938885 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerStarted","Data":"617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc"} Feb 19 03:04:59.015889 master-0 kubenswrapper[7776]: I0219 03:04:59.015839 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:59.016060 master-0 kubenswrapper[7776]: I0219 03:04:59.015998 7776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:04:59.023786 master-0 kubenswrapper[7776]: I0219 03:04:59.020338 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 19 03:04:59.584690 master-0 kubenswrapper[7776]: I0219 03:04:59.584628 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:04:59.584934 master-0 kubenswrapper[7776]: I0219 03:04:59.584758 7776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:04:59.943087 master-0 kubenswrapper[7776]: I0219 03:04:59.943044 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" event={"ID":"6c9ed390-3b62-4b81-8c03-0c579a4a686a","Type":"ContainerStarted","Data":"24791f1c363b144877c645c4f1432f887b6ed95f1fe6b262a78611e4e7415851"} Feb 19 03:04:59.946320 master-0 kubenswrapper[7776]: I0219 03:04:59.945151 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" event={"ID":"5301cbc9-b3f3-4b2d-a114-1ba0752462f1","Type":"ContainerStarted","Data":"5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2"} Feb 19 03:04:59.956485 master-0 
kubenswrapper[7776]: I0219 03:04:59.956433 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerStarted","Data":"86c664ab293aa817dc19559e0b69114daede98d8ba6acf0a72b18f40ca2b5774"} Feb 19 03:05:00.007766 master-0 kubenswrapper[7776]: I0219 03:05:00.007704 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-c6c25" event={"ID":"4fd49d14-d513-4f68-8a87-3cef8a033c58","Type":"ContainerStarted","Data":"71cf17a7746a7bcc627c093a18adb4fd5340437a589493e439c3265b044b6717"} Feb 19 03:05:00.007990 master-0 kubenswrapper[7776]: I0219 03:05:00.007965 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:05:00.920226 master-0 kubenswrapper[7776]: I0219 03:05:00.919897 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:05:00.956277 master-0 kubenswrapper[7776]: I0219 03:05:00.955740 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:05:01.016155 master-0 kubenswrapper[7776]: I0219 03:05:01.011188 7776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:05:01.016155 master-0 kubenswrapper[7776]: I0219 03:05:01.011216 7776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:05:01.402090 master-0 kubenswrapper[7776]: I0219 03:05:01.401732 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:05:01.402090 master-0 kubenswrapper[7776]: I0219 03:05:01.401908 7776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:05:01.407336 master-0 kubenswrapper[7776]: I0219 03:05:01.407296 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:05:01.445718 master-0 kubenswrapper[7776]: I0219 03:05:01.445677 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv"] Feb 19 03:05:01.446004 master-0 kubenswrapper[7776]: E0219 03:05:01.445831 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerName="assisted-installer-controller" Feb 19 03:05:01.446004 master-0 kubenswrapper[7776]: I0219 03:05:01.445849 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerName="assisted-installer-controller" Feb 19 03:05:01.446004 master-0 kubenswrapper[7776]: E0219 03:05:01.445869 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7240e7-9923-4485-a055-0e1364954af9" containerName="prober" Feb 19 03:05:01.446004 master-0 kubenswrapper[7776]: I0219 03:05:01.445876 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7240e7-9923-4485-a055-0e1364954af9" containerName="prober" Feb 19 03:05:01.446004 master-0 kubenswrapper[7776]: I0219 03:05:01.445937 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerName="assisted-installer-controller" Feb 19 03:05:01.446004 master-0 kubenswrapper[7776]: I0219 03:05:01.445948 7776 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="bd7240e7-9923-4485-a055-0e1364954af9" containerName="prober" Feb 19 03:05:01.446234 master-0 kubenswrapper[7776]: I0219 03:05:01.446185 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:01.448025 master-0 kubenswrapper[7776]: I0219 03:05:01.448006 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 03:05:01.448723 master-0 kubenswrapper[7776]: I0219 03:05:01.448706 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 03:05:01.449086 master-0 kubenswrapper[7776]: I0219 03:05:01.449069 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 03:05:01.449356 master-0 kubenswrapper[7776]: I0219 03:05:01.449329 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 03:05:01.450766 master-0 kubenswrapper[7776]: I0219 03:05:01.450730 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 03:05:01.460348 master-0 kubenswrapper[7776]: I0219 03:05:01.457836 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv"] Feb 19 03:05:01.562245 master-0 kubenswrapper[7776]: I0219 03:05:01.562120 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7444dc796b-xwpkc"] Feb 19 03:05:01.562639 master-0 kubenswrapper[7776]: I0219 03:05:01.562613 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.564781 master-0 kubenswrapper[7776]: I0219 03:05:01.564749 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 03:05:01.565030 master-0 kubenswrapper[7776]: I0219 03:05:01.564997 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 03:05:01.565087 master-0 kubenswrapper[7776]: I0219 03:05:01.565055 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 03:05:01.565241 master-0 kubenswrapper[7776]: I0219 03:05:01.565216 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 03:05:01.565385 master-0 kubenswrapper[7776]: I0219 03:05:01.565369 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 03:05:01.572159 master-0 kubenswrapper[7776]: I0219 03:05:01.572121 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:01.572159 master-0 kubenswrapper[7776]: I0219 03:05:01.572162 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-serving-cert\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.572911 master-0 kubenswrapper[7776]: I0219 03:05:01.572193 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-client-ca\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.572911 master-0 kubenswrapper[7776]: I0219 03:05:01.572322 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr86w\" (UniqueName: \"kubernetes.io/projected/550c53dc-6bb0-49af-adec-0fe197343434-kube-api-access-hr86w\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:01.572911 master-0 kubenswrapper[7776]: I0219 03:05:01.572512 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk788\" (UniqueName: \"kubernetes.io/projected/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-kube-api-access-lk788\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.572911 master-0 kubenswrapper[7776]: I0219 03:05:01.572567 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-config\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.572911 master-0 kubenswrapper[7776]: I0219 03:05:01.572603 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-proxy-ca-bundles\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.572911 master-0 kubenswrapper[7776]: I0219 03:05:01.572639 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:01.572911 master-0 kubenswrapper[7776]: I0219 03:05:01.572670 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-config\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:01.574049 master-0 kubenswrapper[7776]: I0219 03:05:01.574027 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 03:05:01.574885 master-0 kubenswrapper[7776]: I0219 03:05:01.574219 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7444dc796b-xwpkc"] Feb 19 03:05:01.673127 master-0 kubenswrapper[7776]: I0219 03:05:01.673001 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:01.673409 master-0 kubenswrapper[7776]: I0219 03:05:01.673392 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-config\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:01.673559 master-0 kubenswrapper[7776]: I0219 03:05:01.673544 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:01.673648 master-0 kubenswrapper[7776]: I0219 03:05:01.673633 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-serving-cert\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.673744 master-0 kubenswrapper[7776]: I0219 03:05:01.673730 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-client-ca\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.673840 master-0 kubenswrapper[7776]: I0219 03:05:01.673823 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr86w\" (UniqueName: \"kubernetes.io/projected/550c53dc-6bb0-49af-adec-0fe197343434-kube-api-access-hr86w\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:01.673980 master-0 kubenswrapper[7776]: I0219 03:05:01.673965 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-config\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.674061 master-0 kubenswrapper[7776]: I0219 03:05:01.674048 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk788\" (UniqueName: \"kubernetes.io/projected/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-kube-api-access-lk788\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.674163 master-0 kubenswrapper[7776]: I0219 03:05:01.674149 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-proxy-ca-bundles\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.674265 master-0 kubenswrapper[7776]: I0219 03:05:01.674197 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-config\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:01.674366 master-0 kubenswrapper[7776]: E0219 03:05:01.673202 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:01.674512 master-0 kubenswrapper[7776]: E0219 03:05:01.674498 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca podName:550c53dc-6bb0-49af-adec-0fe197343434 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.174477451 +0000 UTC m=+8.514161969 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca") pod "route-controller-manager-7867b8fb7b-r22wv" (UID: "550c53dc-6bb0-49af-adec-0fe197343434") : configmap "client-ca" not found Feb 19 03:05:01.674912 master-0 kubenswrapper[7776]: E0219 03:05:01.674239 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:01.675011 master-0 kubenswrapper[7776]: I0219 03:05:01.674966 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-config\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.675052 master-0 kubenswrapper[7776]: E0219 03:05:01.674993 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-client-ca podName:793b5a19-73fe-4f27-a2fd-b52d06ea4af8 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.174982855 +0000 UTC m=+8.514667373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-client-ca") pod "controller-manager-7444dc796b-xwpkc" (UID: "793b5a19-73fe-4f27-a2fd-b52d06ea4af8") : configmap "client-ca" not found Feb 19 03:05:01.675104 master-0 kubenswrapper[7776]: E0219 03:05:01.674314 7776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 19 03:05:01.675176 master-0 kubenswrapper[7776]: E0219 03:05:01.675167 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert podName:550c53dc-6bb0-49af-adec-0fe197343434 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.1751587 +0000 UTC m=+8.514843218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert") pod "route-controller-manager-7867b8fb7b-r22wv" (UID: "550c53dc-6bb0-49af-adec-0fe197343434") : secret "serving-cert" not found Feb 19 03:05:01.675237 master-0 kubenswrapper[7776]: E0219 03:05:01.674346 7776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 19 03:05:01.675362 master-0 kubenswrapper[7776]: E0219 03:05:01.675326 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-serving-cert podName:793b5a19-73fe-4f27-a2fd-b52d06ea4af8 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:02.175315514 +0000 UTC m=+8.515000122 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-serving-cert") pod "controller-manager-7444dc796b-xwpkc" (UID: "793b5a19-73fe-4f27-a2fd-b52d06ea4af8") : secret "serving-cert" not found Feb 19 03:05:01.676076 master-0 kubenswrapper[7776]: I0219 03:05:01.675833 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-proxy-ca-bundles\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.703381 master-0 kubenswrapper[7776]: I0219 03:05:01.703348 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk788\" (UniqueName: \"kubernetes.io/projected/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-kube-api-access-lk788\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:01.703638 master-0 kubenswrapper[7776]: I0219 03:05:01.703377 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr86w\" (UniqueName: \"kubernetes.io/projected/550c53dc-6bb0-49af-adec-0fe197343434-kube-api-access-hr86w\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:02.023888 master-0 kubenswrapper[7776]: I0219 03:05:02.023816 7776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:05:02.024831 master-0 kubenswrapper[7776]: I0219 03:05:02.024198 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerStarted","Data":"7cf42ee60fa4397f21a2d208681ed170f135d22ae88345ec4aa86dba915a0cc1"} Feb 19 03:05:02.024831 master-0 kubenswrapper[7776]: I0219 03:05:02.024598 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:05:02.030493 master-0 kubenswrapper[7776]: I0219 03:05:02.029357 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:05:02.178211 master-0 kubenswrapper[7776]: I0219 03:05:02.178051 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-client-ca\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:02.178474 master-0 kubenswrapper[7776]: E0219 03:05:02.178247 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:02.178474 master-0 kubenswrapper[7776]: E0219 03:05:02.178348 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-client-ca podName:793b5a19-73fe-4f27-a2fd-b52d06ea4af8 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:03.178327474 +0000 UTC m=+9.518011992 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-client-ca") pod "controller-manager-7444dc796b-xwpkc" (UID: "793b5a19-73fe-4f27-a2fd-b52d06ea4af8") : configmap "client-ca" not found Feb 19 03:05:02.178474 master-0 kubenswrapper[7776]: I0219 03:05:02.178379 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:02.178694 master-0 kubenswrapper[7776]: I0219 03:05:02.178487 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:02.178694 master-0 kubenswrapper[7776]: I0219 03:05:02.178540 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-serving-cert\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:02.178694 master-0 kubenswrapper[7776]: E0219 03:05:02.178564 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:02.178694 master-0 kubenswrapper[7776]: E0219 03:05:02.178623 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca podName:550c53dc-6bb0-49af-adec-0fe197343434 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:03.178607872 +0000 UTC m=+9.518292390 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca") pod "route-controller-manager-7867b8fb7b-r22wv" (UID: "550c53dc-6bb0-49af-adec-0fe197343434") : configmap "client-ca" not found Feb 19 03:05:02.178694 master-0 kubenswrapper[7776]: E0219 03:05:02.178658 7776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 19 03:05:02.178694 master-0 kubenswrapper[7776]: E0219 03:05:02.178683 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-serving-cert podName:793b5a19-73fe-4f27-a2fd-b52d06ea4af8 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:03.178675744 +0000 UTC m=+9.518360262 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-serving-cert") pod "controller-manager-7444dc796b-xwpkc" (UID: "793b5a19-73fe-4f27-a2fd-b52d06ea4af8") : secret "serving-cert" not found Feb 19 03:05:02.179092 master-0 kubenswrapper[7776]: E0219 03:05:02.178715 7776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 19 03:05:02.179092 master-0 kubenswrapper[7776]: E0219 03:05:02.178821 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert podName:550c53dc-6bb0-49af-adec-0fe197343434 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:03.178797127 +0000 UTC m=+9.518481715 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert") pod "route-controller-manager-7867b8fb7b-r22wv" (UID: "550c53dc-6bb0-49af-adec-0fe197343434") : secret "serving-cert" not found Feb 19 03:05:02.503640 master-0 kubenswrapper[7776]: I0219 03:05:02.503534 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-92gqk"] Feb 19 03:05:02.504397 master-0 kubenswrapper[7776]: I0219 03:05:02.504376 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:05:02.507349 master-0 kubenswrapper[7776]: I0219 03:05:02.507310 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 03:05:02.507580 master-0 kubenswrapper[7776]: I0219 03:05:02.507558 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 03:05:02.507665 master-0 kubenswrapper[7776]: I0219 03:05:02.507625 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 03:05:02.507779 master-0 kubenswrapper[7776]: I0219 03:05:02.507723 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 03:05:02.513806 master-0 kubenswrapper[7776]: I0219 03:05:02.513760 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-92gqk"] Feb 19 03:05:02.584075 master-0 kubenswrapper[7776]: I0219 03:05:02.583974 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:05:02.584369 master-0 kubenswrapper[7776]: E0219 03:05:02.584136 7776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 19 03:05:02.584369 master-0 kubenswrapper[7776]: I0219 03:05:02.584156 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:05:02.584369 master-0 kubenswrapper[7776]: I0219 03:05:02.584202 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:05:02.584369 master-0 kubenswrapper[7776]: E0219 03:05:02.584223 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls podName:a59746bb-7d76-4fd7-8323-5b92be63afb9 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.584198329 +0000 UTC m=+16.923882847 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-cfdqh" (UID: "a59746bb-7d76-4fd7-8323-5b92be63afb9") : secret "image-registry-operator-tls" not found Feb 19 03:05:02.584369 master-0 kubenswrapper[7776]: I0219 03:05:02.584294 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:05:02.584369 master-0 kubenswrapper[7776]: E0219 03:05:02.584346 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 19 03:05:02.584369 master-0 kubenswrapper[7776]: I0219 03:05:02.584352 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:05:02.584369 master-0 kubenswrapper[7776]: E0219 03:05:02.584384 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert podName:b283bd8e-3339-4701-ae3c-f009e498b7d4 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.584373514 +0000 UTC m=+16.924058032 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert") pod "olm-operator-5499d7f7bb-kk77t" (UID: "b283bd8e-3339-4701-ae3c-f009e498b7d4") : secret "olm-operator-serving-cert" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584381 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: I0219 03:05:02.584405 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584412 7776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584467 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584427 7776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584476 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert podName:98ac5423-b231-44e5-9545-424d635ed6ee nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.584450326 +0000 UTC m=+16.924134854 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tbg8" (UID: "98ac5423-b231-44e5-9545-424d635ed6ee") : secret "package-server-manager-serving-cert" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: I0219 03:05:02.584521 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: I0219 03:05:02.584552 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: I0219 03:05:02.584578 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584595 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert podName:bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.584578679 +0000 UTC m=+16.924263207 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert") pod "cluster-version-operator-5cfd9759cf-dsxxt" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae") : secret "cluster-version-operator-serving-cert" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584620 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert podName:c50a2aec-7ed0-4114-8b25-19579fe931cb nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.58461247 +0000 UTC m=+16.924296998 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert") pod "catalog-operator-596f79dd6f-sbzsk" (UID: "c50a2aec-7ed0-4114-8b25-19579fe931cb") : secret "catalog-operator-serving-cert" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584636 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics podName:58c6f5a2-c0a8-4636-a057-cedbe0151579 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.584627071 +0000 UTC m=+16.924311599 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-xxdh5" (UID: "58c6f5a2-c0a8-4636-a057-cedbe0151579") : secret "marketplace-operator-metrics" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584653 7776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584665 7776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: I0219 03:05:02.584667 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584690 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls podName:80c48134-cb22-4cf9-b076-ce39af2f4113 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.584678832 +0000 UTC m=+16.924363470 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-2vmxq" (UID: "80c48134-cb22-4cf9-b076-ce39af2f4113") : secret "cluster-monitoring-operator-tls" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584707 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls podName:67f4e002-26fb-41e3-abdb-f4928b6c561f nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.584698673 +0000 UTC m=+16.924383341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls") pod "dns-operator-8c7d49845-jlnvw" (UID: "67f4e002-26fb-41e3-abdb-f4928b6c561f") : secret "metrics-tls" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584757 7776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: I0219 03:05:02.584764 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584790 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:05:10.584782495 +0000 UTC m=+16.924467023 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "performance-addon-operator-webhook-cert" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584850 7776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584857 7776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584882 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls podName:9ff96ce8-6427-4a42-afa6-8b8bc778f094 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.584872327 +0000 UTC m=+16.924556945 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls") pod "ingress-operator-6569778c84-qcd49" (UID: "9ff96ce8-6427-4a42-afa6-8b8bc778f094") : secret "metrics-tls" not found Feb 19 03:05:02.584905 master-0 kubenswrapper[7776]: E0219 03:05:02.584898 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls podName:2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.584887918 +0000 UTC m=+16.924572436 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-dcpwb" (UID: "2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5") : secret "node-tuning-operator-tls" not found Feb 19 03:05:02.677446 master-0 kubenswrapper[7776]: I0219 03:05:02.677399 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7444dc796b-xwpkc"] Feb 19 03:05:02.677706 master-0 kubenswrapper[7776]: E0219 03:05:02.677675 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" podUID="793b5a19-73fe-4f27-a2fd-b52d06ea4af8" Feb 19 03:05:02.686204 master-0 kubenswrapper[7776]: I0219 03:05:02.686153 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:05:02.686317 master-0 kubenswrapper[7776]: E0219 03:05:02.686271 7776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 19 03:05:02.686384 master-0 kubenswrapper[7776]: E0219 03:05:02.686348 7776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 19 03:05:02.686434 master-0 kubenswrapper[7776]: E0219 03:05:02.686353 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.686331808 +0000 UTC m=+17.026016336 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : secret "metrics-daemon-secret" not found Feb 19 03:05:02.686434 master-0 kubenswrapper[7776]: E0219 03:05:02.686411 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs podName:947faa21-7f67-4c7e-abb0-443432f38961 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:10.68639846 +0000 UTC m=+17.026082988 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-q8pfv" (UID: "947faa21-7f67-4c7e-abb0-443432f38961") : secret "multus-admission-controller-secret" not found Feb 19 03:05:02.686434 master-0 kubenswrapper[7776]: I0219 03:05:02.686274 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:05:02.686556 master-0 kubenswrapper[7776]: I0219 03:05:02.686476 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-cabundle\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:05:02.686556 master-0 kubenswrapper[7776]: I0219 03:05:02.686534 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkfcl\" (UniqueName: \"kubernetes.io/projected/18b29e37-cda9-41a8-a910-3d8f74be3cf3-kube-api-access-bkfcl\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:05:02.686637 master-0 kubenswrapper[7776]: I0219 03:05:02.686607 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-key\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:05:02.788142 master-0 kubenswrapper[7776]: I0219 03:05:02.788008 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkfcl\" (UniqueName: \"kubernetes.io/projected/18b29e37-cda9-41a8-a910-3d8f74be3cf3-kube-api-access-bkfcl\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:05:02.788484 master-0 kubenswrapper[7776]: I0219 03:05:02.788452 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-key\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:05:02.788775 master-0 kubenswrapper[7776]: I0219 03:05:02.788760 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-cabundle\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:05:02.789756 master-0 kubenswrapper[7776]: I0219 03:05:02.789723 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-cabundle\") pod 
\"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:05:02.792381 master-0 kubenswrapper[7776]: I0219 03:05:02.792336 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-key\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:05:02.812245 master-0 kubenswrapper[7776]: I0219 03:05:02.812199 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkfcl\" (UniqueName: \"kubernetes.io/projected/18b29e37-cda9-41a8-a910-3d8f74be3cf3-kube-api-access-bkfcl\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:05:02.819858 master-0 kubenswrapper[7776]: I0219 03:05:02.819789 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:05:02.975555 master-0 kubenswrapper[7776]: I0219 03:05:02.974112 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-92gqk"] Feb 19 03:05:03.030485 master-0 kubenswrapper[7776]: I0219 03:05:03.030421 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" event={"ID":"18b29e37-cda9-41a8-a910-3d8f74be3cf3","Type":"ContainerStarted","Data":"61a11a661104fcf20e20292b60baae6791127267c4b1c5fced71911c81734966"} Feb 19 03:05:03.030485 master-0 kubenswrapper[7776]: I0219 03:05:03.030487 7776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:05:03.031302 master-0 kubenswrapper[7776]: I0219 03:05:03.030518 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:03.037299 master-0 kubenswrapper[7776]: I0219 03:05:03.037003 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:03.088757 master-0 kubenswrapper[7776]: I0219 03:05:03.088699 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:05:03.091529 master-0 kubenswrapper[7776]: I0219 03:05:03.091491 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-config\") pod \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " Feb 19 03:05:03.091583 master-0 kubenswrapper[7776]: I0219 03:05:03.091567 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-proxy-ca-bundles\") pod \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " Feb 19 03:05:03.091619 master-0 kubenswrapper[7776]: I0219 03:05:03.091598 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk788\" (UniqueName: \"kubernetes.io/projected/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-kube-api-access-lk788\") pod \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " Feb 19 03:05:03.092708 master-0 kubenswrapper[7776]: I0219 03:05:03.092676 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-config" (OuterVolumeSpecName: "config") pod "793b5a19-73fe-4f27-a2fd-b52d06ea4af8" (UID: "793b5a19-73fe-4f27-a2fd-b52d06ea4af8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:03.093157 master-0 kubenswrapper[7776]: I0219 03:05:03.093100 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "793b5a19-73fe-4f27-a2fd-b52d06ea4af8" (UID: "793b5a19-73fe-4f27-a2fd-b52d06ea4af8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:03.095047 master-0 kubenswrapper[7776]: I0219 03:05:03.094729 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:05:03.097312 master-0 kubenswrapper[7776]: I0219 03:05:03.097265 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-kube-api-access-lk788" (OuterVolumeSpecName: "kube-api-access-lk788") pod "793b5a19-73fe-4f27-a2fd-b52d06ea4af8" (UID: "793b5a19-73fe-4f27-a2fd-b52d06ea4af8"). InnerVolumeSpecName "kube-api-access-lk788". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:05:03.193200 master-0 kubenswrapper[7776]: I0219 03:05:03.193103 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:03.193448 master-0 kubenswrapper[7776]: E0219 03:05:03.193248 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:03.193448 master-0 kubenswrapper[7776]: I0219 03:05:03.193402 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:03.193543 master-0 kubenswrapper[7776]: I0219 03:05:03.193466 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-serving-cert\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:03.193543 master-0 kubenswrapper[7776]: I0219 03:05:03.193515 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-client-ca\") pod \"controller-manager-7444dc796b-xwpkc\" (UID: \"793b5a19-73fe-4f27-a2fd-b52d06ea4af8\") " pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:03.193843 master-0 kubenswrapper[7776]: E0219 03:05:03.193636 7776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 19 03:05:03.193843 master-0 kubenswrapper[7776]: E0219 03:05:03.193670 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert podName:550c53dc-6bb0-49af-adec-0fe197343434 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:05.193657774 +0000 UTC m=+11.533342292 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert") pod "route-controller-manager-7867b8fb7b-r22wv" (UID: "550c53dc-6bb0-49af-adec-0fe197343434") : secret "serving-cert" not found Feb 19 03:05:03.193843 master-0 kubenswrapper[7776]: I0219 03:05:03.193670 7776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:03.193843 master-0 kubenswrapper[7776]: E0219 03:05:03.193699 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:03.193843 master-0 kubenswrapper[7776]: E0219 03:05:03.193711 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca podName:550c53dc-6bb0-49af-adec-0fe197343434 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:05.193686105 +0000 UTC m=+11.533370643 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca") pod "route-controller-manager-7867b8fb7b-r22wv" (UID: "550c53dc-6bb0-49af-adec-0fe197343434") : configmap "client-ca" not found Feb 19 03:05:03.193843 master-0 kubenswrapper[7776]: I0219 03:05:03.193772 7776 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:03.193843 master-0 kubenswrapper[7776]: E0219 03:05:03.193768 7776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 19 03:05:03.193843 master-0 kubenswrapper[7776]: E0219 03:05:03.193802 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-client-ca podName:793b5a19-73fe-4f27-a2fd-b52d06ea4af8 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:05.193782458 +0000 UTC m=+11.533467056 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-client-ca") pod "controller-manager-7444dc796b-xwpkc" (UID: "793b5a19-73fe-4f27-a2fd-b52d06ea4af8") : configmap "client-ca" not found Feb 19 03:05:03.193843 master-0 kubenswrapper[7776]: I0219 03:05:03.193827 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk788\" (UniqueName: \"kubernetes.io/projected/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-kube-api-access-lk788\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:03.193843 master-0 kubenswrapper[7776]: E0219 03:05:03.193831 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-serving-cert podName:793b5a19-73fe-4f27-a2fd-b52d06ea4af8 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:05.193821649 +0000 UTC m=+11.533506277 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-serving-cert") pod "controller-manager-7444dc796b-xwpkc" (UID: "793b5a19-73fe-4f27-a2fd-b52d06ea4af8") : secret "serving-cert" not found Feb 19 03:05:03.661331 master-0 kubenswrapper[7776]: I0219 03:05:03.661215 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:05:03.661669 master-0 kubenswrapper[7776]: I0219 03:05:03.661415 7776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:05:03.661669 master-0 kubenswrapper[7776]: I0219 03:05:03.661440 7776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:05:03.691359 master-0 kubenswrapper[7776]: I0219 03:05:03.691299 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:05:04.036563 master-0 kubenswrapper[7776]: I0219 03:05:04.036400 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" event={"ID":"18b29e37-cda9-41a8-a910-3d8f74be3cf3","Type":"ContainerStarted","Data":"f411fdec6c82335e157399725224c73768983b7340cb840fe930f78c4eff8997"} Feb 19 03:05:04.037144 master-0 kubenswrapper[7776]: I0219 03:05:04.036593 7776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:05:04.037404 master-0 kubenswrapper[7776]: I0219 03:05:04.037378 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7444dc796b-xwpkc" Feb 19 03:05:04.061482 master-0 kubenswrapper[7776]: I0219 03:05:04.059920 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" podStartSLOduration=2.059894985 podStartE2EDuration="2.059894985s" podCreationTimestamp="2026-02-19 03:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:05:04.058750374 +0000 UTC m=+10.398434892" watchObservedRunningTime="2026-02-19 03:05:04.059894985 +0000 UTC m=+10.399579503" Feb 19 03:05:04.089294 master-0 kubenswrapper[7776]: I0219 03:05:04.085273 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66b45cc56c-ghkxs"] Feb 19 03:05:04.089294 master-0 kubenswrapper[7776]: I0219 03:05:04.086117 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.089294 master-0 kubenswrapper[7776]: I0219 03:05:04.087846 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 03:05:04.089294 master-0 kubenswrapper[7776]: I0219 03:05:04.088135 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 03:05:04.089294 master-0 kubenswrapper[7776]: I0219 03:05:04.088315 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 03:05:04.089294 master-0 kubenswrapper[7776]: I0219 03:05:04.088500 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 03:05:04.094236 master-0 kubenswrapper[7776]: I0219 03:05:04.089835 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7444dc796b-xwpkc"] Feb 19 03:05:04.094236 master-0 kubenswrapper[7776]: I0219 03:05:04.090278 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 03:05:04.105283 master-0 kubenswrapper[7776]: I0219 03:05:04.099188 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 03:05:04.105283 master-0 kubenswrapper[7776]: I0219 03:05:04.100045 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7444dc796b-xwpkc"] Feb 19 03:05:04.105283 master-0 kubenswrapper[7776]: I0219 03:05:04.100833 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66b45cc56c-ghkxs"] Feb 19 03:05:04.111038 master-0 kubenswrapper[7776]: I0219 03:05:04.108799 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-proxy-ca-bundles\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.111038 master-0 kubenswrapper[7776]: I0219 03:05:04.108941 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zjl\" (UniqueName: \"kubernetes.io/projected/4f812767-d78d-494a-a167-ca7de3af6a0b-kube-api-access-57zjl\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.111038 master-0 kubenswrapper[7776]: I0219 03:05:04.109021 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.111038 master-0 kubenswrapper[7776]: I0219 03:05:04.109146 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-config\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: 
\"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.111038 master-0 kubenswrapper[7776]: I0219 03:05:04.109186 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.210519 master-0 kubenswrapper[7776]: I0219 03:05:04.210450 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-config\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.210519 master-0 kubenswrapper[7776]: I0219 03:05:04.210522 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.211752 master-0 kubenswrapper[7776]: I0219 03:05:04.211696 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-config\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.211909 master-0 kubenswrapper[7776]: I0219 03:05:04.211860 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-proxy-ca-bundles\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.211999 master-0 kubenswrapper[7776]: I0219 03:05:04.211977 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57zjl\" (UniqueName: \"kubernetes.io/projected/4f812767-d78d-494a-a167-ca7de3af6a0b-kube-api-access-57zjl\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.212120 master-0 kubenswrapper[7776]: I0219 03:05:04.212088 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.212283 master-0 kubenswrapper[7776]: I0219 03:05:04.212237 7776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:04.212283 master-0 kubenswrapper[7776]: I0219 03:05:04.212282 7776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/793b5a19-73fe-4f27-a2fd-b52d06ea4af8-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:04.212373 master-0 kubenswrapper[7776]: E0219 03:05:04.212338 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:04.212414 master-0 kubenswrapper[7776]: E0219 03:05:04.212389 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca podName:4f812767-d78d-494a-a167-ca7de3af6a0b nodeName:}" failed. No retries permitted until 2026-02-19 03:05:04.712374264 +0000 UTC m=+11.052058802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca") pod "controller-manager-66b45cc56c-ghkxs" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b") : configmap "client-ca" not found Feb 19 03:05:04.218389 master-0 kubenswrapper[7776]: E0219 03:05:04.214099 7776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 19 03:05:04.218389 master-0 kubenswrapper[7776]: E0219 03:05:04.214214 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert podName:4f812767-d78d-494a-a167-ca7de3af6a0b nodeName:}" failed. No retries permitted until 2026-02-19 03:05:04.714179922 +0000 UTC m=+11.053864440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert") pod "controller-manager-66b45cc56c-ghkxs" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b") : secret "serving-cert" not found Feb 19 03:05:04.218389 master-0 kubenswrapper[7776]: I0219 03:05:04.215250 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-proxy-ca-bundles\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.251352 master-0 kubenswrapper[7776]: I0219 03:05:04.246841 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57zjl\" (UniqueName: \"kubernetes.io/projected/4f812767-d78d-494a-a167-ca7de3af6a0b-kube-api-access-57zjl\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.578504 master-0 kubenswrapper[7776]: I0219 03:05:04.578459 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:05:04.600952 master-0 kubenswrapper[7776]: I0219 03:05:04.600900 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:05:04.719668 master-0 kubenswrapper[7776]: I0219 03:05:04.719605 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.719965 master-0 
kubenswrapper[7776]: I0219 03:05:04.719771 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:04.719965 master-0 kubenswrapper[7776]: E0219 03:05:04.719884 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:04.719965 master-0 kubenswrapper[7776]: E0219 03:05:04.719947 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca podName:4f812767-d78d-494a-a167-ca7de3af6a0b nodeName:}" failed. No retries permitted until 2026-02-19 03:05:05.719927366 +0000 UTC m=+12.059611884 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca") pod "controller-manager-66b45cc56c-ghkxs" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b") : configmap "client-ca" not found Feb 19 03:05:04.720885 master-0 kubenswrapper[7776]: E0219 03:05:04.720362 7776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 19 03:05:04.720885 master-0 kubenswrapper[7776]: E0219 03:05:04.720397 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert podName:4f812767-d78d-494a-a167-ca7de3af6a0b nodeName:}" failed. No retries permitted until 2026-02-19 03:05:05.720386778 +0000 UTC m=+12.060071296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert") pod "controller-manager-66b45cc56c-ghkxs" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b") : secret "serving-cert" not found Feb 19 03:05:05.227571 master-0 kubenswrapper[7776]: I0219 03:05:05.227476 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:05.228875 master-0 kubenswrapper[7776]: E0219 03:05:05.227735 7776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 19 03:05:05.228875 master-0 kubenswrapper[7776]: E0219 03:05:05.227850 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert podName:550c53dc-6bb0-49af-adec-0fe197343434 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:09.227825527 +0000 UTC m=+15.567510055 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert") pod "route-controller-manager-7867b8fb7b-r22wv" (UID: "550c53dc-6bb0-49af-adec-0fe197343434") : secret "serving-cert" not found Feb 19 03:05:05.228875 master-0 kubenswrapper[7776]: I0219 03:05:05.228116 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:05.228875 master-0 kubenswrapper[7776]: E0219 03:05:05.228246 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:05.228875 master-0 kubenswrapper[7776]: E0219 03:05:05.228336 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca podName:550c53dc-6bb0-49af-adec-0fe197343434 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:09.22831376 +0000 UTC m=+15.567998278 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca") pod "route-controller-manager-7867b8fb7b-r22wv" (UID: "550c53dc-6bb0-49af-adec-0fe197343434") : configmap "client-ca" not found Feb 19 03:05:05.736171 master-0 kubenswrapper[7776]: I0219 03:05:05.736106 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:05.736405 master-0 kubenswrapper[7776]: I0219 03:05:05.736248 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:05.736535 master-0 kubenswrapper[7776]: E0219 03:05:05.736497 7776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 19 03:05:05.736649 master-0 kubenswrapper[7776]: E0219 03:05:05.736583 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:05.736708 master-0 kubenswrapper[7776]: E0219 03:05:05.736608 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert podName:4f812767-d78d-494a-a167-ca7de3af6a0b nodeName:}" failed. No retries permitted until 2026-02-19 03:05:07.736582882 +0000 UTC m=+14.076267440 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert") pod "controller-manager-66b45cc56c-ghkxs" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b") : secret "serving-cert" not found Feb 19 03:05:05.736766 master-0 kubenswrapper[7776]: E0219 03:05:05.736737 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca podName:4f812767-d78d-494a-a167-ca7de3af6a0b nodeName:}" failed. No retries permitted until 2026-02-19 03:05:07.736705875 +0000 UTC m=+14.076390383 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca") pod "controller-manager-66b45cc56c-ghkxs" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b") : configmap "client-ca" not found Feb 19 03:05:05.847552 master-0 kubenswrapper[7776]: I0219 03:05:05.847494 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793b5a19-73fe-4f27-a2fd-b52d06ea4af8" path="/var/lib/kubelet/pods/793b5a19-73fe-4f27-a2fd-b52d06ea4af8/volumes" Feb 19 03:05:05.924459 master-0 kubenswrapper[7776]: I0219 03:05:05.924412 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:05:07.764138 master-0 kubenswrapper[7776]: I0219 03:05:07.763762 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:07.764872 master-0 kubenswrapper[7776]: I0219 03:05:07.764198 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:07.764872 master-0 kubenswrapper[7776]: E0219 03:05:07.763990 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:07.764872 master-0 kubenswrapper[7776]: E0219 03:05:07.764324 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca podName:4f812767-d78d-494a-a167-ca7de3af6a0b nodeName:}" failed. No retries permitted until 2026-02-19 03:05:11.764301572 +0000 UTC m=+18.103986160 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca") pod "controller-manager-66b45cc56c-ghkxs" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b") : configmap "client-ca" not found Feb 19 03:05:07.769553 master-0 kubenswrapper[7776]: I0219 03:05:07.769498 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:08.053203 master-0 kubenswrapper[7776]: I0219 03:05:08.053084 7776 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerID="7cf42ee60fa4397f21a2d208681ed170f135d22ae88345ec4aa86dba915a0cc1" exitCode=0 Feb 19 03:05:08.053203 master-0 kubenswrapper[7776]: I0219 03:05:08.053155 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerDied","Data":"7cf42ee60fa4397f21a2d208681ed170f135d22ae88345ec4aa86dba915a0cc1"} Feb 19 03:05:08.053663 master-0 kubenswrapper[7776]: I0219 03:05:08.053636 7776 scope.go:117] "RemoveContainer" containerID="7cf42ee60fa4397f21a2d208681ed170f135d22ae88345ec4aa86dba915a0cc1" Feb 19 03:05:08.054377 master-0 kubenswrapper[7776]: I0219 03:05:08.054166 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" event={"ID":"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333","Type":"ContainerStarted","Data":"e01bf7d4c559915b2a5ff79bf9dc359fe2aeec2863993dd1c97dd95da4862d3c"} Feb 19 03:05:08.893436 master-0 kubenswrapper[7776]: I0219 03:05:08.893341 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:05:08.920159 master-0 kubenswrapper[7776]: I0219 03:05:08.920084 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:05:09.057948 master-0 kubenswrapper[7776]: I0219 03:05:09.057830 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" event={"ID":"2b9d54aa-5f71-4a82-8e71-401ed3083a13","Type":"ContainerStarted","Data":"1cbe35c756f9160518273575bc2e58e01f81643b6820032d740b2e63916651c9"} Feb 19 03:05:09.059868 master-0 kubenswrapper[7776]: I0219 03:05:09.059821 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerStarted","Data":"b94ac180c85fc64700e5f51d1991f701623c14fa47c5cdb818d4e8a2ca91669a"} Feb 19 03:05:09.060059 master-0 kubenswrapper[7776]: I0219 03:05:09.060013 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:05:09.291306 master-0 kubenswrapper[7776]: I0219 03:05:09.291097 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca\") pod 
\"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:09.291306 master-0 kubenswrapper[7776]: E0219 03:05:09.291283 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:09.291306 master-0 kubenswrapper[7776]: I0219 03:05:09.291309 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:09.291615 master-0 kubenswrapper[7776]: E0219 03:05:09.291347 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca podName:550c53dc-6bb0-49af-adec-0fe197343434 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:17.291331385 +0000 UTC m=+23.631015893 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca") pod "route-controller-manager-7867b8fb7b-r22wv" (UID: "550c53dc-6bb0-49af-adec-0fe197343434") : configmap "client-ca" not found Feb 19 03:05:09.291615 master-0 kubenswrapper[7776]: E0219 03:05:09.291434 7776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 19 03:05:09.291615 master-0 kubenswrapper[7776]: E0219 03:05:09.291493 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert podName:550c53dc-6bb0-49af-adec-0fe197343434 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:17.291473729 +0000 UTC m=+23.631158277 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert") pod "route-controller-manager-7867b8fb7b-r22wv" (UID: "550c53dc-6bb0-49af-adec-0fe197343434") : secret "serving-cert" not found Feb 19 03:05:10.245222 master-0 kubenswrapper[7776]: I0219 03:05:10.245168 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g"] Feb 19 03:05:10.245892 master-0 kubenswrapper[7776]: I0219 03:05:10.245839 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g" Feb 19 03:05:10.247351 master-0 kubenswrapper[7776]: I0219 03:05:10.247309 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 03:05:10.247351 master-0 kubenswrapper[7776]: I0219 03:05:10.247345 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 03:05:10.258164 master-0 kubenswrapper[7776]: I0219 03:05:10.257902 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g"] Feb 19 03:05:10.304705 master-0 kubenswrapper[7776]: I0219 03:05:10.304638 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkm2l\" (UniqueName: \"kubernetes.io/projected/c4ed0c32-c13f-4c72-b83f-9af19b2950a3-kube-api-access-rkm2l\") pod \"migrator-5c85bff57-85d6g\" (UID: \"c4ed0c32-c13f-4c72-b83f-9af19b2950a3\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g" Feb 19 03:05:10.405513 master-0 kubenswrapper[7776]: I0219 03:05:10.405460 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkm2l\" (UniqueName: \"kubernetes.io/projected/c4ed0c32-c13f-4c72-b83f-9af19b2950a3-kube-api-access-rkm2l\") pod \"migrator-5c85bff57-85d6g\" (UID: \"c4ed0c32-c13f-4c72-b83f-9af19b2950a3\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g" Feb 19 03:05:10.434234 master-0 kubenswrapper[7776]: I0219 03:05:10.434158 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkm2l\" (UniqueName: \"kubernetes.io/projected/c4ed0c32-c13f-4c72-b83f-9af19b2950a3-kube-api-access-rkm2l\") pod \"migrator-5c85bff57-85d6g\" (UID: \"c4ed0c32-c13f-4c72-b83f-9af19b2950a3\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g" Feb 19 03:05:10.565486 master-0 kubenswrapper[7776]: I0219 03:05:10.565352 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g" Feb 19 03:05:10.608460 master-0 kubenswrapper[7776]: I0219 03:05:10.608050 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:05:10.608460 master-0 kubenswrapper[7776]: I0219 03:05:10.608463 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:05:10.608785 master-0 kubenswrapper[7776]: I0219 03:05:10.608502 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:05:10.608785 master-0 kubenswrapper[7776]: E0219 03:05:10.608642 7776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 19 03:05:10.608785 master-0 kubenswrapper[7776]: E0219 03:05:10.608699 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls podName:80c48134-cb22-4cf9-b076-ce39af2f4113 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:26.608682315 +0000 UTC m=+32.948366853 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-2vmxq" (UID: "80c48134-cb22-4cf9-b076-ce39af2f4113") : secret "cluster-monitoring-operator-tls" not found Feb 19 03:05:10.609163 master-0 kubenswrapper[7776]: I0219 03:05:10.609119 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:05:10.609301 master-0 kubenswrapper[7776]: I0219 03:05:10.609171 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:05:10.609301 master-0 kubenswrapper[7776]: I0219 03:05:10.609198 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:05:10.609301 master-0 kubenswrapper[7776]: I0219 03:05:10.609226 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:05:10.609301 master-0 kubenswrapper[7776]: I0219 03:05:10.609280 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:05:10.609644 master-0 kubenswrapper[7776]: I0219 03:05:10.609319 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:05:10.609644 master-0 kubenswrapper[7776]: I0219 03:05:10.609454 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:05:10.609644 master-0 
kubenswrapper[7776]: I0219 03:05:10.609502 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:05:10.609835 master-0 kubenswrapper[7776]: E0219 03:05:10.609681 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 19 03:05:10.609835 master-0 kubenswrapper[7776]: E0219 03:05:10.609738 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert podName:c50a2aec-7ed0-4114-8b25-19579fe931cb nodeName:}" failed. No retries permitted until 2026-02-19 03:05:26.609721852 +0000 UTC m=+32.949406380 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert") pod "catalog-operator-596f79dd6f-sbzsk" (UID: "c50a2aec-7ed0-4114-8b25-19579fe931cb") : secret "catalog-operator-serving-cert" not found Feb 19 03:05:10.611413 master-0 kubenswrapper[7776]: E0219 03:05:10.609870 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 19 03:05:10.611413 master-0 kubenswrapper[7776]: E0219 03:05:10.609952 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert podName:98ac5423-b231-44e5-9545-424d635ed6ee nodeName:}" failed. No retries permitted until 2026-02-19 03:05:26.609897897 +0000 UTC m=+32.949582435 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tbg8" (UID: "98ac5423-b231-44e5-9545-424d635ed6ee") : secret "package-server-manager-serving-cert" not found Feb 19 03:05:10.611413 master-0 kubenswrapper[7776]: E0219 03:05:10.610009 7776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 19 03:05:10.611413 master-0 kubenswrapper[7776]: E0219 03:05:10.610035 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics podName:58c6f5a2-c0a8-4636-a057-cedbe0151579 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:26.610026691 +0000 UTC m=+32.949711219 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-xxdh5" (UID: "58c6f5a2-c0a8-4636-a057-cedbe0151579") : secret "marketplace-operator-metrics" not found Feb 19 03:05:10.611413 master-0 kubenswrapper[7776]: E0219 03:05:10.610051 7776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 19 03:05:10.611413 master-0 kubenswrapper[7776]: E0219 03:05:10.610147 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert podName:b283bd8e-3339-4701-ae3c-f009e498b7d4 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:26.610122313 +0000 UTC m=+32.949806831 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert") pod "olm-operator-5499d7f7bb-kk77t" (UID: "b283bd8e-3339-4701-ae3c-f009e498b7d4") : secret "olm-operator-serving-cert" not found Feb 19 03:05:10.613395 master-0 kubenswrapper[7776]: I0219 03:05:10.613327 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:05:10.615902 master-0 kubenswrapper[7776]: I0219 03:05:10.614144 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:05:10.615902 master-0 kubenswrapper[7776]: I0219 03:05:10.615350 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-dsxxt\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:05:10.615902 master-0 kubenswrapper[7776]: I0219 03:05:10.615596 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:05:10.615902 master-0 kubenswrapper[7776]: I0219 03:05:10.615879 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:05:10.619127 master-0 kubenswrapper[7776]: I0219 03:05:10.619042 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:05:10.697931 master-0 kubenswrapper[7776]: I0219 03:05:10.697861 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:05:10.700478 master-0 kubenswrapper[7776]: I0219 03:05:10.700446 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:05:10.700478 master-0 kubenswrapper[7776]: I0219 03:05:10.700466 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:05:10.707983 master-0 kubenswrapper[7776]: I0219 03:05:10.707919 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:05:10.710154 master-0 kubenswrapper[7776]: I0219 03:05:10.710091 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:05:10.710154 master-0 kubenswrapper[7776]: I0219 03:05:10.710136 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:05:10.710424 master-0 kubenswrapper[7776]: E0219 03:05:10.710288 7776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 19 03:05:10.710516 master-0 kubenswrapper[7776]: E0219 03:05:10.710488 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs podName:947faa21-7f67-4c7e-abb0-443432f38961 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:26.710468364 +0000 UTC m=+33.050152892 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-q8pfv" (UID: "947faa21-7f67-4c7e-abb0-443432f38961") : secret "multus-admission-controller-secret" not found Feb 19 03:05:10.710882 master-0 kubenswrapper[7776]: E0219 03:05:10.710842 7776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 19 03:05:10.710882 master-0 kubenswrapper[7776]: E0219 03:05:10.710874 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs podName:6ae2cbe0-aa0a-4f26-994b-660fb962d995 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:26.710866275 +0000 UTC m=+33.050550793 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs") pod "network-metrics-daemon-hspwc" (UID: "6ae2cbe0-aa0a-4f26-994b-660fb962d995") : secret "metrics-daemon-secret" not found Feb 19 03:05:10.711093 master-0 kubenswrapper[7776]: I0219 03:05:10.710924 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:05:10.753880 master-0 kubenswrapper[7776]: I0219 03:05:10.753759 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g"] Feb 19 03:05:10.762062 master-0 kubenswrapper[7776]: W0219 03:05:10.761776 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4ed0c32_c13f_4c72_b83f_9af19b2950a3.slice/crio-499dfae4e38579ddc7dbe458f0d782fd925c68bc3e1e204ec2926928e4d6fb86 WatchSource:0}: Error finding container 499dfae4e38579ddc7dbe458f0d782fd925c68bc3e1e204ec2926928e4d6fb86: Status 404 returned error can't find the container with id 499dfae4e38579ddc7dbe458f0d782fd925c68bc3e1e204ec2926928e4d6fb86 Feb 19 03:05:10.941289 master-0 kubenswrapper[7776]: I0219 03:05:10.933299 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-qcd49"] Feb 19 03:05:10.973392 master-0 kubenswrapper[7776]: I0219 03:05:10.969003 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh"] Feb 19 03:05:10.978127 master-0 kubenswrapper[7776]: W0219 03:05:10.978072 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff96ce8_6427_4a42_afa6_8b8bc778f094.slice/crio-9e00ccb287dd8b9291c3306328c5788a23d37066197f78308e926a653d3929ef WatchSource:0}: Error finding container 9e00ccb287dd8b9291c3306328c5788a23d37066197f78308e926a653d3929ef: Status 404 returned error can't find the container with id 9e00ccb287dd8b9291c3306328c5788a23d37066197f78308e926a653d3929ef Feb 19 03:05:10.983320 master-0 kubenswrapper[7776]: W0219 03:05:10.983242 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda59746bb_7d76_4fd7_8323_5b92be63afb9.slice/crio-da07760d7571f3892e97b1fc3d10821bdf692b5194a6d30a2c724a9ebebef870 WatchSource:0}: Error finding container da07760d7571f3892e97b1fc3d10821bdf692b5194a6d30a2c724a9ebebef870: Status 404 returned error can't find the container with id da07760d7571f3892e97b1fc3d10821bdf692b5194a6d30a2c724a9ebebef870 Feb 19 03:05:10.992803 master-0 kubenswrapper[7776]: I0219 03:05:10.992769 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-jlnvw"] Feb 19 03:05:11.000408 master-0 kubenswrapper[7776]: W0219 03:05:11.000327 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67f4e002_26fb_41e3_abdb_f4928b6c561f.slice/crio-8f207fe64bef8b420052896b2bfb189ccc2b431030abfa5bd7579048d3c21b98 WatchSource:0}: Error finding container 8f207fe64bef8b420052896b2bfb189ccc2b431030abfa5bd7579048d3c21b98: Status 404 returned error can't find the container with id 8f207fe64bef8b420052896b2bfb189ccc2b431030abfa5bd7579048d3c21b98 Feb 19 03:05:11.009506 master-0 
kubenswrapper[7776]: I0219 03:05:11.009462 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb"] Feb 19 03:05:11.015994 master-0 kubenswrapper[7776]: W0219 03:05:11.015951 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2acaff2d_b9d0_4ed5_9de3_48029eaa8ce5.slice/crio-98805e3ec9d2d2f3839c03ed948de103105a5f1210afc18e423fd6e7cba8b344 WatchSource:0}: Error finding container 98805e3ec9d2d2f3839c03ed948de103105a5f1210afc18e423fd6e7cba8b344: Status 404 returned error can't find the container with id 98805e3ec9d2d2f3839c03ed948de103105a5f1210afc18e423fd6e7cba8b344 Feb 19 03:05:11.075054 master-0 kubenswrapper[7776]: I0219 03:05:11.074986 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerStarted","Data":"9e00ccb287dd8b9291c3306328c5788a23d37066197f78308e926a653d3929ef"} Feb 19 03:05:11.076177 master-0 kubenswrapper[7776]: I0219 03:05:11.076088 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g" event={"ID":"c4ed0c32-c13f-4c72-b83f-9af19b2950a3","Type":"ContainerStarted","Data":"499dfae4e38579ddc7dbe458f0d782fd925c68bc3e1e204ec2926928e4d6fb86"} Feb 19 03:05:11.076935 master-0 kubenswrapper[7776]: I0219 03:05:11.076901 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" event={"ID":"67f4e002-26fb-41e3-abdb-f4928b6c561f","Type":"ContainerStarted","Data":"8f207fe64bef8b420052896b2bfb189ccc2b431030abfa5bd7579048d3c21b98"} Feb 19 03:05:11.077713 master-0 kubenswrapper[7776]: I0219 03:05:11.077690 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" event={"ID":"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae","Type":"ContainerStarted","Data":"63768e6c1bd9c6cb1c52062b8b293c9b2621a3ba99ae016ced4ba8c856a3dbff"} Feb 19 03:05:11.084078 master-0 kubenswrapper[7776]: I0219 03:05:11.084009 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" event={"ID":"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5","Type":"ContainerStarted","Data":"98805e3ec9d2d2f3839c03ed948de103105a5f1210afc18e423fd6e7cba8b344"} Feb 19 03:05:11.085171 master-0 kubenswrapper[7776]: I0219 03:05:11.085110 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" event={"ID":"a59746bb-7d76-4fd7-8323-5b92be63afb9","Type":"ContainerStarted","Data":"da07760d7571f3892e97b1fc3d10821bdf692b5194a6d30a2c724a9ebebef870"} Feb 19 03:05:11.831570 master-0 kubenswrapper[7776]: I0219 03:05:11.831206 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:11.832510 master-0 kubenswrapper[7776]: E0219 03:05:11.831399 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:11.832510 master-0 kubenswrapper[7776]: E0219 
03:05:11.831693 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca podName:4f812767-d78d-494a-a167-ca7de3af6a0b nodeName:}" failed. No retries permitted until 2026-02-19 03:05:19.831672843 +0000 UTC m=+26.171357361 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca") pod "controller-manager-66b45cc56c-ghkxs" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b") : configmap "client-ca" not found Feb 19 03:05:11.923564 master-0 kubenswrapper[7776]: I0219 03:05:11.923503 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:05:14.097470 master-0 kubenswrapper[7776]: I0219 03:05:14.097128 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" event={"ID":"d6fae256-6a2e-45e7-8f2f-d471f46ad3b2","Type":"ContainerStarted","Data":"ea3fbe70d15235f707a7c57be5fd384739f1296cedb5a5f878d80b5d8be3b136"} Feb 19 03:05:14.099976 master-0 kubenswrapper[7776]: I0219 03:05:14.099926 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" event={"ID":"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651","Type":"ContainerStarted","Data":"9bdce3951fee565e17f2d28d3fa9bab8451b2a0d85b9fde5d5703fd5c2bc6773"} Feb 19 03:05:14.101672 master-0 kubenswrapper[7776]: I0219 03:05:14.101649 7776 generic.go:334] "Generic (PLEG): container finished" podID="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" containerID="ad34f3a66db7717f06a16858a5fed120d78982f25b57db7cc0d0805ee1a11f34" exitCode=0 Feb 19 03:05:14.101756 master-0 kubenswrapper[7776]: I0219 03:05:14.101691 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" event={"ID":"1f9e07d3-d157-4948-84a6-04b8aa7eef4c","Type":"ContainerDied","Data":"ad34f3a66db7717f06a16858a5fed120d78982f25b57db7cc0d0805ee1a11f34"} Feb 19 03:05:14.104610 master-0 kubenswrapper[7776]: I0219 03:05:14.104224 7776 generic.go:334] "Generic (PLEG): container finished" podID="2b9d54aa-5f71-4a82-8e71-401ed3083a13" containerID="1cbe35c756f9160518273575bc2e58e01f81643b6820032d740b2e63916651c9" exitCode=0 Feb 19 03:05:14.104610 master-0 kubenswrapper[7776]: I0219 03:05:14.104283 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" event={"ID":"2b9d54aa-5f71-4a82-8e71-401ed3083a13","Type":"ContainerDied","Data":"1cbe35c756f9160518273575bc2e58e01f81643b6820032d740b2e63916651c9"} Feb 19 03:05:14.104730 master-0 kubenswrapper[7776]: I0219 03:05:14.104690 7776 scope.go:117] "RemoveContainer" containerID="1cbe35c756f9160518273575bc2e58e01f81643b6820032d740b2e63916651c9" Feb 19 03:05:14.794349 master-0 kubenswrapper[7776]: I0219 03:05:14.792352 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd"] Feb 19 03:05:14.794349 master-0 kubenswrapper[7776]: I0219 03:05:14.793027 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" Feb 19 03:05:14.808301 master-0 kubenswrapper[7776]: I0219 03:05:14.806095 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd"] Feb 19 03:05:14.889015 master-0 kubenswrapper[7776]: I0219 03:05:14.888965 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvxh\" (UniqueName: \"kubernetes.io/projected/c8f325fb-0075-4a18-ba7e-669ab19bc91a-kube-api-access-jxvxh\") pod \"csi-snapshot-controller-6847bb4785-6trsd\" (UID: \"c8f325fb-0075-4a18-ba7e-669ab19bc91a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" Feb 19 03:05:14.990579 master-0 kubenswrapper[7776]: I0219 03:05:14.990531 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxvxh\" (UniqueName: \"kubernetes.io/projected/c8f325fb-0075-4a18-ba7e-669ab19bc91a-kube-api-access-jxvxh\") pod \"csi-snapshot-controller-6847bb4785-6trsd\" (UID: \"c8f325fb-0075-4a18-ba7e-669ab19bc91a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" Feb 19 03:05:15.014411 master-0 kubenswrapper[7776]: I0219 03:05:15.014360 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxvxh\" (UniqueName: \"kubernetes.io/projected/c8f325fb-0075-4a18-ba7e-669ab19bc91a-kube-api-access-jxvxh\") pod \"csi-snapshot-controller-6847bb4785-6trsd\" (UID: \"c8f325fb-0075-4a18-ba7e-669ab19bc91a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" Feb 19 03:05:15.109896 master-0 kubenswrapper[7776]: I0219 03:05:15.109795 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g" event={"ID":"c4ed0c32-c13f-4c72-b83f-9af19b2950a3","Type":"ContainerStarted","Data":"2f6cb48aff1435d9f43ef6c2c4bbe5bf0acd116aa21954611807f3226862ca5c"} Feb 19 03:05:15.114562 master-0 kubenswrapper[7776]: I0219 03:05:15.114518 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" event={"ID":"2b9d54aa-5f71-4a82-8e71-401ed3083a13","Type":"ContainerStarted","Data":"2cdc1a180a1258ac65d49719e5369984499472e93cb72520a18ffeecda800795"} Feb 19 03:05:15.168237 master-0 kubenswrapper[7776]: I0219 03:05:15.168172 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" Feb 19 03:05:17.124763 master-0 kubenswrapper[7776]: I0219 03:05:17.124355 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-kvvll" event={"ID":"decd8c56-e0f0-4119-917f-56652c8f8372","Type":"ContainerStarted","Data":"3d58300e9d2d7a15fa7c2d9fc6a45afecdb7f2732fd3a7a30683cd8a4e68a4a6"} Feb 19 03:05:17.321930 master-0 kubenswrapper[7776]: I0219 03:05:17.320947 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:17.321930 master-0 kubenswrapper[7776]: E0219 03:05:17.321117 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:17.321930 master-0 kubenswrapper[7776]: I0219 03:05:17.321194 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:17.321930 master-0 kubenswrapper[7776]: E0219 03:05:17.321235 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca podName:550c53dc-6bb0-49af-adec-0fe197343434 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:33.321206565 +0000 UTC m=+39.660891123 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca") pod "route-controller-manager-7867b8fb7b-r22wv" (UID: "550c53dc-6bb0-49af-adec-0fe197343434") : configmap "client-ca" not found Feb 19 03:05:17.329371 master-0 kubenswrapper[7776]: I0219 03:05:17.329335 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert\") pod \"route-controller-manager-7867b8fb7b-r22wv\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:19.856900 master-0 kubenswrapper[7776]: I0219 03:05:19.856834 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca\") pod \"controller-manager-66b45cc56c-ghkxs\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:19.857641 master-0 kubenswrapper[7776]: E0219 03:05:19.856972 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:19.857641 master-0 kubenswrapper[7776]: E0219 03:05:19.857062 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca podName:4f812767-d78d-494a-a167-ca7de3af6a0b nodeName:}" failed. 
No retries permitted until 2026-02-19 03:05:35.857037832 +0000 UTC m=+42.196722400 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca") pod "controller-manager-66b45cc56c-ghkxs" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b") : configmap "client-ca" not found Feb 19 03:05:20.591423 master-0 kubenswrapper[7776]: I0219 03:05:20.584166 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 19 03:05:20.591423 master-0 kubenswrapper[7776]: I0219 03:05:20.585293 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:05:20.600279 master-0 kubenswrapper[7776]: I0219 03:05:20.600218 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 19 03:05:20.667549 master-0 kubenswrapper[7776]: I0219 03:05:20.667458 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba1213d-8a7d-4b99-857f-b66578cc2bec-kube-api-access\") pod \"installer-1-master-0\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:05:20.667549 master-0 kubenswrapper[7776]: I0219 03:05:20.667550 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:05:20.667960 master-0 kubenswrapper[7776]: I0219 03:05:20.667608 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-var-lock\") pod \"installer-1-master-0\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:05:20.769367 master-0 kubenswrapper[7776]: I0219 03:05:20.769221 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-var-lock\") pod \"installer-1-master-0\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:05:20.769635 master-0 kubenswrapper[7776]: I0219 03:05:20.769467 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-var-lock\") pod \"installer-1-master-0\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:05:20.769781 master-0 kubenswrapper[7776]: I0219 03:05:20.769699 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba1213d-8a7d-4b99-857f-b66578cc2bec-kube-api-access\") pod \"installer-1-master-0\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:05:20.769912 master-0 kubenswrapper[7776]: I0219 03:05:20.769867 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:05:20.770131 master-0 kubenswrapper[7776]: I0219 03:05:20.770071 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:05:21.137967 master-0 kubenswrapper[7776]: I0219 03:05:21.137880 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 19 03:05:21.185183 master-0 kubenswrapper[7776]: I0219 03:05:21.185125 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba1213d-8a7d-4b99-857f-b66578cc2bec-kube-api-access\") pod \"installer-1-master-0\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:05:21.217904 master-0 kubenswrapper[7776]: I0219 03:05:21.217811 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:05:21.392028 master-0 kubenswrapper[7776]: I0219 03:05:21.391901 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-546884889b-hv7vs"] Feb 19 03:05:21.392737 master-0 kubenswrapper[7776]: I0219 03:05:21.392709 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.403056 master-0 kubenswrapper[7776]: W0219 03:05:21.403003 7776 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-0' and this object Feb 19 03:05:21.403152 master-0 kubenswrapper[7776]: E0219 03:05:21.403072 7776 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 19 03:05:21.403152 master-0 kubenswrapper[7776]: W0219 03:05:21.403106 7776 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-0' and this object Feb 19 03:05:21.403152 master-0 kubenswrapper[7776]: E0219 03:05:21.403117 7776 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 19 
03:05:21.403152 master-0 kubenswrapper[7776]: W0219 03:05:21.403143 7776 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-0' and this object Feb 19 03:05:21.403309 master-0 kubenswrapper[7776]: E0219 03:05:21.403153 7776 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 19 03:05:21.403309 master-0 kubenswrapper[7776]: W0219 03:05:21.403183 7776 reflector.go:561] object-"openshift-apiserver"/"audit-0": failed to list *v1.ConfigMap: configmaps "audit-0" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-0' and this object Feb 19 03:05:21.403309 master-0 kubenswrapper[7776]: E0219 03:05:21.403193 7776 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-0\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 19 03:05:21.403309 master-0 kubenswrapper[7776]: W0219 03:05:21.403221 7776 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-0' and this object Feb 19 03:05:21.403309 master-0 kubenswrapper[7776]: E0219 03:05:21.403231 7776 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 19 03:05:21.403309 master-0 kubenswrapper[7776]: W0219 03:05:21.403275 7776 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-0' and this object Feb 19 03:05:21.403309 master-0 kubenswrapper[7776]: E0219 03:05:21.403287 7776 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 19 03:05:21.407784 master-0 kubenswrapper[7776]: W0219 03:05:21.407738 7776 reflector.go:561] 
object-"openshift-apiserver"/"encryption-config-0": failed to list *v1.Secret: secrets "encryption-config-0" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-0' and this object Feb 19 03:05:21.407854 master-0 kubenswrapper[7776]: E0219 03:05:21.407800 7776 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-0\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 19 03:05:21.407893 master-0 kubenswrapper[7776]: W0219 03:05:21.407856 7776 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-0' and this object Feb 19 03:05:21.407893 master-0 kubenswrapper[7776]: E0219 03:05:21.407871 7776 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 19 03:05:21.407951 master-0 kubenswrapper[7776]: W0219 03:05:21.407908 7776 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-0' and this object Feb 19 03:05:21.407951 master-0 kubenswrapper[7776]: E0219 03:05:21.407922 7776 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 19 03:05:21.408008 master-0 kubenswrapper[7776]: W0219 03:05:21.407959 7776 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-0' and this object Feb 19 03:05:21.408008 master-0 kubenswrapper[7776]: E0219 03:05:21.407973 7776 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 19 03:05:21.534209 master-0 kubenswrapper[7776]: I0219 03:05:21.533917 7776 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-546884889b-hv7vs"] Feb 19 03:05:21.578894 master-0 kubenswrapper[7776]: I0219 03:05:21.578474 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-serving-cert\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.578894 master-0 kubenswrapper[7776]: I0219 03:05:21.578539 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.578894 master-0 kubenswrapper[7776]: I0219 03:05:21.578569 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-config\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.578894 master-0 kubenswrapper[7776]: I0219 03:05:21.578589 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-serving-ca\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.578894 master-0 kubenswrapper[7776]: I0219 03:05:21.578610 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-image-import-ca\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.578894 master-0 kubenswrapper[7776]: I0219 03:05:21.578669 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blcjh\" (UniqueName: \"kubernetes.io/projected/309ccdea-4eb5-4fcd-957f-1fb992fdef25-kube-api-access-blcjh\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.578894 master-0 kubenswrapper[7776]: I0219 03:05:21.578692 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-trusted-ca-bundle\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.578894 master-0 kubenswrapper[7776]: I0219 03:05:21.578722 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-encryption-config\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.578894 master-0 kubenswrapper[7776]: 
I0219 03:05:21.578801 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-client\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.578894 master-0 kubenswrapper[7776]: I0219 03:05:21.578822 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit-dir\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.578894 master-0 kubenswrapper[7776]: I0219 03:05:21.578844 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-node-pullsecrets\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680008 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-client\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680050 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit-dir\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680066 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-node-pullsecrets\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680096 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-serving-cert\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680117 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680146 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-serving-ca\") pod \"apiserver-546884889b-hv7vs\" (UID: 
\"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680159 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-image-import-ca\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680175 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-config\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680203 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blcjh\" (UniqueName: \"kubernetes.io/projected/309ccdea-4eb5-4fcd-957f-1fb992fdef25-kube-api-access-blcjh\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680224 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-trusted-ca-bundle\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680242 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-encryption-config\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680460 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit-dir\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.691499 master-0 kubenswrapper[7776]: I0219 03:05:21.680520 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-node-pullsecrets\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:21.856688 master-0 kubenswrapper[7776]: I0219 03:05:21.856635 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b45cc56c-ghkxs"] Feb 19 03:05:21.856993 master-0 kubenswrapper[7776]: E0219 03:05:21.856962 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" podUID="4f812767-d78d-494a-a167-ca7de3af6a0b" Feb 19 03:05:21.880240 master-0 kubenswrapper[7776]: I0219 
03:05:21.879060 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv"] Feb 19 03:05:21.880240 master-0 kubenswrapper[7776]: E0219 03:05:21.879340 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" podUID="550c53dc-6bb0-49af-adec-0fe197343434" Feb 19 03:05:22.145378 master-0 kubenswrapper[7776]: I0219 03:05:22.143519 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:22.145378 master-0 kubenswrapper[7776]: I0219 03:05:22.143571 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:22.167233 master-0 kubenswrapper[7776]: I0219 03:05:22.167179 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:22.171154 master-0 kubenswrapper[7776]: I0219 03:05:22.171122 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:22.277156 master-0 kubenswrapper[7776]: I0219 03:05:22.276801 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Feb 19 03:05:22.281127 master-0 kubenswrapper[7776]: E0219 03:05:22.281091 7776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 19 03:05:22.281245 master-0 kubenswrapper[7776]: E0219 03:05:22.281170 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit podName:309ccdea-4eb5-4fcd-957f-1fb992fdef25 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:22.781146823 +0000 UTC m=+29.120831341 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit") pod "apiserver-546884889b-hv7vs" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25") : configmap "audit-0" not found Feb 19 03:05:22.285557 master-0 kubenswrapper[7776]: I0219 03:05:22.285298 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-proxy-ca-bundles\") pod \"4f812767-d78d-494a-a167-ca7de3af6a0b\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " Feb 19 03:05:22.285557 master-0 kubenswrapper[7776]: I0219 03:05:22.285332 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-config\") pod \"550c53dc-6bb0-49af-adec-0fe197343434\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " Feb 19 03:05:22.285557 master-0 kubenswrapper[7776]: I0219 03:05:22.285359 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57zjl\" (UniqueName: \"kubernetes.io/projected/4f812767-d78d-494a-a167-ca7de3af6a0b-kube-api-access-57zjl\") pod \"4f812767-d78d-494a-a167-ca7de3af6a0b\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " Feb 19 03:05:22.285557 master-0 kubenswrapper[7776]: I0219 03:05:22.285392 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr86w\" (UniqueName: \"kubernetes.io/projected/550c53dc-6bb0-49af-adec-0fe197343434-kube-api-access-hr86w\") pod \"550c53dc-6bb0-49af-adec-0fe197343434\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " Feb 19 03:05:22.285557 master-0 kubenswrapper[7776]: I0219 03:05:22.285414 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-config\") pod \"4f812767-d78d-494a-a167-ca7de3af6a0b\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " Feb 19 03:05:22.285557 master-0 kubenswrapper[7776]: I0219 03:05:22.285459 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert\") pod \"550c53dc-6bb0-49af-adec-0fe197343434\" (UID: \"550c53dc-6bb0-49af-adec-0fe197343434\") " Feb 19 03:05:22.285557 master-0 kubenswrapper[7776]: I0219 03:05:22.285501 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert\") pod \"4f812767-d78d-494a-a167-ca7de3af6a0b\" (UID: \"4f812767-d78d-494a-a167-ca7de3af6a0b\") " Feb 19 03:05:22.286968 master-0 kubenswrapper[7776]: I0219 03:05:22.286806 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4f812767-d78d-494a-a167-ca7de3af6a0b" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:22.286968 master-0 kubenswrapper[7776]: I0219 03:05:22.286827 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-config" (OuterVolumeSpecName: "config") pod "550c53dc-6bb0-49af-adec-0fe197343434" (UID: "550c53dc-6bb0-49af-adec-0fe197343434"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:22.287431 master-0 kubenswrapper[7776]: I0219 03:05:22.287325 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-config" (OuterVolumeSpecName: "config") pod "4f812767-d78d-494a-a167-ca7de3af6a0b" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:22.289045 master-0 kubenswrapper[7776]: I0219 03:05:22.289003 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4f812767-d78d-494a-a167-ca7de3af6a0b" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:05:22.289329 master-0 kubenswrapper[7776]: I0219 03:05:22.289132 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f812767-d78d-494a-a167-ca7de3af6a0b-kube-api-access-57zjl" (OuterVolumeSpecName: "kube-api-access-57zjl") pod "4f812767-d78d-494a-a167-ca7de3af6a0b" (UID: "4f812767-d78d-494a-a167-ca7de3af6a0b"). InnerVolumeSpecName "kube-api-access-57zjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:05:22.289329 master-0 kubenswrapper[7776]: I0219 03:05:22.289246 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "550c53dc-6bb0-49af-adec-0fe197343434" (UID: "550c53dc-6bb0-49af-adec-0fe197343434"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:05:22.289878 master-0 kubenswrapper[7776]: I0219 03:05:22.289793 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550c53dc-6bb0-49af-adec-0fe197343434-kube-api-access-hr86w" (OuterVolumeSpecName: "kube-api-access-hr86w") pod "550c53dc-6bb0-49af-adec-0fe197343434" (UID: "550c53dc-6bb0-49af-adec-0fe197343434"). InnerVolumeSpecName "kube-api-access-hr86w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:05:22.334977 master-0 kubenswrapper[7776]: I0219 03:05:22.334919 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 03:05:22.344630 master-0 kubenswrapper[7776]: I0219 03:05:22.344271 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-serving-cert\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:22.386936 master-0 kubenswrapper[7776]: I0219 03:05:22.386886 7776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f812767-d78d-494a-a167-ca7de3af6a0b-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:22.386936 master-0 kubenswrapper[7776]: I0219 03:05:22.386925 7776 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:22.386936 master-0 kubenswrapper[7776]: I0219 03:05:22.386939 7776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:22.387204 master-0 kubenswrapper[7776]: I0219 03:05:22.386952 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57zjl\" (UniqueName: \"kubernetes.io/projected/4f812767-d78d-494a-a167-ca7de3af6a0b-kube-api-access-57zjl\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:22.387204 master-0 kubenswrapper[7776]: I0219 03:05:22.386966 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr86w\" (UniqueName: \"kubernetes.io/projected/550c53dc-6bb0-49af-adec-0fe197343434-kube-api-access-hr86w\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:22.387204 master-0 kubenswrapper[7776]: I0219 03:05:22.386977 7776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:22.387204 master-0 kubenswrapper[7776]: I0219 03:05:22.386988 7776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550c53dc-6bb0-49af-adec-0fe197343434-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:22.495654 master-0 kubenswrapper[7776]: I0219 03:05:22.495484 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Feb 19 03:05:22.506907 master-0 kubenswrapper[7776]: I0219 03:05:22.506858 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-encryption-config\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:22.559822 master-0 kubenswrapper[7776]: I0219 03:05:22.559766 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 03:05:22.562112 master-0 kubenswrapper[7776]: I0219 03:05:22.562077 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-image-import-ca\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:22.576489 master-0 kubenswrapper[7776]: I0219 03:05:22.576451 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 03:05:22.662722 master-0 kubenswrapper[7776]: I0219 03:05:22.662658 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 03:05:22.664946 master-0 kubenswrapper[7776]: I0219 03:05:22.664898 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 03:05:22.666291 master-0 kubenswrapper[7776]: I0219 03:05:22.666269 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 03:05:22.672137 master-0 kubenswrapper[7776]: I0219 03:05:22.672099 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-serving-ca\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:22.672203 master-0 kubenswrapper[7776]: I0219 03:05:22.672154 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-config\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:22.672399 master-0 kubenswrapper[7776]: I0219 03:05:22.672366 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-trusted-ca-bundle\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:22.680945 master-0 kubenswrapper[7776]: E0219 03:05:22.680906 7776 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 19 03:05:22.681013 master-0 kubenswrapper[7776]: E0219 03:05:22.680983 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-client podName:309ccdea-4eb5-4fcd-957f-1fb992fdef25 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:23.180963115 +0000 UTC m=+29.520647733 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-client") pod "apiserver-546884889b-hv7vs" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:05:22.766099 master-0 kubenswrapper[7776]: I0219 03:05:22.765992 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 03:05:22.792391 master-0 kubenswrapper[7776]: I0219 03:05:22.792337 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:22.792593 master-0 kubenswrapper[7776]: E0219 03:05:22.792492 7776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 19 03:05:22.792631 master-0 kubenswrapper[7776]: E0219 03:05:22.792594 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit podName:309ccdea-4eb5-4fcd-957f-1fb992fdef25 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:23.792571179 +0000 UTC m=+30.132255797 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit") pod "apiserver-546884889b-hv7vs" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25") : configmap "audit-0" not found Feb 19 03:05:22.804036 master-0 kubenswrapper[7776]: E0219 03:05:22.803982 7776 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:05:22.804036 master-0 kubenswrapper[7776]: E0219 03:05:22.804047 7776 projected.go:194] Error preparing data for projected volume kube-api-access-blcjh for pod openshift-apiserver/apiserver-546884889b-hv7vs: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:05:22.804241 master-0 kubenswrapper[7776]: E0219 03:05:22.804125 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/309ccdea-4eb5-4fcd-957f-1fb992fdef25-kube-api-access-blcjh podName:309ccdea-4eb5-4fcd-957f-1fb992fdef25 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:23.304104108 +0000 UTC m=+29.643788726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-blcjh" (UniqueName: "kubernetes.io/projected/309ccdea-4eb5-4fcd-957f-1fb992fdef25-kube-api-access-blcjh") pod "apiserver-546884889b-hv7vs" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:05:22.881109 master-0 kubenswrapper[7776]: I0219 03:05:22.881032 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 03:05:23.151205 master-0 kubenswrapper[7776]: I0219 03:05:23.151149 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b45cc56c-ghkxs" Feb 19 03:05:23.151695 master-0 kubenswrapper[7776]: I0219 03:05:23.151180 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv" Feb 19 03:05:23.195572 master-0 kubenswrapper[7776]: I0219 03:05:23.195533 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv"] Feb 19 03:05:23.197070 master-0 kubenswrapper[7776]: I0219 03:05:23.196927 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m"] Feb 19 03:05:23.197350 master-0 kubenswrapper[7776]: I0219 03:05:23.197326 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-client\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:23.197739 master-0 kubenswrapper[7776]: I0219 03:05:23.197649 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.198153 master-0 kubenswrapper[7776]: I0219 03:05:23.197942 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv"] Feb 19 03:05:23.199460 master-0 kubenswrapper[7776]: I0219 03:05:23.199195 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 03:05:23.199839 master-0 kubenswrapper[7776]: I0219 03:05:23.199814 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 03:05:23.199914 master-0 kubenswrapper[7776]: I0219 03:05:23.199865 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 03:05:23.199976 master-0 kubenswrapper[7776]: I0219 03:05:23.199957 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 03:05:23.199976 master-0 kubenswrapper[7776]: I0219 03:05:23.199813 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 03:05:23.204691 master-0 kubenswrapper[7776]: I0219 03:05:23.204649 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m"] Feb 19 03:05:23.211191 master-0 kubenswrapper[7776]: I0219 03:05:23.211121 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-client\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:23.226410 master-0 kubenswrapper[7776]: I0219 03:05:23.226367 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b45cc56c-ghkxs"] Feb 19 03:05:23.230895 master-0 kubenswrapper[7776]: I0219 03:05:23.230859 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 19 03:05:23.234165 master-0 kubenswrapper[7776]: I0219 03:05:23.231294 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 19 03:05:23.234165 master-0 kubenswrapper[7776]: I0219 03:05:23.232959 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 19 03:05:23.235708 master-0 kubenswrapper[7776]: I0219 03:05:23.235681 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66b45cc56c-ghkxs"] Feb 19 03:05:23.238513 master-0 kubenswrapper[7776]: I0219 03:05:23.238485 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 19 03:05:23.298834 master-0 kubenswrapper[7776]: I0219 03:05:23.298795 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmc7d\" (UniqueName: \"kubernetes.io/projected/17c6b469-2a89-439f-93a7-7cda9b524426-kube-api-access-gmc7d\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.299014 master-0 kubenswrapper[7776]: I0219 03:05:23.298853 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.299014 master-0 kubenswrapper[7776]: I0219 03:05:23.298905 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c6b469-2a89-439f-93a7-7cda9b524426-serving-cert\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.299014 master-0 kubenswrapper[7776]: I0219 03:05:23.298931 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-config\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.299014 master-0 kubenswrapper[7776]: I0219 03:05:23.298987 7776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550c53dc-6bb0-49af-adec-0fe197343434-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:23.399976 master-0 kubenswrapper[7776]: I0219 03:05:23.399929 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmc7d\" (UniqueName: \"kubernetes.io/projected/17c6b469-2a89-439f-93a7-7cda9b524426-kube-api-access-gmc7d\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.399976 master-0 kubenswrapper[7776]: I0219 03:05:23.399986 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: 
\"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.400417 master-0 kubenswrapper[7776]: I0219 03:05:23.400378 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " pod="openshift-etcd/installer-1-master-0" Feb 19 03:05:23.400417 master-0 kubenswrapper[7776]: I0219 03:05:23.400406 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c6b469-2a89-439f-93a7-7cda9b524426-serving-cert\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.400537 master-0 kubenswrapper[7776]: I0219 03:05:23.400430 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-var-lock\") pod \"installer-1-master-0\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " pod="openshift-etcd/installer-1-master-0" Feb 19 03:05:23.400537 master-0 kubenswrapper[7776]: I0219 03:05:23.400453 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-config\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.400537 master-0 kubenswrapper[7776]: I0219 03:05:23.400484 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blcjh\" (UniqueName: \"kubernetes.io/projected/309ccdea-4eb5-4fcd-957f-1fb992fdef25-kube-api-access-blcjh\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:23.400537 master-0 kubenswrapper[7776]: I0219 03:05:23.400507 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2561caa0-5f79-496e-8fa7-a9692dca20be-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " pod="openshift-etcd/installer-1-master-0" Feb 19 03:05:23.400537 master-0 kubenswrapper[7776]: I0219 03:05:23.400532 7776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f812767-d78d-494a-a167-ca7de3af6a0b-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:23.400694 master-0 kubenswrapper[7776]: E0219 03:05:23.400581 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:23.401352 master-0 kubenswrapper[7776]: E0219 03:05:23.400785 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca podName:17c6b469-2a89-439f-93a7-7cda9b524426 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:23.90076928 +0000 UTC m=+30.240453798 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca") pod "route-controller-manager-67f784c959-vwd2m" (UID: "17c6b469-2a89-439f-93a7-7cda9b524426") : configmap "client-ca" not found Feb 19 03:05:23.401566 master-0 kubenswrapper[7776]: I0219 03:05:23.401545 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-config\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.403871 master-0 kubenswrapper[7776]: I0219 03:05:23.403855 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c6b469-2a89-439f-93a7-7cda9b524426-serving-cert\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.403992 master-0 kubenswrapper[7776]: I0219 03:05:23.403879 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blcjh\" (UniqueName: \"kubernetes.io/projected/309ccdea-4eb5-4fcd-957f-1fb992fdef25-kube-api-access-blcjh\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:23.422341 master-0 kubenswrapper[7776]: I0219 03:05:23.422280 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmc7d\" (UniqueName: \"kubernetes.io/projected/17c6b469-2a89-439f-93a7-7cda9b524426-kube-api-access-gmc7d\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.501289 master-0 kubenswrapper[7776]: I0219 03:05:23.501202 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-var-lock\") pod \"installer-1-master-0\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " pod="openshift-etcd/installer-1-master-0" Feb 19 03:05:23.501502 master-0 kubenswrapper[7776]: I0219 03:05:23.501345 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-var-lock\") pod \"installer-1-master-0\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " pod="openshift-etcd/installer-1-master-0" Feb 19 03:05:23.501502 master-0 kubenswrapper[7776]: I0219 03:05:23.501435 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2561caa0-5f79-496e-8fa7-a9692dca20be-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " pod="openshift-etcd/installer-1-master-0" Feb 19 03:05:23.501694 master-0 kubenswrapper[7776]: I0219 03:05:23.501661 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " pod="openshift-etcd/installer-1-master-0" Feb 19 03:05:23.501749 
master-0 kubenswrapper[7776]: I0219 03:05:23.501704 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " pod="openshift-etcd/installer-1-master-0" Feb 19 03:05:23.518918 master-0 kubenswrapper[7776]: I0219 03:05:23.518868 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2561caa0-5f79-496e-8fa7-a9692dca20be-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " pod="openshift-etcd/installer-1-master-0" Feb 19 03:05:23.560555 master-0 kubenswrapper[7776]: I0219 03:05:23.560488 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 19 03:05:23.810356 master-0 kubenswrapper[7776]: I0219 03:05:23.807353 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:23.810356 master-0 kubenswrapper[7776]: E0219 03:05:23.807487 7776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 19 03:05:23.810356 master-0 kubenswrapper[7776]: E0219 03:05:23.807542 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit podName:309ccdea-4eb5-4fcd-957f-1fb992fdef25 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:25.807526059 +0000 UTC m=+32.147210577 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit") pod "apiserver-546884889b-hv7vs" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25") : configmap "audit-0" not found Feb 19 03:05:23.846155 master-0 kubenswrapper[7776]: I0219 03:05:23.846102 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f812767-d78d-494a-a167-ca7de3af6a0b" path="/var/lib/kubelet/pods/4f812767-d78d-494a-a167-ca7de3af6a0b/volumes" Feb 19 03:05:23.846516 master-0 kubenswrapper[7776]: I0219 03:05:23.846485 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550c53dc-6bb0-49af-adec-0fe197343434" path="/var/lib/kubelet/pods/550c53dc-6bb0-49af-adec-0fe197343434/volumes" Feb 19 03:05:23.909015 master-0 kubenswrapper[7776]: I0219 03:05:23.908966 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:23.909236 master-0 kubenswrapper[7776]: E0219 03:05:23.909114 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:23.909236 master-0 kubenswrapper[7776]: E0219 03:05:23.909180 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca podName:17c6b469-2a89-439f-93a7-7cda9b524426 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:24.909160564 +0000 UTC m=+31.248845082 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca") pod "route-controller-manager-67f784c959-vwd2m" (UID: "17c6b469-2a89-439f-93a7-7cda9b524426") : configmap "client-ca" not found Feb 19 03:05:24.834165 master-0 kubenswrapper[7776]: I0219 03:05:24.833786 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-546884889b-hv7vs"] Feb 19 03:05:24.834910 master-0 kubenswrapper[7776]: E0219 03:05:24.834623 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-546884889b-hv7vs" podUID="309ccdea-4eb5-4fcd-957f-1fb992fdef25" Feb 19 03:05:24.923240 master-0 kubenswrapper[7776]: I0219 03:05:24.923184 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:24.923467 master-0 kubenswrapper[7776]: E0219 03:05:24.923334 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:24.923467 master-0 kubenswrapper[7776]: E0219 03:05:24.923405 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca podName:17c6b469-2a89-439f-93a7-7cda9b524426 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:05:26.923385695 +0000 UTC m=+33.263070213 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca") pod "route-controller-manager-67f784c959-vwd2m" (UID: "17c6b469-2a89-439f-93a7-7cda9b524426") : configmap "client-ca" not found Feb 19 03:05:25.174483 master-0 kubenswrapper[7776]: I0219 03:05:25.173002 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:25.195321 master-0 kubenswrapper[7776]: I0219 03:05:25.195277 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:25.217658 master-0 kubenswrapper[7776]: I0219 03:05:25.217615 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 19 03:05:25.231784 master-0 kubenswrapper[7776]: W0219 03:05:25.231488 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaba1213d_8a7d_4b99_857f_b66578cc2bec.slice/crio-1c0ee9ea7613e543246e347d2032c6c3b7f0ce179d5a2a853d69dd4c46853647 WatchSource:0}: Error finding container 1c0ee9ea7613e543246e347d2032c6c3b7f0ce179d5a2a853d69dd4c46853647: Status 404 returned error can't find the container with id 1c0ee9ea7613e543246e347d2032c6c3b7f0ce179d5a2a853d69dd4c46853647 Feb 19 03:05:25.247999 master-0 kubenswrapper[7776]: I0219 03:05:25.247955 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 19 03:05:25.264208 master-0 kubenswrapper[7776]: W0219 03:05:25.263589 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2561caa0_5f79_496e_8fa7_a9692dca20be.slice/crio-d175ae5ada68becfd99d3a7dbdac8119e2b0cc096867b19b4c6fd448c8d63692 WatchSource:0}: Error finding container d175ae5ada68becfd99d3a7dbdac8119e2b0cc096867b19b4c6fd448c8d63692: Status 404 returned error can't find the container with id d175ae5ada68becfd99d3a7dbdac8119e2b0cc096867b19b4c6fd448c8d63692 Feb 19 03:05:25.332805 master-0 kubenswrapper[7776]: I0219 03:05:25.332761 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-client\") pod \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " Feb 19 03:05:25.332921 master-0 kubenswrapper[7776]: I0219 03:05:25.332817 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-serving-ca\") pod \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " Feb 19 03:05:25.332921 master-0 kubenswrapper[7776]: I0219 03:05:25.332848 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-trusted-ca-bundle\") pod \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " Feb 19 03:05:25.332921 master-0 kubenswrapper[7776]: I0219 03:05:25.332871 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-node-pullsecrets\") pod 
\"309ccdea-4eb5-4fcd-957f-1fb992fdef25\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " Feb 19 03:05:25.332921 master-0 kubenswrapper[7776]: I0219 03:05:25.332919 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-encryption-config\") pod \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " Feb 19 03:05:25.333044 master-0 kubenswrapper[7776]: I0219 03:05:25.332937 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit-dir\") pod \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " Feb 19 03:05:25.333044 master-0 kubenswrapper[7776]: I0219 03:05:25.332959 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blcjh\" (UniqueName: \"kubernetes.io/projected/309ccdea-4eb5-4fcd-957f-1fb992fdef25-kube-api-access-blcjh\") pod \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " Feb 19 03:05:25.333044 master-0 kubenswrapper[7776]: I0219 03:05:25.332983 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-serving-cert\") pod \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " Feb 19 03:05:25.333044 master-0 kubenswrapper[7776]: I0219 03:05:25.333006 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-image-import-ca\") pod \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " Feb 19 03:05:25.333044 master-0 kubenswrapper[7776]: I0219 03:05:25.333043 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-config\") pod \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " Feb 19 03:05:25.333420 master-0 kubenswrapper[7776]: I0219 03:05:25.333288 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "309ccdea-4eb5-4fcd-957f-1fb992fdef25" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:05:25.334172 master-0 kubenswrapper[7776]: I0219 03:05:25.333715 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "309ccdea-4eb5-4fcd-957f-1fb992fdef25" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:25.334172 master-0 kubenswrapper[7776]: I0219 03:05:25.333755 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-config" (OuterVolumeSpecName: "config") pod "309ccdea-4eb5-4fcd-957f-1fb992fdef25" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:25.334172 master-0 kubenswrapper[7776]: I0219 03:05:25.333777 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "309ccdea-4eb5-4fcd-957f-1fb992fdef25" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:05:25.334172 master-0 kubenswrapper[7776]: I0219 03:05:25.333935 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "309ccdea-4eb5-4fcd-957f-1fb992fdef25" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:25.336564 master-0 kubenswrapper[7776]: I0219 03:05:25.334935 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "309ccdea-4eb5-4fcd-957f-1fb992fdef25" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:25.343471 master-0 kubenswrapper[7776]: I0219 03:05:25.340492 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "309ccdea-4eb5-4fcd-957f-1fb992fdef25" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:05:25.343471 master-0 kubenswrapper[7776]: I0219 03:05:25.340522 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "309ccdea-4eb5-4fcd-957f-1fb992fdef25" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:05:25.343471 master-0 kubenswrapper[7776]: I0219 03:05:25.340507 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309ccdea-4eb5-4fcd-957f-1fb992fdef25-kube-api-access-blcjh" (OuterVolumeSpecName: "kube-api-access-blcjh") pod "309ccdea-4eb5-4fcd-957f-1fb992fdef25" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25"). InnerVolumeSpecName "kube-api-access-blcjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:05:25.345656 master-0 kubenswrapper[7776]: I0219 03:05:25.345580 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "309ccdea-4eb5-4fcd-957f-1fb992fdef25" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:05:25.434149 master-0 kubenswrapper[7776]: I0219 03:05:25.433972 7776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-client\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:25.434149 master-0 kubenswrapper[7776]: I0219 03:05:25.434002 7776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:25.434149 master-0 kubenswrapper[7776]: I0219 03:05:25.434014 7776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:25.434312 master-0 kubenswrapper[7776]: I0219 03:05:25.434024 7776 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:25.434312 master-0 kubenswrapper[7776]: I0219 03:05:25.434184 7776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-encryption-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:25.434366 master-0 kubenswrapper[7776]: I0219 03:05:25.434248 7776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:25.434366 master-0 kubenswrapper[7776]: I0219 03:05:25.434333 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blcjh\" (UniqueName: \"kubernetes.io/projected/309ccdea-4eb5-4fcd-957f-1fb992fdef25-kube-api-access-blcjh\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:25.434417 master-0 kubenswrapper[7776]: I0219 03:05:25.434345 7776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/309ccdea-4eb5-4fcd-957f-1fb992fdef25-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:25.434417 master-0 kubenswrapper[7776]: I0219 03:05:25.434386 7776 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-image-import-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:25.434417 master-0 kubenswrapper[7776]: I0219 03:05:25.434398 7776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:25.439213 master-0 kubenswrapper[7776]: I0219 03:05:25.438690 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd"] Feb 19 03:05:25.498077 master-0 kubenswrapper[7776]: I0219 03:05:25.497511 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-4jl4c"] Feb 19 03:05:25.500597 master-0 kubenswrapper[7776]: I0219 03:05:25.499435 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.606173 master-0 kubenswrapper[7776]: I0219 03:05:25.604050 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-767fdf786d-rhhcr"] Feb 19 03:05:25.606173 master-0 kubenswrapper[7776]: I0219 03:05:25.604751 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.607707 master-0 kubenswrapper[7776]: I0219 03:05:25.606631 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 03:05:25.607707 master-0 kubenswrapper[7776]: I0219 03:05:25.606880 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 03:05:25.607828 master-0 kubenswrapper[7776]: I0219 03:05:25.607758 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 03:05:25.607896 master-0 kubenswrapper[7776]: I0219 03:05:25.607865 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 03:05:25.608116 master-0 kubenswrapper[7776]: I0219 03:05:25.608077 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 03:05:25.613555 master-0 kubenswrapper[7776]: I0219 03:05:25.613485 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-767fdf786d-rhhcr"] Feb 19 03:05:25.616735 master-0 kubenswrapper[7776]: I0219 03:05:25.615875 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 03:05:25.641970 master-0 kubenswrapper[7776]: I0219 03:05:25.641539 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.641970 master-0 kubenswrapper[7776]: I0219 03:05:25.641647 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-tmp\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.641970 master-0 kubenswrapper[7776]: I0219 03:05:25.641708 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-run\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.641970 master-0 kubenswrapper[7776]: I0219 03:05:25.641733 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pwp5\" (UniqueName: \"kubernetes.io/projected/78702d1c-b5ab-4e00-92da-cb2513a72024-kube-api-access-5pwp5\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.641970 master-0 kubenswrapper[7776]: I0219 
03:05:25.641801 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-systemd\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.641970 master-0 kubenswrapper[7776]: I0219 03:05:25.641826 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-modprobe-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.641970 master-0 kubenswrapper[7776]: I0219 03:05:25.641870 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-lib-modules\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.641970 master-0 kubenswrapper[7776]: I0219 03:05:25.641895 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-tuned\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.641970 master-0 kubenswrapper[7776]: I0219 03:05:25.641931 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-conf\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.641970 master-0 kubenswrapper[7776]: I0219 03:05:25.641948 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-sys\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.641970 master-0 kubenswrapper[7776]: I0219 03:05:25.641984 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-kubernetes\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.642568 master-0 kubenswrapper[7776]: I0219 03:05:25.642002 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-host\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.642568 master-0 kubenswrapper[7776]: I0219 03:05:25.642024 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysconfig\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " 
pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.642568 master-0 kubenswrapper[7776]: I0219 03:05:25.642044 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-var-lib-kubelet\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.742588 master-0 kubenswrapper[7776]: I0219 03:05:25.742525 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-kubernetes\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.742588 master-0 kubenswrapper[7776]: I0219 03:05:25.742575 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-host\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.742588 master-0 kubenswrapper[7776]: I0219 03:05:25.742599 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysconfig\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.742891 master-0 kubenswrapper[7776]: I0219 03:05:25.742619 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-var-lib-kubelet\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.742891 master-0 kubenswrapper[7776]: I0219 03:05:25.742656 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc23f57-1547-4351-a918-c0de8db211f4-serving-cert\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.742891 master-0 kubenswrapper[7776]: I0219 03:05:25.742681 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.742891 master-0 kubenswrapper[7776]: I0219 03:05:25.742701 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-tmp\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.742891 master-0 kubenswrapper[7776]: I0219 03:05:25.742721 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-config\") pod 
\"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.742891 master-0 kubenswrapper[7776]: I0219 03:05:25.742742 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-proxy-ca-bundles\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.742891 master-0 kubenswrapper[7776]: I0219 03:05:25.742790 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-run\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.742891 master-0 kubenswrapper[7776]: I0219 03:05:25.742816 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7sdk\" (UniqueName: \"kubernetes.io/projected/9bc23f57-1547-4351-a918-c0de8db211f4-kube-api-access-w7sdk\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.742891 master-0 kubenswrapper[7776]: I0219 03:05:25.742839 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pwp5\" (UniqueName: \"kubernetes.io/projected/78702d1c-b5ab-4e00-92da-cb2513a72024-kube-api-access-5pwp5\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.743235 master-0 kubenswrapper[7776]: I0219 03:05:25.742901 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-systemd\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.743235 master-0 kubenswrapper[7776]: I0219 03:05:25.742930 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-modprobe-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.743235 master-0 kubenswrapper[7776]: I0219 03:05:25.743107 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.743348 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-systemd\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.743495 7776 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-var-lib-kubelet\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.743556 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysconfig\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.743636 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-host\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.743760 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-lib-modules\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.743791 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-kubernetes\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.743788 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-run\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.743817 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.743863 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-tuned\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.743906 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-lib-modules\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.744016 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" 
(UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-conf\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.744047 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-sys\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.744164 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-sys\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.744298 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-conf\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.744355 master-0 kubenswrapper[7776]: I0219 03:05:25.744326 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-modprobe-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.748712 master-0 kubenswrapper[7776]: I0219 03:05:25.748227 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-tuned\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.748712 master-0 kubenswrapper[7776]: I0219 03:05:25.748656 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-tmp\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.763436 master-0 kubenswrapper[7776]: I0219 03:05:25.763350 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pwp5\" (UniqueName: \"kubernetes.io/projected/78702d1c-b5ab-4e00-92da-cb2513a72024-kube-api-access-5pwp5\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.844563 master-0 kubenswrapper[7776]: I0219 03:05:25.844440 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc23f57-1547-4351-a918-c0de8db211f4-serving-cert\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.844563 master-0 kubenswrapper[7776]: I0219 03:05:25.844495 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-config\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.844563 master-0 kubenswrapper[7776]: I0219 03:05:25.844521 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-proxy-ca-bundles\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.844563 master-0 kubenswrapper[7776]: I0219 03:05:25.844567 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7sdk\" (UniqueName: \"kubernetes.io/projected/9bc23f57-1547-4351-a918-c0de8db211f4-kube-api-access-w7sdk\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.845708 master-0 kubenswrapper[7776]: I0219 03:05:25.844658 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.845708 master-0 kubenswrapper[7776]: I0219 03:05:25.844683 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit\") pod \"apiserver-546884889b-hv7vs\" (UID: \"309ccdea-4eb5-4fcd-957f-1fb992fdef25\") " pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:25.845708 master-0 kubenswrapper[7776]: E0219 03:05:25.844837 7776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 19 03:05:25.845708 master-0 kubenswrapper[7776]: E0219 03:05:25.844900 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit podName:309ccdea-4eb5-4fcd-957f-1fb992fdef25 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:29.844882097 +0000 UTC m=+36.184566615 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit") pod "apiserver-546884889b-hv7vs" (UID: "309ccdea-4eb5-4fcd-957f-1fb992fdef25") : configmap "audit-0" not found Feb 19 03:05:25.845708 master-0 kubenswrapper[7776]: E0219 03:05:25.845421 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:25.845708 master-0 kubenswrapper[7776]: E0219 03:05:25.845459 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca podName:9bc23f57-1547-4351-a918-c0de8db211f4 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:26.345448712 +0000 UTC m=+32.685133230 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca") pod "controller-manager-767fdf786d-rhhcr" (UID: "9bc23f57-1547-4351-a918-c0de8db211f4") : configmap "client-ca" not found Feb 19 03:05:25.846508 master-0 kubenswrapper[7776]: I0219 03:05:25.846473 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-config\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.846862 master-0 kubenswrapper[7776]: I0219 03:05:25.846828 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-proxy-ca-bundles\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.848660 master-0 kubenswrapper[7776]: I0219 03:05:25.848299 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:05:25.848801 master-0 kubenswrapper[7776]: I0219 03:05:25.848731 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc23f57-1547-4351-a918-c0de8db211f4-serving-cert\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.869885 master-0 kubenswrapper[7776]: I0219 03:05:25.868628 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7sdk\" (UniqueName: \"kubernetes.io/projected/9bc23f57-1547-4351-a918-c0de8db211f4-kube-api-access-w7sdk\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:25.869885 master-0 kubenswrapper[7776]: W0219 03:05:25.868938 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78702d1c_b5ab_4e00_92da_cb2513a72024.slice/crio-20d3855d38e34ada920a94808cad883dc5067cbbdc0504ac33f3632a296c9e89 WatchSource:0}: Error finding container 20d3855d38e34ada920a94808cad883dc5067cbbdc0504ac33f3632a296c9e89: Status 404 returned error can't find the container with id 20d3855d38e34ada920a94808cad883dc5067cbbdc0504ac33f3632a296c9e89 Feb 19 03:05:26.011448 master-0 kubenswrapper[7776]: I0219 03:05:26.010282 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-clndn"] Feb 19 03:05:26.011448 master-0 kubenswrapper[7776]: I0219 03:05:26.011102 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.013672 master-0 kubenswrapper[7776]: I0219 03:05:26.013491 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 03:05:26.013843 master-0 kubenswrapper[7776]: I0219 03:05:26.013818 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 03:05:26.014468 master-0 kubenswrapper[7776]: I0219 03:05:26.014334 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 03:05:26.014875 master-0 kubenswrapper[7776]: I0219 03:05:26.014847 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 03:05:26.024774 master-0 kubenswrapper[7776]: I0219 03:05:26.022115 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-clndn"] Feb 19 03:05:26.150919 master-0 kubenswrapper[7776]: I0219 03:05:26.150723 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c58162-a0ba-40f4-8894-38f17dc2fb6d-config-volume\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.150919 master-0 kubenswrapper[7776]: I0219 03:05:26.150773 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75c58162-a0ba-40f4-8894-38f17dc2fb6d-metrics-tls\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.150919 master-0 kubenswrapper[7776]: I0219 03:05:26.150848 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkz72\" (UniqueName: \"kubernetes.io/projected/75c58162-a0ba-40f4-8894-38f17dc2fb6d-kube-api-access-gkz72\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.178823 master-0 kubenswrapper[7776]: I0219 03:05:26.178756 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" event={"ID":"a59746bb-7d76-4fd7-8323-5b92be63afb9","Type":"ContainerStarted","Data":"757e9a0ca78b5c9be8e7d397d2406ec6f854bb73586e71bec0887198a2e450f2"} Feb 19 03:05:26.179955 master-0 kubenswrapper[7776]: I0219 03:05:26.179918 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" event={"ID":"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5","Type":"ContainerStarted","Data":"df34220d8bbf9f2c919dd6d16618c4c0582bf76fef0068e3cc67cfd63cba32a9"} Feb 19 03:05:26.181971 master-0 kubenswrapper[7776]: I0219 03:05:26.181458 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerStarted","Data":"37b14f21eea6ae068c6ab319848a3075fde8aacf4bdcecd0e6ca1c48ebc11e9a"} Feb 19 03:05:26.185703 master-0 kubenswrapper[7776]: I0219 03:05:26.185657 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" 
event={"ID":"2561caa0-5f79-496e-8fa7-a9692dca20be","Type":"ContainerStarted","Data":"32be5e8b93330dd04d423a1444137191a10ffbf90c7167cd6baa0a0571479517"} Feb 19 03:05:26.185703 master-0 kubenswrapper[7776]: I0219 03:05:26.185699 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"2561caa0-5f79-496e-8fa7-a9692dca20be","Type":"ContainerStarted","Data":"d175ae5ada68becfd99d3a7dbdac8119e2b0cc096867b19b4c6fd448c8d63692"} Feb 19 03:05:26.188372 master-0 kubenswrapper[7776]: I0219 03:05:26.188309 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" event={"ID":"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae","Type":"ContainerStarted","Data":"e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0"} Feb 19 03:05:26.192003 master-0 kubenswrapper[7776]: I0219 03:05:26.191957 7776 generic.go:334] "Generic (PLEG): container finished" podID="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" containerID="0e04df6594fd15b397e2045ad7c4f04fede6b3d68bd63913e230a0f01929b6ec" exitCode=0 Feb 19 03:05:26.192133 master-0 kubenswrapper[7776]: I0219 03:05:26.192044 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" event={"ID":"1f9e07d3-d157-4948-84a6-04b8aa7eef4c","Type":"ContainerDied","Data":"0e04df6594fd15b397e2045ad7c4f04fede6b3d68bd63913e230a0f01929b6ec"} Feb 19 03:05:26.195363 master-0 kubenswrapper[7776]: I0219 03:05:26.195320 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerStarted","Data":"a224ce5061b47ffc9528e375f91b4111ff8e66e3009eb1eba7e9319a13736164"} Feb 19 03:05:26.195408 master-0 kubenswrapper[7776]: I0219 03:05:26.195375 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerStarted","Data":"48896fb51d13a46ede8e9679a55d5198adfa5eeb4a91ae305507c9b4bf39a65b"} Feb 19 03:05:26.205785 master-0 kubenswrapper[7776]: I0219 03:05:26.205742 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" event={"ID":"78702d1c-b5ab-4e00-92da-cb2513a72024","Type":"ContainerStarted","Data":"b954e4b0bcc6520a165df3784fabd767976b8fba88d9569ede15fc3b6ec1488a"} Feb 19 03:05:26.205891 master-0 kubenswrapper[7776]: I0219 03:05:26.205817 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" event={"ID":"78702d1c-b5ab-4e00-92da-cb2513a72024","Type":"ContainerStarted","Data":"20d3855d38e34ada920a94808cad883dc5067cbbdc0504ac33f3632a296c9e89"} Feb 19 03:05:26.211403 master-0 kubenswrapper[7776]: I0219 03:05:26.211364 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"aba1213d-8a7d-4b99-857f-b66578cc2bec","Type":"ContainerStarted","Data":"107af6c10e19bdb483e86e7f412dc740d6234ce2a56a37c6f92ca7b36c798080"} Feb 19 03:05:26.211403 master-0 kubenswrapper[7776]: I0219 03:05:26.211397 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"aba1213d-8a7d-4b99-857f-b66578cc2bec","Type":"ContainerStarted","Data":"1c0ee9ea7613e543246e347d2032c6c3b7f0ce179d5a2a853d69dd4c46853647"} Feb 19 03:05:26.213997 master-0 kubenswrapper[7776]: 
I0219 03:05:26.213957 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g" event={"ID":"c4ed0c32-c13f-4c72-b83f-9af19b2950a3","Type":"ContainerStarted","Data":"0246584e27bc3e37b3abcf34220145290038b872f46071bff23efe3b68b73897"} Feb 19 03:05:26.215793 master-0 kubenswrapper[7776]: I0219 03:05:26.215756 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-546884889b-hv7vs" Feb 19 03:05:26.216320 master-0 kubenswrapper[7776]: I0219 03:05:26.216276 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" event={"ID":"67f4e002-26fb-41e3-abdb-f4928b6c561f","Type":"ContainerStarted","Data":"25b5d28b97d8d34f26d4092f77c3be0b5102e5ffaf9e7b3ef70f9f9511cdd4d7"} Feb 19 03:05:26.216392 master-0 kubenswrapper[7776]: I0219 03:05:26.216335 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" event={"ID":"67f4e002-26fb-41e3-abdb-f4928b6c561f","Type":"ContainerStarted","Data":"6fbf3a194474b2240ccf55034690f4c16d3f45d01747fe77a9b82da7f898a733"} Feb 19 03:05:26.253390 master-0 kubenswrapper[7776]: I0219 03:05:26.253237 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g" podStartSLOduration=12.194404176 podStartE2EDuration="16.253214568s" podCreationTimestamp="2026-02-19 03:05:10 +0000 UTC" firstStartedPulling="2026-02-19 03:05:10.766001733 +0000 UTC m=+17.105686251" lastFinishedPulling="2026-02-19 03:05:14.824812125 +0000 UTC m=+21.164496643" observedRunningTime="2026-02-19 03:05:26.251578424 +0000 UTC m=+32.591262942" watchObservedRunningTime="2026-02-19 03:05:26.253214568 +0000 UTC m=+32.592899086" Feb 19 03:05:26.256414 master-0 kubenswrapper[7776]: I0219 03:05:26.256333 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkz72\" (UniqueName: \"kubernetes.io/projected/75c58162-a0ba-40f4-8894-38f17dc2fb6d-kube-api-access-gkz72\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.257496 master-0 kubenswrapper[7776]: I0219 03:05:26.257457 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c58162-a0ba-40f4-8894-38f17dc2fb6d-config-volume\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.257496 master-0 kubenswrapper[7776]: I0219 03:05:26.257497 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75c58162-a0ba-40f4-8894-38f17dc2fb6d-metrics-tls\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.258172 master-0 kubenswrapper[7776]: E0219 03:05:26.258132 7776 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Feb 19 03:05:26.258263 master-0 kubenswrapper[7776]: E0219 03:05:26.258201 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75c58162-a0ba-40f4-8894-38f17dc2fb6d-metrics-tls podName:75c58162-a0ba-40f4-8894-38f17dc2fb6d nodeName:}" failed. 
No retries permitted until 2026-02-19 03:05:26.758183181 +0000 UTC m=+33.097867889 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/75c58162-a0ba-40f4-8894-38f17dc2fb6d-metrics-tls") pod "dns-default-clndn" (UID: "75c58162-a0ba-40f4-8894-38f17dc2fb6d") : secret "dns-default-metrics-tls" not found Feb 19 03:05:26.258593 master-0 kubenswrapper[7776]: I0219 03:05:26.258562 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c58162-a0ba-40f4-8894-38f17dc2fb6d-config-volume\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.315977 master-0 kubenswrapper[7776]: I0219 03:05:26.315911 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkz72\" (UniqueName: \"kubernetes.io/projected/75c58162-a0ba-40f4-8894-38f17dc2fb6d-kube-api-access-gkz72\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.328794 master-0 kubenswrapper[7776]: I0219 03:05:26.328629 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=3.32860391 podStartE2EDuration="3.32860391s" podCreationTimestamp="2026-02-19 03:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:05:26.298330698 +0000 UTC m=+32.638015236" watchObservedRunningTime="2026-02-19 03:05:26.32860391 +0000 UTC m=+32.668288428" Feb 19 03:05:26.362426 master-0 kubenswrapper[7776]: I0219 03:05:26.360062 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:26.362426 master-0 kubenswrapper[7776]: E0219 03:05:26.361531 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:26.362426 master-0 kubenswrapper[7776]: E0219 03:05:26.361585 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca podName:9bc23f57-1547-4351-a918-c0de8db211f4 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:27.361567584 +0000 UTC m=+33.701252102 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca") pod "controller-manager-767fdf786d-rhhcr" (UID: "9bc23f57-1547-4351-a918-c0de8db211f4") : configmap "client-ca" not found Feb 19 03:05:26.408143 master-0 kubenswrapper[7776]: I0219 03:05:26.407672 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=6.40765191 podStartE2EDuration="6.40765191s" podCreationTimestamp="2026-02-19 03:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:05:26.406218522 +0000 UTC m=+32.745903040" watchObservedRunningTime="2026-02-19 03:05:26.40765191 +0000 UTC m=+32.747336428" Feb 19 03:05:26.436157 master-0 kubenswrapper[7776]: I0219 03:05:26.436069 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" podStartSLOduration=1.436051342 podStartE2EDuration="1.436051342s" podCreationTimestamp="2026-02-19 03:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:05:26.435830296 +0000 UTC m=+32.775514824" watchObservedRunningTime="2026-02-19 03:05:26.436051342 +0000 UTC m=+32.775735860" Feb 19 03:05:26.505071 master-0 kubenswrapper[7776]: I0219 03:05:26.505036 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4qvfn"] Feb 19 03:05:26.508690 master-0 kubenswrapper[7776]: I0219 03:05:26.508656 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:05:26.562337 master-0 kubenswrapper[7776]: I0219 03:05:26.561821 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msl9t\" (UniqueName: \"kubernetes.io/projected/67624ad2-babb-4b0e-9599-99325c286b22-kube-api-access-msl9t\") pod \"node-resolver-4qvfn\" (UID: \"67624ad2-babb-4b0e-9599-99325c286b22\") " pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:05:26.562337 master-0 kubenswrapper[7776]: I0219 03:05:26.561937 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/67624ad2-babb-4b0e-9599-99325c286b22-hosts-file\") pod \"node-resolver-4qvfn\" (UID: \"67624ad2-babb-4b0e-9599-99325c286b22\") " pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:05:26.566151 master-0 kubenswrapper[7776]: I0219 03:05:26.566081 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-957b9456f-f5s8c"] Feb 19 03:05:26.567269 master-0 kubenswrapper[7776]: I0219 03:05:26.567221 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-546884889b-hv7vs"] Feb 19 03:05:26.567416 master-0 kubenswrapper[7776]: I0219 03:05:26.567393 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.569457 master-0 kubenswrapper[7776]: I0219 03:05:26.569413 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 03:05:26.573142 master-0 kubenswrapper[7776]: I0219 03:05:26.573108 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 03:05:26.573332 master-0 kubenswrapper[7776]: I0219 03:05:26.573287 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 03:05:26.573463 master-0 kubenswrapper[7776]: I0219 03:05:26.573433 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 03:05:26.573595 master-0 kubenswrapper[7776]: I0219 03:05:26.573556 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 03:05:26.573899 master-0 kubenswrapper[7776]: I0219 03:05:26.573866 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 03:05:26.574119 master-0 kubenswrapper[7776]: I0219 03:05:26.574090 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 03:05:26.574196 master-0 kubenswrapper[7776]: I0219 03:05:26.574167 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 03:05:26.574196 master-0 kubenswrapper[7776]: I0219 03:05:26.574175 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 03:05:26.584637 master-0 kubenswrapper[7776]: I0219 03:05:26.583219 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-957b9456f-f5s8c"] Feb 19 03:05:26.591672 master-0 kubenswrapper[7776]: I0219 03:05:26.591548 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 03:05:26.604421 master-0 kubenswrapper[7776]: I0219 03:05:26.604348 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-546884889b-hv7vs"] Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663244 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/67624ad2-babb-4b0e-9599-99325c286b22-hosts-file\") pod \"node-resolver-4qvfn\" (UID: \"67624ad2-babb-4b0e-9599-99325c286b22\") " pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663333 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-audit\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663371 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 
03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663399 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-node-pullsecrets\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663423 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-serving-ca\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663445 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-serving-cert\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663465 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-encryption-config\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663493 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-image-import-ca\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663521 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663555 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663603 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-client\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663627 7776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-msl9t\" (UniqueName: \"kubernetes.io/projected/67624ad2-babb-4b0e-9599-99325c286b22-kube-api-access-msl9t\") pod \"node-resolver-4qvfn\" (UID: \"67624ad2-babb-4b0e-9599-99325c286b22\") " pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663661 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-trusted-ca-bundle\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663689 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663716 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663738 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-config\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.663769 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-audit-dir\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.665336 master-0 kubenswrapper[7776]: I0219 03:05:26.665011 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/67624ad2-babb-4b0e-9599-99325c286b22-hosts-file\") pod \"node-resolver-4qvfn\" (UID: \"67624ad2-babb-4b0e-9599-99325c286b22\") " pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:05:26.670895 master-0 kubenswrapper[7776]: I0219 03:05:26.669673 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-894cz\" (UniqueName: \"kubernetes.io/projected/c569676a-51dd-418c-87a5-719c18fe4c95-kube-api-access-894cz\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.670895 master-0 kubenswrapper[7776]: I0219 03:05:26.669923 7776 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/309ccdea-4eb5-4fcd-957f-1fb992fdef25-audit\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:26.687277 master-0 kubenswrapper[7776]: 
I0219 03:05:26.686355 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:05:26.687277 master-0 kubenswrapper[7776]: I0219 03:05:26.686470 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:05:26.687277 master-0 kubenswrapper[7776]: I0219 03:05:26.686531 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:05:26.687277 master-0 kubenswrapper[7776]: I0219 03:05:26.686587 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:05:26.691274 master-0 kubenswrapper[7776]: I0219 03:05:26.688891 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:05:26.691274 master-0 kubenswrapper[7776]: I0219 03:05:26.689326 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msl9t\" (UniqueName: \"kubernetes.io/projected/67624ad2-babb-4b0e-9599-99325c286b22-kube-api-access-msl9t\") pod \"node-resolver-4qvfn\" (UID: \"67624ad2-babb-4b0e-9599-99325c286b22\") " pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:05:26.771354 master-0 kubenswrapper[7776]: I0219 03:05:26.771267 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:05:26.771354 master-0 kubenswrapper[7776]: I0219 03:05:26.771357 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:05:26.771593 master-0 kubenswrapper[7776]: I0219 03:05:26.771378 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-client\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.771593 master-0 kubenswrapper[7776]: I0219 03:05:26.771397 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75c58162-a0ba-40f4-8894-38f17dc2fb6d-metrics-tls\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.771593 master-0 kubenswrapper[7776]: I0219 03:05:26.771413 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-trusted-ca-bundle\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.771593 master-0 kubenswrapper[7776]: I0219 03:05:26.771435 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-config\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.771593 master-0 kubenswrapper[7776]: I0219 03:05:26.771459 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-audit-dir\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.771593 master-0 kubenswrapper[7776]: I0219 03:05:26.771484 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-894cz\" (UniqueName: \"kubernetes.io/projected/c569676a-51dd-418c-87a5-719c18fe4c95-kube-api-access-894cz\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.771593 master-0 kubenswrapper[7776]: I0219 03:05:26.771512 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-audit\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.771593 master-0 kubenswrapper[7776]: I0219 03:05:26.771529 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-node-pullsecrets\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.771593 master-0 kubenswrapper[7776]: I0219 03:05:26.771547 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-serving-ca\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.771593 master-0 kubenswrapper[7776]: I0219 03:05:26.771566 7776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-serving-cert\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.771593 master-0 kubenswrapper[7776]: I0219 03:05:26.771584 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-encryption-config\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.772157 master-0 kubenswrapper[7776]: I0219 03:05:26.771610 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-image-import-ca\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.772507 master-0 kubenswrapper[7776]: I0219 03:05:26.772474 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-image-import-ca\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.773201 master-0 kubenswrapper[7776]: I0219 03:05:26.773162 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-node-pullsecrets\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.773414 master-0 kubenswrapper[7776]: I0219 03:05:26.773382 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-audit-dir\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.773876 master-0 kubenswrapper[7776]: I0219 03:05:26.773843 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-audit\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.773953 master-0 kubenswrapper[7776]: I0219 03:05:26.773840 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-serving-ca\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.774155 master-0 kubenswrapper[7776]: I0219 03:05:26.774105 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-config\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.775629 master-0 kubenswrapper[7776]: I0219 03:05:26.775603 7776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:05:26.775822 master-0 kubenswrapper[7776]: I0219 03:05:26.775792 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-trusted-ca-bundle\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.776351 master-0 kubenswrapper[7776]: I0219 03:05:26.776323 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-client\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.777153 master-0 kubenswrapper[7776]: I0219 03:05:26.777127 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-encryption-config\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.778263 master-0 kubenswrapper[7776]: I0219 03:05:26.778212 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75c58162-a0ba-40f4-8894-38f17dc2fb6d-metrics-tls\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.778994 master-0 kubenswrapper[7776]: I0219 03:05:26.778951 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-q8pfv\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:05:26.779090 master-0 kubenswrapper[7776]: I0219 03:05:26.779054 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-serving-cert\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.795939 master-0 kubenswrapper[7776]: I0219 03:05:26.795681 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-894cz\" (UniqueName: \"kubernetes.io/projected/c569676a-51dd-418c-87a5-719c18fe4c95-kube-api-access-894cz\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.859285 master-0 kubenswrapper[7776]: I0219 03:05:26.856122 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:05:26.873979 master-0 kubenswrapper[7776]: W0219 03:05:26.873927 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67624ad2_babb_4b0e_9599_99325c286b22.slice/crio-d896e197c19c3e11f13f6c1320c71d5019f5e0db2f0e2d3534740ed3aaee68c7 WatchSource:0}: Error finding container d896e197c19c3e11f13f6c1320c71d5019f5e0db2f0e2d3534740ed3aaee68c7: Status 404 returned error can't find the container with id d896e197c19c3e11f13f6c1320c71d5019f5e0db2f0e2d3534740ed3aaee68c7 Feb 19 03:05:26.897941 master-0 kubenswrapper[7776]: I0219 03:05:26.897657 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:05:26.900093 master-0 kubenswrapper[7776]: I0219 03:05:26.900050 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:05:26.901140 master-0 kubenswrapper[7776]: I0219 03:05:26.901104 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:05:26.908432 master-0 kubenswrapper[7776]: I0219 03:05:26.908385 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:05:26.909357 master-0 kubenswrapper[7776]: I0219 03:05:26.909322 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:05:26.909628 master-0 kubenswrapper[7776]: I0219 03:05:26.909479 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:05:26.910476 master-0 kubenswrapper[7776]: I0219 03:05:26.910450 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:05:26.911553 master-0 kubenswrapper[7776]: I0219 03:05:26.911522 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:26.933336 master-0 kubenswrapper[7776]: I0219 03:05:26.931467 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-clndn" Feb 19 03:05:26.974495 master-0 kubenswrapper[7776]: I0219 03:05:26.974311 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:26.974495 master-0 kubenswrapper[7776]: E0219 03:05:26.974447 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:26.974718 master-0 kubenswrapper[7776]: E0219 03:05:26.974526 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca podName:17c6b469-2a89-439f-93a7-7cda9b524426 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:05:30.974507282 +0000 UTC m=+37.314191800 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca") pod "route-controller-manager-67f784c959-vwd2m" (UID: "17c6b469-2a89-439f-93a7-7cda9b524426") : configmap "client-ca" not found Feb 19 03:05:27.160142 master-0 kubenswrapper[7776]: I0219 03:05:27.158382 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk"] Feb 19 03:05:27.224418 master-0 kubenswrapper[7776]: I0219 03:05:27.224069 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4qvfn" event={"ID":"67624ad2-babb-4b0e-9599-99325c286b22","Type":"ContainerStarted","Data":"d896e197c19c3e11f13f6c1320c71d5019f5e0db2f0e2d3534740ed3aaee68c7"} Feb 19 03:05:27.231136 master-0 kubenswrapper[7776]: I0219 03:05:27.229653 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" event={"ID":"c50a2aec-7ed0-4114-8b25-19579fe931cb","Type":"ContainerStarted","Data":"eba23b843b06a31c02fbe2e5edf93d18b7d3dc9682c0e2415a4ef18d5dc94d9a"} Feb 19 03:05:27.241181 master-0 kubenswrapper[7776]: I0219 03:05:27.241108 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq"] Feb 19 03:05:27.265223 master-0 kubenswrapper[7776]: I0219 03:05:27.264797 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-957b9456f-f5s8c"] Feb 19 03:05:27.387071 master-0 kubenswrapper[7776]: I0219 03:05:27.387002 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:27.387289 master-0 kubenswrapper[7776]: E0219 03:05:27.387164 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:27.387289 master-0 kubenswrapper[7776]: E0219 03:05:27.387219 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca podName:9bc23f57-1547-4351-a918-c0de8db211f4 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:29.38720281 +0000 UTC m=+35.726887338 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca") pod "controller-manager-767fdf786d-rhhcr" (UID: "9bc23f57-1547-4351-a918-c0de8db211f4") : configmap "client-ca" not found Feb 19 03:05:27.532054 master-0 kubenswrapper[7776]: I0219 03:05:27.531977 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hspwc"] Feb 19 03:05:27.565171 master-0 kubenswrapper[7776]: I0219 03:05:27.565117 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t"] Feb 19 03:05:27.667969 master-0 kubenswrapper[7776]: I0219 03:05:27.664516 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8"] Feb 19 03:05:27.749625 master-0 kubenswrapper[7776]: I0219 03:05:27.745688 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-clndn"] Feb 19 03:05:27.749625 master-0 kubenswrapper[7776]: I0219 03:05:27.745737 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-xxdh5"] Feb 19 03:05:27.796263 master-0 kubenswrapper[7776]: I0219 03:05:27.796196 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv"] Feb 19 03:05:27.847895 master-0 kubenswrapper[7776]: I0219 03:05:27.847847 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="309ccdea-4eb5-4fcd-957f-1fb992fdef25" path="/var/lib/kubelet/pods/309ccdea-4eb5-4fcd-957f-1fb992fdef25/volumes" Feb 19 03:05:28.106153 master-0 kubenswrapper[7776]: W0219 03:05:28.104967 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc569676a_51dd_418c_87a5_719c18fe4c95.slice/crio-b7d96d2b840dcb05cea8fd6a137b484ba6109d3fc00e9d95d9aeb1de00554068 WatchSource:0}: Error finding container b7d96d2b840dcb05cea8fd6a137b484ba6109d3fc00e9d95d9aeb1de00554068: Status 404 returned error can't find the container with id b7d96d2b840dcb05cea8fd6a137b484ba6109d3fc00e9d95d9aeb1de00554068 Feb 19 03:05:28.233135 master-0 kubenswrapper[7776]: I0219 03:05:28.232907 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-clndn" event={"ID":"75c58162-a0ba-40f4-8894-38f17dc2fb6d","Type":"ContainerStarted","Data":"5e2c5960bcaff754ff10d5f0bd77876e25896beaba961d7afb484f9be25cfe20"} Feb 19 03:05:28.234025 master-0 kubenswrapper[7776]: I0219 03:05:28.233996 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4qvfn" event={"ID":"67624ad2-babb-4b0e-9599-99325c286b22","Type":"ContainerStarted","Data":"5356b8b2d0652ae62e5f44b1d8aa47c347362803079f6fdede215f745397de5e"} Feb 19 03:05:28.245014 master-0 kubenswrapper[7776]: I0219 03:05:28.244933 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" event={"ID":"c569676a-51dd-418c-87a5-719c18fe4c95","Type":"ContainerStarted","Data":"b7d96d2b840dcb05cea8fd6a137b484ba6109d3fc00e9d95d9aeb1de00554068"} Feb 19 03:05:28.246146 master-0 kubenswrapper[7776]: I0219 03:05:28.246094 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" 
event={"ID":"58c6f5a2-c0a8-4636-a057-cedbe0151579","Type":"ContainerStarted","Data":"a97067053251ed5fdadac8ab4f77e00bdc2868f3bbfa6100d974d3529e1d0acb"} Feb 19 03:05:28.247129 master-0 kubenswrapper[7776]: I0219 03:05:28.247083 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" event={"ID":"947faa21-7f67-4c7e-abb0-443432f38961","Type":"ContainerStarted","Data":"92da4e2c41faed23ae9536b6cf450fa8714135f86f0f23ad77b009821e031601"} Feb 19 03:05:28.248251 master-0 kubenswrapper[7776]: I0219 03:05:28.248201 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hspwc" event={"ID":"6ae2cbe0-aa0a-4f26-994b-660fb962d995","Type":"ContainerStarted","Data":"1760667bc1ae6e6c0373f38881f9d459051273b2be065a4f5aefaa03ffb1434b"} Feb 19 03:05:28.249395 master-0 kubenswrapper[7776]: I0219 03:05:28.249352 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" event={"ID":"98ac5423-b231-44e5-9545-424d635ed6ee","Type":"ContainerStarted","Data":"544bd972dc91af9025a1eea69f42f5c5c42aa6d851bb5566dd4ab554ab92d7e1"} Feb 19 03:05:28.250470 master-0 kubenswrapper[7776]: I0219 03:05:28.250409 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" event={"ID":"80c48134-cb22-4cf9-b076-ce39af2f4113","Type":"ContainerStarted","Data":"5f264243f9d37a0085ae08d6a429bf7d068aa6d2f402d16789c1248a2996b55b"} Feb 19 03:05:28.252078 master-0 kubenswrapper[7776]: I0219 03:05:28.251986 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" event={"ID":"b283bd8e-3339-4701-ae3c-f009e498b7d4","Type":"ContainerStarted","Data":"489ce9d0a231fe744fe2609ac45c676f913cd59253cbd1654f71c13c5ab7ceef"} Feb 19 03:05:28.282482 master-0 kubenswrapper[7776]: I0219 03:05:28.282406 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4qvfn" podStartSLOduration=2.282385368 podStartE2EDuration="2.282385368s" podCreationTimestamp="2026-02-19 03:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:05:28.279433299 +0000 UTC m=+34.619117817" watchObservedRunningTime="2026-02-19 03:05:28.282385368 +0000 UTC m=+34.622069926" Feb 19 03:05:28.602091 master-0 kubenswrapper[7776]: I0219 03:05:28.602031 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 19 03:05:28.602345 master-0 kubenswrapper[7776]: I0219 03:05:28.602286 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="aba1213d-8a7d-4b99-857f-b66578cc2bec" containerName="installer" containerID="cri-o://107af6c10e19bdb483e86e7f412dc740d6234ce2a56a37c6f92ca7b36c798080" gracePeriod=30 Feb 19 03:05:29.260629 master-0 kubenswrapper[7776]: I0219 03:05:29.260584 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerStarted","Data":"9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb"} Feb 19 03:05:29.264501 master-0 kubenswrapper[7776]: I0219 03:05:29.264444 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" event={"ID":"98ac5423-b231-44e5-9545-424d635ed6ee","Type":"ContainerStarted","Data":"3f993ca8915a297c51b2fc6e7cff3a27c10dcc04c963feba0fcb6153f8ccb1bc"} Feb 19 03:05:29.304147 master-0 kubenswrapper[7776]: I0219 03:05:29.304030 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podStartSLOduration=12.519042576 podStartE2EDuration="15.304009626s" podCreationTimestamp="2026-02-19 03:05:14 +0000 UTC" firstStartedPulling="2026-02-19 03:05:25.468426581 +0000 UTC m=+31.808111099" lastFinishedPulling="2026-02-19 03:05:28.253393631 +0000 UTC m=+34.593078149" observedRunningTime="2026-02-19 03:05:29.30302628 +0000 UTC m=+35.642710818" watchObservedRunningTime="2026-02-19 03:05:29.304009626 +0000 UTC m=+35.643694154" Feb 19 03:05:29.411119 master-0 kubenswrapper[7776]: I0219 03:05:29.410975 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:29.411302 master-0 kubenswrapper[7776]: E0219 03:05:29.411149 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:29.411302 master-0 kubenswrapper[7776]: E0219 03:05:29.411220 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca podName:9bc23f57-1547-4351-a918-c0de8db211f4 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:33.411202131 +0000 UTC m=+39.750886649 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca") pod "controller-manager-767fdf786d-rhhcr" (UID: "9bc23f57-1547-4351-a918-c0de8db211f4") : configmap "client-ca" not found Feb 19 03:05:29.429631 master-0 kubenswrapper[7776]: I0219 03:05:29.429530 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:05:30.290430 master-0 kubenswrapper[7776]: I0219 03:05:30.289313 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" event={"ID":"1f9e07d3-d157-4948-84a6-04b8aa7eef4c","Type":"ContainerStarted","Data":"b96163b548b39e7368771cc78a7cc93ce0deae1acb7e2556bf2a0d6f06a4eac4"} Feb 19 03:05:31.057039 master-0 kubenswrapper[7776]: I0219 03:05:31.056577 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:31.057039 master-0 kubenswrapper[7776]: E0219 03:05:31.056779 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:31.057039 master-0 kubenswrapper[7776]: E0219 03:05:31.056961 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca podName:17c6b469-2a89-439f-93a7-7cda9b524426 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:39.056924167 +0000 UTC m=+45.396608725 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca") pod "route-controller-manager-67f784c959-vwd2m" (UID: "17c6b469-2a89-439f-93a7-7cda9b524426") : configmap "client-ca" not found Feb 19 03:05:31.231328 master-0 kubenswrapper[7776]: I0219 03:05:31.223236 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 19 03:05:31.231328 master-0 kubenswrapper[7776]: I0219 03:05:31.223774 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:31.268180 master-0 kubenswrapper[7776]: I0219 03:05:31.259186 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb470156-b3c4-4ca6-80fd-30ea108aa201-kube-api-access\") pod \"installer-2-master-0\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:31.268180 master-0 kubenswrapper[7776]: I0219 03:05:31.259267 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-var-lock\") pod \"installer-2-master-0\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:31.268180 master-0 kubenswrapper[7776]: I0219 03:05:31.259301 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:31.319343 master-0 kubenswrapper[7776]: I0219 03:05:31.314151 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 19 03:05:31.361161 master-0 kubenswrapper[7776]: I0219 03:05:31.361111 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb470156-b3c4-4ca6-80fd-30ea108aa201-kube-api-access\") pod \"installer-2-master-0\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:31.362268 master-0 kubenswrapper[7776]: I0219 03:05:31.362220 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-var-lock\") pod \"installer-2-master-0\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:31.362357 master-0 kubenswrapper[7776]: I0219 03:05:31.362297 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:31.362407 master-0 kubenswrapper[7776]: I0219 03:05:31.362383 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:31.362454 master-0 kubenswrapper[7776]: I0219 03:05:31.362422 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-var-lock\") pod \"installer-2-master-0\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:31.403048 master-0 kubenswrapper[7776]: I0219 03:05:31.402958 7776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb470156-b3c4-4ca6-80fd-30ea108aa201-kube-api-access\") pod \"installer-2-master-0\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:31.662676 master-0 kubenswrapper[7776]: I0219 03:05:31.660764 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:33.498313 master-0 kubenswrapper[7776]: I0219 03:05:33.498235 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:33.504875 master-0 kubenswrapper[7776]: E0219 03:05:33.498391 7776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:33.504875 master-0 kubenswrapper[7776]: E0219 03:05:33.498491 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca podName:9bc23f57-1547-4351-a918-c0de8db211f4 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:41.498467166 +0000 UTC m=+47.838151734 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca") pod "controller-manager-767fdf786d-rhhcr" (UID: "9bc23f57-1547-4351-a918-c0de8db211f4") : configmap "client-ca" not found Feb 19 03:05:37.760898 master-0 kubenswrapper[7776]: I0219 03:05:37.760341 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 19 03:05:37.762490 master-0 kubenswrapper[7776]: I0219 03:05:37.762080 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:05:37.764174 master-0 kubenswrapper[7776]: I0219 03:05:37.764137 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 03:05:37.775824 master-0 kubenswrapper[7776]: I0219 03:05:37.775780 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 19 03:05:37.906462 master-0 kubenswrapper[7776]: I0219 03:05:37.906422 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:05:37.907522 master-0 kubenswrapper[7776]: I0219 03:05:37.907504 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e66ac991-af58-490b-8909-e518d301e1b8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:05:37.908007 master-0 kubenswrapper[7776]: I0219 03:05:37.907954 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-var-lock\") pod \"installer-1-master-0\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:05:38.009736 master-0 kubenswrapper[7776]: I0219 03:05:38.009661 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e66ac991-af58-490b-8909-e518d301e1b8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:05:38.011430 master-0 kubenswrapper[7776]: I0219 03:05:38.010043 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-var-lock\") pod \"installer-1-master-0\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:05:38.011430 master-0 kubenswrapper[7776]: I0219 03:05:38.010187 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:05:38.011430 master-0 kubenswrapper[7776]: I0219 03:05:38.010196 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-var-lock\") pod \"installer-1-master-0\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:05:38.011430 master-0 kubenswrapper[7776]: I0219 03:05:38.010344 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:05:38.024271 master-0 kubenswrapper[7776]: I0219 03:05:38.024198 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e66ac991-af58-490b-8909-e518d301e1b8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:05:38.088226 master-0 kubenswrapper[7776]: I0219 03:05:38.088189 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:05:38.803789 master-0 kubenswrapper[7776]: I0219 03:05:38.803725 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 19 03:05:38.910705 master-0 kubenswrapper[7776]: I0219 03:05:38.910661 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 19 03:05:38.980155 master-0 kubenswrapper[7776]: I0219 03:05:38.980097 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 19 03:05:39.000511 master-0 kubenswrapper[7776]: W0219 03:05:39.000473 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode66ac991_af58_490b_8909_e518d301e1b8.slice/crio-efc170236b8ec5f3ee868c2762adf6da88d245375479a3b8c7878aa313bac925 WatchSource:0}: Error finding container efc170236b8ec5f3ee868c2762adf6da88d245375479a3b8c7878aa313bac925: Status 404 returned error can't find the container with id efc170236b8ec5f3ee868c2762adf6da88d245375479a3b8c7878aa313bac925 Feb 19 03:05:39.129520 master-0 kubenswrapper[7776]: I0219 03:05:39.129466 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca\") pod \"route-controller-manager-67f784c959-vwd2m\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:39.129695 master-0 kubenswrapper[7776]: E0219 03:05:39.129624 7776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 19 03:05:39.129766 master-0 kubenswrapper[7776]: E0219 03:05:39.129712 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca podName:17c6b469-2a89-439f-93a7-7cda9b524426 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:55.129694778 +0000 UTC m=+61.469379296 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca") pod "route-controller-manager-67f784c959-vwd2m" (UID: "17c6b469-2a89-439f-93a7-7cda9b524426") : configmap "client-ca" not found Feb 19 03:05:39.359363 master-0 kubenswrapper[7776]: I0219 03:05:39.356722 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" event={"ID":"c50a2aec-7ed0-4114-8b25-19579fe931cb","Type":"ContainerStarted","Data":"9f0ee83fbf5f1f9171fac786023336518556aa5748c2cbbd3325b405f40722d4"} Feb 19 03:05:39.359363 master-0 kubenswrapper[7776]: I0219 03:05:39.357634 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:05:39.361073 master-0 kubenswrapper[7776]: I0219 03:05:39.361034 7776 generic.go:334] "Generic (PLEG): container finished" podID="c569676a-51dd-418c-87a5-719c18fe4c95" containerID="c4d5c5762019844ac155bf741ff3d970597445e33d552d25778d865bebcb593a" exitCode=0 Feb 19 03:05:39.361136 master-0 kubenswrapper[7776]: I0219 03:05:39.361120 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" event={"ID":"c569676a-51dd-418c-87a5-719c18fe4c95","Type":"ContainerDied","Data":"c4d5c5762019844ac155bf741ff3d970597445e33d552d25778d865bebcb593a"} Feb 19 03:05:39.366685 master-0 kubenswrapper[7776]: I0219 03:05:39.366634 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:05:39.376336 master-0 kubenswrapper[7776]: I0219 03:05:39.374688 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" event={"ID":"58c6f5a2-c0a8-4636-a057-cedbe0151579","Type":"ContainerStarted","Data":"a2bdec17dc1089972433ebc1bc1c16d0f4ac7fa020f8058705381c276b86bced"} Feb 19 03:05:39.376336 master-0 kubenswrapper[7776]: I0219 03:05:39.374990 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:05:39.377735 master-0 kubenswrapper[7776]: I0219 03:05:39.377453 7776 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-xxdh5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 19 03:05:39.377735 master-0 kubenswrapper[7776]: I0219 03:05:39.377494 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"e66ac991-af58-490b-8909-e518d301e1b8","Type":"ContainerStarted","Data":"efc170236b8ec5f3ee868c2762adf6da88d245375479a3b8c7878aa313bac925"} Feb 19 03:05:39.377832 master-0 kubenswrapper[7776]: I0219 03:05:39.377607 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 19 03:05:39.381414 master-0 kubenswrapper[7776]: I0219 03:05:39.379474 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" event={"ID":"98ac5423-b231-44e5-9545-424d635ed6ee","Type":"ContainerStarted","Data":"fe4faf0d4ffb2ebe11ee7bb3c950e62a3098091a94099dff9022e530a80d494a"} Feb 19 03:05:39.381414 master-0 kubenswrapper[7776]: I0219 03:05:39.379573 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:05:39.389430 master-0 kubenswrapper[7776]: I0219 03:05:39.388610 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-clndn" event={"ID":"75c58162-a0ba-40f4-8894-38f17dc2fb6d","Type":"ContainerStarted","Data":"ca0b654b81a88bd0f716457f43165ff3868b8e1b078b1a7599f65109582be6fb"} Feb 19 03:05:39.389430 master-0 kubenswrapper[7776]: I0219 03:05:39.388646 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-clndn" event={"ID":"75c58162-a0ba-40f4-8894-38f17dc2fb6d","Type":"ContainerStarted","Data":"d763eef3f7735628beca09cc25a26e28082dce77f0a56f5a01e84938b9a8024f"} Feb 19 03:05:39.389430 master-0 kubenswrapper[7776]: I0219 03:05:39.389116 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-clndn" Feb 19 03:05:39.404338 master-0 kubenswrapper[7776]: I0219 03:05:39.402732 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" event={"ID":"80c48134-cb22-4cf9-b076-ce39af2f4113","Type":"ContainerStarted","Data":"cb63071c739ae541747950fdf6104d0cc4579cf39a32959ec4d2af1dc1d83348"} Feb 19 03:05:39.405628 master-0 kubenswrapper[7776]: I0219 03:05:39.404906 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"bb470156-b3c4-4ca6-80fd-30ea108aa201","Type":"ContainerStarted","Data":"d2e4fef767078540b44df24a8ab6723e1aefdd2eba60bc337a79b24d00e59e4c"} Feb 19 03:05:39.414095 master-0 kubenswrapper[7776]: I0219 03:05:39.414063 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" event={"ID":"947faa21-7f67-4c7e-abb0-443432f38961","Type":"ContainerStarted","Data":"e9143bad584a01b8037b50bf9ae64c2f6ebd210d85d1e8c74f1189744a7dd59c"} Feb 19 03:05:39.414200 master-0 kubenswrapper[7776]: I0219 03:05:39.414109 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" event={"ID":"947faa21-7f67-4c7e-abb0-443432f38961","Type":"ContainerStarted","Data":"7779bc9360a96d18f167a4e3e0b6db49a68f34d021af87222f6e2c102a74d376"} Feb 19 03:05:39.422409 master-0 kubenswrapper[7776]: I0219 03:05:39.422363 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hspwc" event={"ID":"6ae2cbe0-aa0a-4f26-994b-660fb962d995","Type":"ContainerStarted","Data":"4fcc491199ddeccc6ca442b8b6f06f03e8d45cb9a19ee7ae08555472f3e7dff9"} Feb 19 03:05:39.425075 master-0 kubenswrapper[7776]: I0219 03:05:39.424375 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" event={"ID":"b283bd8e-3339-4701-ae3c-f009e498b7d4","Type":"ContainerStarted","Data":"f117f68c424c44136addc4f41232b28970f81ece8b3106c93e04b13e16c4a0d4"} Feb 19 03:05:39.425178 master-0 kubenswrapper[7776]: I0219 03:05:39.425130 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:05:39.434841 master-0 kubenswrapper[7776]: I0219 03:05:39.434274 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:05:39.536448 master-0 kubenswrapper[7776]: I0219 03:05:39.536290 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-clndn" podStartSLOduration=4.199313408 podStartE2EDuration="14.536267221s" podCreationTimestamp="2026-02-19 03:05:25 +0000 UTC" firstStartedPulling="2026-02-19 03:05:28.119214082 +0000 UTC m=+34.458898600" lastFinishedPulling="2026-02-19 03:05:38.456167895 +0000 UTC m=+44.795852413" observedRunningTime="2026-02-19 03:05:39.535157101 +0000 UTC m=+45.874841619" watchObservedRunningTime="2026-02-19 03:05:39.536267221 +0000 UTC m=+45.875951739" Feb 19 03:05:39.660787 master-0 kubenswrapper[7776]: I0219 03:05:39.660746 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lwt4t"] Feb 19 03:05:39.661850 master-0 kubenswrapper[7776]: I0219 03:05:39.661800 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:05:39.673274 master-0 kubenswrapper[7776]: I0219 03:05:39.673216 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwt4t"] Feb 19 03:05:39.749383 master-0 kubenswrapper[7776]: I0219 03:05:39.749344 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t78l7\" (UniqueName: \"kubernetes.io/projected/76050135-a8a1-4968-9a00-2d251c17f8b8-kube-api-access-t78l7\") pod \"redhat-marketplace-lwt4t\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:05:39.749532 master-0 kubenswrapper[7776]: I0219 03:05:39.749434 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-utilities\") pod \"redhat-marketplace-lwt4t\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:05:39.749532 master-0 kubenswrapper[7776]: I0219 03:05:39.749465 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-catalog-content\") pod \"redhat-marketplace-lwt4t\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:05:39.850550 master-0 kubenswrapper[7776]: I0219 03:05:39.850451 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-utilities\") pod \"redhat-marketplace-lwt4t\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:05:39.850550 master-0 kubenswrapper[7776]: I0219 03:05:39.850488 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-catalog-content\") pod \"redhat-marketplace-lwt4t\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " 
pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:05:39.850550 master-0 kubenswrapper[7776]: I0219 03:05:39.850535 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t78l7\" (UniqueName: \"kubernetes.io/projected/76050135-a8a1-4968-9a00-2d251c17f8b8-kube-api-access-t78l7\") pod \"redhat-marketplace-lwt4t\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:05:39.851296 master-0 kubenswrapper[7776]: I0219 03:05:39.851239 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-utilities\") pod \"redhat-marketplace-lwt4t\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:05:39.851579 master-0 kubenswrapper[7776]: I0219 03:05:39.851535 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-catalog-content\") pod \"redhat-marketplace-lwt4t\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:05:39.869185 master-0 kubenswrapper[7776]: I0219 03:05:39.869144 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t78l7\" (UniqueName: \"kubernetes.io/projected/76050135-a8a1-4968-9a00-2d251c17f8b8-kube-api-access-t78l7\") pod \"redhat-marketplace-lwt4t\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:05:39.982811 master-0 kubenswrapper[7776]: I0219 03:05:39.982740 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:05:40.165315 master-0 kubenswrapper[7776]: I0219 03:05:40.165226 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt"] Feb 19 03:05:40.165631 master-0 kubenswrapper[7776]: I0219 03:05:40.165493 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" podUID="bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" containerName="cluster-version-operator" containerID="cri-o://e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0" gracePeriod=130 Feb 19 03:05:40.319875 master-0 kubenswrapper[7776]: I0219 03:05:40.319818 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:05:40.344314 master-0 kubenswrapper[7776]: I0219 03:05:40.344247 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-spsn7"] Feb 19 03:05:40.344498 master-0 kubenswrapper[7776]: E0219 03:05:40.344459 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" containerName="cluster-version-operator" Feb 19 03:05:40.344498 master-0 kubenswrapper[7776]: I0219 03:05:40.344473 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" containerName="cluster-version-operator" Feb 19 03:05:40.344582 master-0 kubenswrapper[7776]: I0219 03:05:40.344543 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" containerName="cluster-version-operator" Feb 19 03:05:40.345119 master-0 kubenswrapper[7776]: I0219 03:05:40.345099 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:05:40.367123 master-0 kubenswrapper[7776]: I0219 03:05:40.367064 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spsn7"] Feb 19 03:05:40.403462 master-0 kubenswrapper[7776]: I0219 03:05:40.403282 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwt4t"] Feb 19 03:05:40.413546 master-0 kubenswrapper[7776]: W0219 03:05:40.413487 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76050135_a8a1_4968_9a00_2d251c17f8b8.slice/crio-bc89aa84bf98aec00234431ef6e9f2fe11646021dc54fa12055d336972870e19 WatchSource:0}: Error finding container bc89aa84bf98aec00234431ef6e9f2fe11646021dc54fa12055d336972870e19: Status 404 returned error can't find the container with id bc89aa84bf98aec00234431ef6e9f2fe11646021dc54fa12055d336972870e19 Feb 19 03:05:40.437563 master-0 kubenswrapper[7776]: I0219 03:05:40.437327 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" event={"ID":"c569676a-51dd-418c-87a5-719c18fe4c95","Type":"ContainerStarted","Data":"4c03f7455f507d57f45851d2605d3c46058835f7f518ea0870a881bb1aed65f0"} Feb 19 03:05:40.437563 master-0 kubenswrapper[7776]: I0219 03:05:40.437409 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" event={"ID":"c569676a-51dd-418c-87a5-719c18fe4c95","Type":"ContainerStarted","Data":"478dbac58f814cfe00ac17a5b04c759de43594021a95bc27d10465967f520a11"} Feb 19 03:05:40.439968 master-0 kubenswrapper[7776]: I0219 03:05:40.439923 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwt4t" event={"ID":"76050135-a8a1-4968-9a00-2d251c17f8b8","Type":"ContainerStarted","Data":"bc89aa84bf98aec00234431ef6e9f2fe11646021dc54fa12055d336972870e19"} Feb 19 03:05:40.445847 master-0 kubenswrapper[7776]: I0219 03:05:40.445759 7776 generic.go:334] "Generic (PLEG): container finished" podID="bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" containerID="e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0" exitCode=0 Feb 19 03:05:40.446036 master-0 kubenswrapper[7776]: I0219 03:05:40.445890 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" Feb 19 03:05:40.446173 master-0 kubenswrapper[7776]: I0219 03:05:40.446109 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" event={"ID":"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae","Type":"ContainerDied","Data":"e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0"} Feb 19 03:05:40.446173 master-0 kubenswrapper[7776]: I0219 03:05:40.446170 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt" event={"ID":"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae","Type":"ContainerDied","Data":"63768e6c1bd9c6cb1c52062b8b293c9b2621a3ba99ae016ced4ba8c856a3dbff"} Feb 19 03:05:40.446303 master-0 kubenswrapper[7776]: I0219 03:05:40.446191 7776 scope.go:117] "RemoveContainer" containerID="e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0" Feb 19 03:05:40.448398 master-0 kubenswrapper[7776]: I0219 03:05:40.448329 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"bb470156-b3c4-4ca6-80fd-30ea108aa201","Type":"ContainerStarted","Data":"b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615"} Feb 19 03:05:40.448558 master-0 kubenswrapper[7776]: I0219 03:05:40.448527 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="bb470156-b3c4-4ca6-80fd-30ea108aa201" containerName="installer" containerID="cri-o://b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615" gracePeriod=30 Feb 19 03:05:40.456997 master-0 kubenswrapper[7776]: I0219 03:05:40.456934 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hspwc" event={"ID":"6ae2cbe0-aa0a-4f26-994b-660fb962d995","Type":"ContainerStarted","Data":"16998c2d78463568ae786084e8f921919f3133952ee1850f3f3d945a386c8d6b"} Feb 19 03:05:40.460384 master-0 kubenswrapper[7776]: I0219 03:05:40.459412 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-ssl-certs\") pod \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " Feb 19 03:05:40.460384 master-0 kubenswrapper[7776]: I0219 03:05:40.459470 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-kube-api-access\") pod \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " Feb 19 03:05:40.460384 master-0 kubenswrapper[7776]: I0219 03:05:40.459523 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-cvo-updatepayloads\") pod \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " Feb 19 03:05:40.460384 master-0 kubenswrapper[7776]: I0219 03:05:40.459559 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae"). InnerVolumeSpecName "etc-ssl-certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:05:40.460384 master-0 kubenswrapper[7776]: I0219 03:05:40.459572 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-service-ca\") pod \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " Feb 19 03:05:40.460384 master-0 kubenswrapper[7776]: I0219 03:05:40.459941 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") pod \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\" (UID: \"bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae\") " Feb 19 03:05:40.460384 master-0 kubenswrapper[7776]: I0219 03:05:40.459930 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:05:40.460384 master-0 kubenswrapper[7776]: I0219 03:05:40.460164 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-catalog-content\") pod \"redhat-operators-spsn7\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:05:40.460384 master-0 kubenswrapper[7776]: I0219 03:05:40.460241 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-utilities\") pod \"redhat-operators-spsn7\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:05:40.460384 master-0 kubenswrapper[7776]: I0219 03:05:40.460302 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwxjf\" (UniqueName: \"kubernetes.io/projected/543aef8d-960a-42c9-b1fd-954e2d024002-kube-api-access-lwxjf\") pod \"redhat-operators-spsn7\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:05:40.460384 master-0 kubenswrapper[7776]: I0219 03:05:40.460343 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-service-ca" (OuterVolumeSpecName: "service-ca") pod "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:40.460891 master-0 kubenswrapper[7776]: I0219 03:05:40.460472 7776 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:40.460891 master-0 kubenswrapper[7776]: I0219 03:05:40.460491 7776 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:40.460891 master-0 kubenswrapper[7776]: I0219 03:05:40.460504 7776 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:40.475496 master-0 kubenswrapper[7776]: I0219 03:05:40.472602 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:05:40.475496 master-0 kubenswrapper[7776]: I0219 03:05:40.472841 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" (UID: "bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:05:40.481616 master-0 kubenswrapper[7776]: I0219 03:05:40.481577 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"e66ac991-af58-490b-8909-e518d301e1b8","Type":"ContainerStarted","Data":"3b56052892bbdb6a0a707a252be41dcb545d08cbba6bf07e4772ca254f1c641d"} Feb 19 03:05:40.484600 master-0 kubenswrapper[7776]: I0219 03:05:40.484581 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:05:40.491291 master-0 kubenswrapper[7776]: I0219 03:05:40.489232 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" podStartSLOduration=6.09836134 podStartE2EDuration="16.489215518s" podCreationTimestamp="2026-02-19 03:05:24 +0000 UTC" firstStartedPulling="2026-02-19 03:05:28.107071317 +0000 UTC m=+34.446755875" lastFinishedPulling="2026-02-19 03:05:38.497925535 +0000 UTC m=+44.837610053" observedRunningTime="2026-02-19 03:05:40.475773467 +0000 UTC m=+46.815457985" watchObservedRunningTime="2026-02-19 03:05:40.489215518 +0000 UTC m=+46.828900036" Feb 19 03:05:40.491291 master-0 kubenswrapper[7776]: I0219 03:05:40.490995 7776 scope.go:117] "RemoveContainer" containerID="e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0" Feb 19 03:05:40.491525 master-0 kubenswrapper[7776]: E0219 03:05:40.491505 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0\": container with ID starting with 
e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0 not found: ID does not exist" containerID="e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0" Feb 19 03:05:40.492907 master-0 kubenswrapper[7776]: I0219 03:05:40.491541 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0"} err="failed to get container status \"e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0\": rpc error: code = NotFound desc = could not find container \"e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0\": container with ID starting with e85e36fe1c79bb291cdd511750155bda4edc146f1b673669d76cf74446dd12e0 not found: ID does not exist" Feb 19 03:05:40.497185 master-0 kubenswrapper[7776]: I0219 03:05:40.497126 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q"] Feb 19 03:05:40.499151 master-0 kubenswrapper[7776]: I0219 03:05:40.498249 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.506158 master-0 kubenswrapper[7776]: I0219 03:05:40.502749 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 19 03:05:40.506158 master-0 kubenswrapper[7776]: I0219 03:05:40.503550 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 19 03:05:40.506158 master-0 kubenswrapper[7776]: I0219 03:05:40.503714 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 19 03:05:40.506158 master-0 kubenswrapper[7776]: I0219 03:05:40.506010 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q"] Feb 19 03:05:40.509325 master-0 kubenswrapper[7776]: I0219 03:05:40.508107 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=9.508089304 podStartE2EDuration="9.508089304s" podCreationTimestamp="2026-02-19 03:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:05:40.506720257 +0000 UTC m=+46.846404785" watchObservedRunningTime="2026-02-19 03:05:40.508089304 +0000 UTC m=+46.847773822" Feb 19 03:05:40.511099 master-0 kubenswrapper[7776]: I0219 03:05:40.511046 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 19 03:05:40.532498 master-0 kubenswrapper[7776]: I0219 03:05:40.532417 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=3.5323963259999998 podStartE2EDuration="3.532396326s" podCreationTimestamp="2026-02-19 03:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:05:40.531767809 +0000 UTC m=+46.871452337" watchObservedRunningTime="2026-02-19 03:05:40.532396326 +0000 UTC m=+46.872080844" Feb 19 03:05:40.567701 master-0 kubenswrapper[7776]: I0219 03:05:40.564434 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-catalog-content\") pod \"redhat-operators-spsn7\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:05:40.567701 master-0 kubenswrapper[7776]: I0219 03:05:40.564578 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-utilities\") pod \"redhat-operators-spsn7\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:05:40.567701 master-0 kubenswrapper[7776]: I0219 03:05:40.564683 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwxjf\" (UniqueName: \"kubernetes.io/projected/543aef8d-960a-42c9-b1fd-954e2d024002-kube-api-access-lwxjf\") pod \"redhat-operators-spsn7\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:05:40.567701 master-0 kubenswrapper[7776]: I0219 03:05:40.564726 7776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:40.567701 master-0 kubenswrapper[7776]: I0219 03:05:40.564744 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:40.567701 master-0 kubenswrapper[7776]: I0219 03:05:40.566713 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-catalog-content\") pod \"redhat-operators-spsn7\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:05:40.582294 master-0 kubenswrapper[7776]: I0219 03:05:40.581359 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-utilities\") pod \"redhat-operators-spsn7\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:05:40.593149 master-0 kubenswrapper[7776]: I0219 03:05:40.592181 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwxjf\" (UniqueName: \"kubernetes.io/projected/543aef8d-960a-42c9-b1fd-954e2d024002-kube-api-access-lwxjf\") pod \"redhat-operators-spsn7\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:05:40.608599 master-0 kubenswrapper[7776]: I0219 03:05:40.608495 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q"] Feb 19 03:05:40.609153 master-0 kubenswrapper[7776]: I0219 03:05:40.609117 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q"] Feb 19 03:05:40.609216 master-0 kubenswrapper[7776]: I0219 03:05:40.609202 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.612628 master-0 kubenswrapper[7776]: I0219 03:05:40.612496 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 19 03:05:40.612911 master-0 kubenswrapper[7776]: I0219 03:05:40.612886 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 19 03:05:40.612978 master-0 kubenswrapper[7776]: I0219 03:05:40.612945 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 19 03:05:40.653815 master-0 kubenswrapper[7776]: I0219 03:05:40.652151 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-767fdf786d-rhhcr"] Feb 19 03:05:40.653815 master-0 kubenswrapper[7776]: E0219 03:05:40.652520 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" podUID="9bc23f57-1547-4351-a918-c0de8db211f4" Feb 19 03:05:40.660449 master-0 kubenswrapper[7776]: I0219 03:05:40.660419 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:05:40.666375 master-0 kubenswrapper[7776]: I0219 03:05:40.666285 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm2wm\" (UniqueName: \"kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-kube-api-access-lm2wm\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.666375 master-0 kubenswrapper[7776]: I0219 03:05:40.666332 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.666579 master-0 kubenswrapper[7776]: I0219 03:05:40.666382 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7012676e-f35d-46e5-83e8-a63172dd076e-cache\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.666579 master-0 kubenswrapper[7776]: I0219 03:05:40.666475 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.666579 master-0 kubenswrapper[7776]: I0219 03:05:40.666512 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: 
\"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.666579 master-0 kubenswrapper[7776]: I0219 03:05:40.666560 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/7012676e-f35d-46e5-83e8-a63172dd076e-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.756556 master-0 kubenswrapper[7776]: I0219 03:05:40.754956 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m"] Feb 19 03:05:40.757060 master-0 kubenswrapper[7776]: E0219 03:05:40.757029 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" podUID="17c6b469-2a89-439f-93a7-7cda9b524426" Feb 19 03:05:40.768338 master-0 kubenswrapper[7776]: I0219 03:05:40.768305 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dkxh\" (UniqueName: \"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-kube-api-access-9dkxh\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.768520 master-0 kubenswrapper[7776]: I0219 03:05:40.768500 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7012676e-f35d-46e5-83e8-a63172dd076e-cache\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.768654 master-0 kubenswrapper[7776]: I0219 03:05:40.768637 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.768767 master-0 kubenswrapper[7776]: I0219 03:05:40.768748 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.768887 master-0 kubenswrapper[7776]: I0219 03:05:40.768869 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.768986 master-0 kubenswrapper[7776]: I0219 03:05:40.768971 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/7012676e-f35d-46e5-83e8-a63172dd076e-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.769081 master-0 kubenswrapper[7776]: I0219 03:05:40.769064 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.769183 master-0 kubenswrapper[7776]: I0219 03:05:40.769167 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8f7d8fc8-c313-416f-b62b-b54db9944066-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.769311 master-0 kubenswrapper[7776]: I0219 03:05:40.769291 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.769423 master-0 kubenswrapper[7776]: I0219 03:05:40.769406 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm2wm\" (UniqueName: \"kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-kube-api-access-lm2wm\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.769585 master-0 kubenswrapper[7776]: I0219 03:05:40.769558 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.769845 master-0 kubenswrapper[7776]: I0219 03:05:40.769820 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.770587 master-0 kubenswrapper[7776]: I0219 03:05:40.770493 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7012676e-f35d-46e5-83e8-a63172dd076e-cache\") pod 
\"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.771135 master-0 kubenswrapper[7776]: I0219 03:05:40.771086 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.796531 master-0 kubenswrapper[7776]: I0219 03:05:40.796496 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/7012676e-f35d-46e5-83e8-a63172dd076e-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.805293 master-0 kubenswrapper[7776]: I0219 03:05:40.797943 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.806163 master-0 kubenswrapper[7776]: I0219 03:05:40.806131 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm2wm\" (UniqueName: \"kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-kube-api-access-lm2wm\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:40.870205 master-0 kubenswrapper[7776]: I0219 03:05:40.870155 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.870205 master-0 kubenswrapper[7776]: I0219 03:05:40.870204 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.870720 master-0 kubenswrapper[7776]: I0219 03:05:40.870226 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8f7d8fc8-c313-416f-b62b-b54db9944066-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.870720 master-0 kubenswrapper[7776]: I0219 03:05:40.870279 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.870720 master-0 kubenswrapper[7776]: I0219 03:05:40.870428 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.870720 master-0 kubenswrapper[7776]: I0219 03:05:40.870497 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dkxh\" (UniqueName: \"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-kube-api-access-9dkxh\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.870957 master-0 kubenswrapper[7776]: E0219 03:05:40.870915 7776 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap "operator-controller-trusted-ca-bundle" not found Feb 19 03:05:40.871015 master-0 kubenswrapper[7776]: E0219 03:05:40.870963 7776 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q: configmap "operator-controller-trusted-ca-bundle" not found Feb 19 03:05:40.871169 master-0 kubenswrapper[7776]: E0219 03:05:40.871147 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs podName:8f7d8fc8-c313-416f-b62b-b54db9944066 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:41.37111942 +0000 UTC m=+47.710803938 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs") pod "operator-controller-controller-manager-9cc7d7bb-s559q" (UID: "8f7d8fc8-c313-416f-b62b-b54db9944066") : configmap "operator-controller-trusted-ca-bundle" not found Feb 19 03:05:40.871690 master-0 kubenswrapper[7776]: I0219 03:05:40.871662 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8f7d8fc8-c313-416f-b62b-b54db9944066-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.872691 master-0 kubenswrapper[7776]: I0219 03:05:40.872661 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.873315 master-0 kubenswrapper[7776]: I0219 03:05:40.873288 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt"] Feb 19 03:05:40.876510 master-0 kubenswrapper[7776]: I0219 03:05:40.876458 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt"] Feb 19 03:05:40.902182 master-0 kubenswrapper[7776]: I0219 03:05:40.902090 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dkxh\" (UniqueName: \"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-kube-api-access-9dkxh\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:40.938089 master-0 kubenswrapper[7776]: I0219 03:05:40.937950 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-57476485-qjgq9"] Feb 19 03:05:40.952222 master-0 kubenswrapper[7776]: I0219 03:05:40.951994 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:40.955056 master-0 kubenswrapper[7776]: I0219 03:05:40.955001 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 03:05:40.955425 master-0 kubenswrapper[7776]: I0219 03:05:40.955383 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 03:05:40.955568 master-0 kubenswrapper[7776]: I0219 03:05:40.955535 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 03:05:40.967665 master-0 kubenswrapper[7776]: I0219 03:05:40.967617 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:41.010400 master-0 kubenswrapper[7776]: I0219 03:05:41.010366 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_bb470156-b3c4-4ca6-80fd-30ea108aa201/installer/0.log" Feb 19 03:05:41.010582 master-0 kubenswrapper[7776]: I0219 03:05:41.010425 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:41.074443 master-0 kubenswrapper[7776]: I0219 03:05:41.073752 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61abb34a-08f0-4438-9a89-c712b2048878-serving-cert\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.074443 master-0 kubenswrapper[7776]: I0219 03:05:41.073866 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.074443 master-0 kubenswrapper[7776]: I0219 03:05:41.073887 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61abb34a-08f0-4438-9a89-c712b2048878-service-ca\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.074443 master-0 kubenswrapper[7776]: I0219 03:05:41.073919 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-ssl-certs\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.074443 master-0 kubenswrapper[7776]: I0219 03:05:41.073941 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61abb34a-08f0-4438-9a89-c712b2048878-kube-api-access\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.130179 master-0 kubenswrapper[7776]: I0219 03:05:41.130135 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spsn7"] Feb 19 03:05:41.174827 master-0 kubenswrapper[7776]: I0219 03:05:41.174762 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb470156-b3c4-4ca6-80fd-30ea108aa201-kube-api-access\") pod \"bb470156-b3c4-4ca6-80fd-30ea108aa201\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " Feb 19 03:05:41.175033 master-0 kubenswrapper[7776]: I0219 03:05:41.174904 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-kubelet-dir\") pod \"bb470156-b3c4-4ca6-80fd-30ea108aa201\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " Feb 19 03:05:41.175033 master-0 kubenswrapper[7776]: I0219 03:05:41.174990 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bb470156-b3c4-4ca6-80fd-30ea108aa201" (UID: "bb470156-b3c4-4ca6-80fd-30ea108aa201"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:05:41.175131 master-0 kubenswrapper[7776]: I0219 03:05:41.175080 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-var-lock\") pod \"bb470156-b3c4-4ca6-80fd-30ea108aa201\" (UID: \"bb470156-b3c4-4ca6-80fd-30ea108aa201\") " Feb 19 03:05:41.175175 master-0 kubenswrapper[7776]: I0219 03:05:41.175159 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-var-lock" (OuterVolumeSpecName: "var-lock") pod "bb470156-b3c4-4ca6-80fd-30ea108aa201" (UID: "bb470156-b3c4-4ca6-80fd-30ea108aa201"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:05:41.175373 master-0 kubenswrapper[7776]: I0219 03:05:41.175341 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61abb34a-08f0-4438-9a89-c712b2048878-serving-cert\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.176025 master-0 kubenswrapper[7776]: I0219 03:05:41.175990 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.176094 master-0 kubenswrapper[7776]: I0219 03:05:41.176028 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61abb34a-08f0-4438-9a89-c712b2048878-service-ca\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.176153 master-0 kubenswrapper[7776]: I0219 03:05:41.176105 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-ssl-certs\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.176196 master-0 kubenswrapper[7776]: I0219 03:05:41.176149 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61abb34a-08f0-4438-9a89-c712b2048878-kube-api-access\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: 
\"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.176237 master-0 kubenswrapper[7776]: I0219 03:05:41.176217 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:41.176320 master-0 kubenswrapper[7776]: I0219 03:05:41.176235 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb470156-b3c4-4ca6-80fd-30ea108aa201-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:41.176924 master-0 kubenswrapper[7776]: I0219 03:05:41.176876 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-ssl-certs\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.177006 master-0 kubenswrapper[7776]: I0219 03:05:41.176936 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.177532 master-0 kubenswrapper[7776]: I0219 03:05:41.177485 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61abb34a-08f0-4438-9a89-c712b2048878-service-ca\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.186835 master-0 kubenswrapper[7776]: I0219 03:05:41.186777 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb470156-b3c4-4ca6-80fd-30ea108aa201-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bb470156-b3c4-4ca6-80fd-30ea108aa201" (UID: "bb470156-b3c4-4ca6-80fd-30ea108aa201"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:05:41.186948 master-0 kubenswrapper[7776]: I0219 03:05:41.186835 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61abb34a-08f0-4438-9a89-c712b2048878-serving-cert\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.193573 master-0 kubenswrapper[7776]: I0219 03:05:41.193305 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61abb34a-08f0-4438-9a89-c712b2048878-kube-api-access\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.277014 master-0 kubenswrapper[7776]: I0219 03:05:41.276956 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb470156-b3c4-4ca6-80fd-30ea108aa201-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:41.305932 master-0 kubenswrapper[7776]: I0219 03:05:41.305891 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:05:41.366629 master-0 kubenswrapper[7776]: I0219 03:05:41.366580 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk"] Feb 19 03:05:41.367195 master-0 kubenswrapper[7776]: E0219 03:05:41.367170 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb470156-b3c4-4ca6-80fd-30ea108aa201" containerName="installer" Feb 19 03:05:41.367360 master-0 kubenswrapper[7776]: I0219 03:05:41.367339 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb470156-b3c4-4ca6-80fd-30ea108aa201" containerName="installer" Feb 19 03:05:41.369968 master-0 kubenswrapper[7776]: I0219 03:05:41.369945 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb470156-b3c4-4ca6-80fd-30ea108aa201" containerName="installer" Feb 19 03:05:41.370914 master-0 kubenswrapper[7776]: I0219 03:05:41.370893 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.374443 master-0 kubenswrapper[7776]: I0219 03:05:41.373579 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 03:05:41.374443 master-0 kubenswrapper[7776]: I0219 03:05:41.373637 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 03:05:41.374443 master-0 kubenswrapper[7776]: I0219 03:05:41.373703 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 03:05:41.374443 master-0 kubenswrapper[7776]: I0219 03:05:41.374033 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 03:05:41.374443 master-0 kubenswrapper[7776]: I0219 03:05:41.374174 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 03:05:41.374443 master-0 kubenswrapper[7776]: I0219 03:05:41.374380 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 03:05:41.374895 master-0 kubenswrapper[7776]: I0219 03:05:41.374513 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 03:05:41.374895 master-0 kubenswrapper[7776]: I0219 03:05:41.374645 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 03:05:41.375081 master-0 kubenswrapper[7776]: I0219 03:05:41.375037 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q"] Feb 19 03:05:41.377714 master-0 kubenswrapper[7776]: I0219 03:05:41.377682 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:41.378134 master-0 kubenswrapper[7776]: E0219 03:05:41.377942 7776 projected.go:301] Couldn't get configMap payload openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap references non-existent config key: ca-bundle.crt Feb 19 03:05:41.378303 master-0 kubenswrapper[7776]: E0219 03:05:41.378281 7776 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q: configmap references non-existent config key: ca-bundle.crt Feb 19 03:05:41.378495 master-0 kubenswrapper[7776]: E0219 03:05:41.378474 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs podName:8f7d8fc8-c313-416f-b62b-b54db9944066 nodeName:}" failed. No retries permitted until 2026-02-19 03:05:42.378447276 +0000 UTC m=+48.718131814 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs") pod "operator-controller-controller-manager-9cc7d7bb-s559q" (UID: "8f7d8fc8-c313-416f-b62b-b54db9944066") : configmap references non-existent config key: ca-bundle.crt Feb 19 03:05:41.383680 master-0 kubenswrapper[7776]: I0219 03:05:41.383629 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk"] Feb 19 03:05:41.388651 master-0 kubenswrapper[7776]: W0219 03:05:41.388040 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7012676e_f35d_46e5_83e8_a63172dd076e.slice/crio-7e8e2788d3f71b91ae59e0572e5bd8a6d561d26dc7f9a0c7368468679564cddb WatchSource:0}: Error finding container 7e8e2788d3f71b91ae59e0572e5bd8a6d561d26dc7f9a0c7368468679564cddb: Status 404 returned error can't find the container with id 7e8e2788d3f71b91ae59e0572e5bd8a6d561d26dc7f9a0c7368468679564cddb Feb 19 03:05:41.478944 master-0 kubenswrapper[7776]: I0219 03:05:41.478831 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-policies\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.478944 master-0 kubenswrapper[7776]: I0219 03:05:41.478885 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-serving-cert\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.478944 master-0 kubenswrapper[7776]: I0219 03:05:41.478902 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-dir\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.478944 master-0 kubenswrapper[7776]: I0219 03:05:41.478926 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-client\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.478944 master-0 kubenswrapper[7776]: I0219 03:05:41.478957 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-serving-ca\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.479315 master-0 kubenswrapper[7776]: I0219 03:05:41.479028 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-encryption-config\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: 
\"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.479315 master-0 kubenswrapper[7776]: I0219 03:05:41.479060 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrz8r\" (UniqueName: \"kubernetes.io/projected/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-kube-api-access-rrz8r\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.479315 master-0 kubenswrapper[7776]: I0219 03:05:41.479087 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-trusted-ca-bundle\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.488173 master-0 kubenswrapper[7776]: I0219 03:05:41.488107 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_bb470156-b3c4-4ca6-80fd-30ea108aa201/installer/0.log" Feb 19 03:05:41.488373 master-0 kubenswrapper[7776]: I0219 03:05:41.488169 7776 generic.go:334] "Generic (PLEG): container finished" podID="bb470156-b3c4-4ca6-80fd-30ea108aa201" containerID="b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615" exitCode=1 Feb 19 03:05:41.488904 master-0 kubenswrapper[7776]: I0219 03:05:41.488857 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 19 03:05:41.489007 master-0 kubenswrapper[7776]: I0219 03:05:41.488827 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"bb470156-b3c4-4ca6-80fd-30ea108aa201","Type":"ContainerDied","Data":"b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615"} Feb 19 03:05:41.489007 master-0 kubenswrapper[7776]: I0219 03:05:41.488989 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"bb470156-b3c4-4ca6-80fd-30ea108aa201","Type":"ContainerDied","Data":"d2e4fef767078540b44df24a8ab6723e1aefdd2eba60bc337a79b24d00e59e4c"} Feb 19 03:05:41.489145 master-0 kubenswrapper[7776]: I0219 03:05:41.489020 7776 scope.go:117] "RemoveContainer" containerID="b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615" Feb 19 03:05:41.496565 master-0 kubenswrapper[7776]: I0219 03:05:41.496524 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" event={"ID":"61abb34a-08f0-4438-9a89-c712b2048878","Type":"ContainerStarted","Data":"0433548866cd3801c8b397fe3536ec33408d7af2a4a96c584b21e1d45a8f492e"} Feb 19 03:05:41.496653 master-0 kubenswrapper[7776]: I0219 03:05:41.496570 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" event={"ID":"61abb34a-08f0-4438-9a89-c712b2048878","Type":"ContainerStarted","Data":"d8b8861a29ec4294bd11b25781775394a6ac15d030424306c0b690edecc2b3b2"} Feb 19 03:05:41.498827 master-0 kubenswrapper[7776]: I0219 03:05:41.498766 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" 
event={"ID":"7012676e-f35d-46e5-83e8-a63172dd076e","Type":"ContainerStarted","Data":"7e8e2788d3f71b91ae59e0572e5bd8a6d561d26dc7f9a0c7368468679564cddb"} Feb 19 03:05:41.505777 master-0 kubenswrapper[7776]: I0219 03:05:41.505726 7776 generic.go:334] "Generic (PLEG): container finished" podID="543aef8d-960a-42c9-b1fd-954e2d024002" containerID="82448f06439f9a9b0f7eb645f89270cc41ab666e5f3da84a7cb3fe527c78ba9b" exitCode=0 Feb 19 03:05:41.505844 master-0 kubenswrapper[7776]: I0219 03:05:41.505809 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spsn7" event={"ID":"543aef8d-960a-42c9-b1fd-954e2d024002","Type":"ContainerDied","Data":"82448f06439f9a9b0f7eb645f89270cc41ab666e5f3da84a7cb3fe527c78ba9b"} Feb 19 03:05:41.505903 master-0 kubenswrapper[7776]: I0219 03:05:41.505846 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spsn7" event={"ID":"543aef8d-960a-42c9-b1fd-954e2d024002","Type":"ContainerStarted","Data":"b502d1e6d3dfc70af9bc93fe4e3abd4f51e92d96f25b5329cdda631631649d28"} Feb 19 03:05:41.508609 master-0 kubenswrapper[7776]: I0219 03:05:41.508572 7776 generic.go:334] "Generic (PLEG): container finished" podID="76050135-a8a1-4968-9a00-2d251c17f8b8" containerID="47f7612f33e0cf94efc85de328887bb9cd61c80cca5e28cf021feca142ca3510" exitCode=0 Feb 19 03:05:41.508717 master-0 kubenswrapper[7776]: I0219 03:05:41.508689 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwt4t" event={"ID":"76050135-a8a1-4968-9a00-2d251c17f8b8","Type":"ContainerDied","Data":"47f7612f33e0cf94efc85de328887bb9cd61c80cca5e28cf021feca142ca3510"} Feb 19 03:05:41.508948 master-0 kubenswrapper[7776]: I0219 03:05:41.508910 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:41.509121 master-0 kubenswrapper[7776]: I0219 03:05:41.509093 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:41.511643 master-0 kubenswrapper[7776]: I0219 03:05:41.511625 7776 scope.go:117] "RemoveContainer" containerID="b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615" Feb 19 03:05:41.512394 master-0 kubenswrapper[7776]: E0219 03:05:41.512343 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615\": container with ID starting with b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615 not found: ID does not exist" containerID="b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615" Feb 19 03:05:41.512469 master-0 kubenswrapper[7776]: I0219 03:05:41.512397 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615"} err="failed to get container status \"b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615\": rpc error: code = NotFound desc = could not find container \"b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615\": container with ID starting with b60f3d041d86feec298c13e7d36f284273ac90ab1302c89f71abd68569e13615 not found: ID does not exist" Feb 19 03:05:41.514080 master-0 kubenswrapper[7776]: I0219 03:05:41.514022 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" podStartSLOduration=1.5140067209999999 podStartE2EDuration="1.514006721s" podCreationTimestamp="2026-02-19 03:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:05:41.510449746 +0000 UTC m=+47.850134264" watchObservedRunningTime="2026-02-19 03:05:41.514006721 +0000 UTC m=+47.853691229" Feb 19 03:05:41.535879 master-0 kubenswrapper[7776]: I0219 03:05:41.535828 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:41.538007 master-0 kubenswrapper[7776]: I0219 03:05:41.537973 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:41.578007 master-0 kubenswrapper[7776]: I0219 03:05:41.577976 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 19 03:05:41.578723 master-0 kubenswrapper[7776]: I0219 03:05:41.578683 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 19 03:05:41.580738 master-0 kubenswrapper[7776]: I0219 03:05:41.580671 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-serving-ca\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.580830 master-0 kubenswrapper[7776]: I0219 03:05:41.580805 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-encryption-config\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.580895 master-0 kubenswrapper[7776]: I0219 03:05:41.580871 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrz8r\" (UniqueName: \"kubernetes.io/projected/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-kube-api-access-rrz8r\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.580945 master-0 kubenswrapper[7776]: I0219 03:05:41.580904 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-trusted-ca-bundle\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.580945 master-0 kubenswrapper[7776]: I0219 03:05:41.580922 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:41.581026 master-0 kubenswrapper[7776]: I0219 03:05:41.580961 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-policies\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.581026 master-0 kubenswrapper[7776]: I0219 03:05:41.580979 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-serving-cert\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.581026 master-0 kubenswrapper[7776]: I0219 03:05:41.580998 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-dir\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.581141 master-0 kubenswrapper[7776]: I0219 03:05:41.581041 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-client\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.581370 master-0 kubenswrapper[7776]: I0219 03:05:41.581337 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-serving-ca\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.581705 master-0 kubenswrapper[7776]: I0219 03:05:41.581673 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-trusted-ca-bundle\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.582516 master-0 kubenswrapper[7776]: I0219 03:05:41.582459 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-dir\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.582665 master-0 kubenswrapper[7776]: I0219 03:05:41.582629 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca\") pod \"controller-manager-767fdf786d-rhhcr\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:41.583235 master-0 kubenswrapper[7776]: I0219 03:05:41.583202 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-policies\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.584690 master-0 kubenswrapper[7776]: I0219 03:05:41.584634 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-encryption-config\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.584881 master-0 kubenswrapper[7776]: I0219 03:05:41.584844 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-serving-cert\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.594601 master-0 kubenswrapper[7776]: I0219 03:05:41.594549 
7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 19 03:05:41.596877 master-0 kubenswrapper[7776]: I0219 03:05:41.595209 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:41.600568 master-0 kubenswrapper[7776]: I0219 03:05:41.598610 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-client\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.603099 master-0 kubenswrapper[7776]: I0219 03:05:41.601837 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrz8r\" (UniqueName: \"kubernetes.io/projected/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-kube-api-access-rrz8r\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.604328 master-0 kubenswrapper[7776]: I0219 03:05:41.604110 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 19 03:05:41.683749 master-0 kubenswrapper[7776]: I0219 03:05:41.683699 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c6b469-2a89-439f-93a7-7cda9b524426-serving-cert\") pod \"17c6b469-2a89-439f-93a7-7cda9b524426\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " Feb 19 03:05:41.683749 master-0 kubenswrapper[7776]: I0219 03:05:41.683749 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-config\") pod \"17c6b469-2a89-439f-93a7-7cda9b524426\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " Feb 19 03:05:41.683990 master-0 kubenswrapper[7776]: I0219 03:05:41.683776 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc23f57-1547-4351-a918-c0de8db211f4-serving-cert\") pod \"9bc23f57-1547-4351-a918-c0de8db211f4\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " Feb 19 03:05:41.683990 master-0 kubenswrapper[7776]: I0219 03:05:41.683801 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-config\") pod \"9bc23f57-1547-4351-a918-c0de8db211f4\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " Feb 19 03:05:41.683990 master-0 kubenswrapper[7776]: I0219 03:05:41.683829 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7sdk\" (UniqueName: \"kubernetes.io/projected/9bc23f57-1547-4351-a918-c0de8db211f4-kube-api-access-w7sdk\") pod \"9bc23f57-1547-4351-a918-c0de8db211f4\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " Feb 19 03:05:41.683990 master-0 kubenswrapper[7776]: I0219 03:05:41.683870 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca\") pod \"9bc23f57-1547-4351-a918-c0de8db211f4\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " Feb 19 03:05:41.683990 master-0 kubenswrapper[7776]: I0219 03:05:41.683888 7776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmc7d\" (UniqueName: \"kubernetes.io/projected/17c6b469-2a89-439f-93a7-7cda9b524426-kube-api-access-gmc7d\") pod \"17c6b469-2a89-439f-93a7-7cda9b524426\" (UID: \"17c6b469-2a89-439f-93a7-7cda9b524426\") " Feb 19 03:05:41.683990 master-0 kubenswrapper[7776]: I0219 03:05:41.683912 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-proxy-ca-bundles\") pod \"9bc23f57-1547-4351-a918-c0de8db211f4\" (UID: \"9bc23f57-1547-4351-a918-c0de8db211f4\") " Feb 19 03:05:41.684206 master-0 kubenswrapper[7776]: I0219 03:05:41.684184 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:41.684338 master-0 kubenswrapper[7776]: I0219 03:05:41.684320 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-var-lock\") pod \"installer-3-master-0\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:41.684384 master-0 kubenswrapper[7776]: I0219 03:05:41.684373 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7905c351-2cbd-45b5-aa86-3b577ae11446-kube-api-access\") pod \"installer-3-master-0\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:41.684471 master-0 kubenswrapper[7776]: I0219 03:05:41.684443 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-config" (OuterVolumeSpecName: "config") pod "17c6b469-2a89-439f-93a7-7cda9b524426" (UID: "17c6b469-2a89-439f-93a7-7cda9b524426"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:41.684933 master-0 kubenswrapper[7776]: I0219 03:05:41.684863 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-config" (OuterVolumeSpecName: "config") pod "9bc23f57-1547-4351-a918-c0de8db211f4" (UID: "9bc23f57-1547-4351-a918-c0de8db211f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:41.685845 master-0 kubenswrapper[7776]: I0219 03:05:41.685784 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9bc23f57-1547-4351-a918-c0de8db211f4" (UID: "9bc23f57-1547-4351-a918-c0de8db211f4"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:41.685963 master-0 kubenswrapper[7776]: I0219 03:05:41.685854 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca" (OuterVolumeSpecName: "client-ca") pod "9bc23f57-1547-4351-a918-c0de8db211f4" (UID: "9bc23f57-1547-4351-a918-c0de8db211f4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:05:41.687562 master-0 kubenswrapper[7776]: I0219 03:05:41.687531 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc23f57-1547-4351-a918-c0de8db211f4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9bc23f57-1547-4351-a918-c0de8db211f4" (UID: "9bc23f57-1547-4351-a918-c0de8db211f4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:05:41.688940 master-0 kubenswrapper[7776]: I0219 03:05:41.688866 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c6b469-2a89-439f-93a7-7cda9b524426-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17c6b469-2a89-439f-93a7-7cda9b524426" (UID: "17c6b469-2a89-439f-93a7-7cda9b524426"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:05:41.690841 master-0 kubenswrapper[7776]: I0219 03:05:41.690814 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c6b469-2a89-439f-93a7-7cda9b524426-kube-api-access-gmc7d" (OuterVolumeSpecName: "kube-api-access-gmc7d") pod "17c6b469-2a89-439f-93a7-7cda9b524426" (UID: "17c6b469-2a89-439f-93a7-7cda9b524426"). InnerVolumeSpecName "kube-api-access-gmc7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:05:41.691100 master-0 kubenswrapper[7776]: I0219 03:05:41.691059 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc23f57-1547-4351-a918-c0de8db211f4-kube-api-access-w7sdk" (OuterVolumeSpecName: "kube-api-access-w7sdk") pod "9bc23f57-1547-4351-a918-c0de8db211f4" (UID: "9bc23f57-1547-4351-a918-c0de8db211f4"). InnerVolumeSpecName "kube-api-access-w7sdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:05:41.704133 master-0 kubenswrapper[7776]: I0219 03:05:41.704037 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:41.762686 master-0 kubenswrapper[7776]: I0219 03:05:41.755488 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9h524"] Feb 19 03:05:41.762686 master-0 kubenswrapper[7776]: I0219 03:05:41.756554 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:05:41.766320 master-0 kubenswrapper[7776]: I0219 03:05:41.766237 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h524"] Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.785893 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7905c351-2cbd-45b5-aa86-3b577ae11446-kube-api-access\") pod \"installer-3-master-0\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786313 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786438 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786469 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-var-lock\") pod \"installer-3-master-0\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786501 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-var-lock\") pod \"installer-3-master-0\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786556 7776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786573 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmc7d\" (UniqueName: \"kubernetes.io/projected/17c6b469-2a89-439f-93a7-7cda9b524426-kube-api-access-gmc7d\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786603 7776 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786614 7776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c6b469-2a89-439f-93a7-7cda9b524426-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786624 7776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786632 7776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc23f57-1547-4351-a918-c0de8db211f4-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786641 7776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bc23f57-1547-4351-a918-c0de8db211f4-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:41.788593 master-0 kubenswrapper[7776]: I0219 03:05:41.786650 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7sdk\" (UniqueName: \"kubernetes.io/projected/9bc23f57-1547-4351-a918-c0de8db211f4-kube-api-access-w7sdk\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:41.801467 master-0 kubenswrapper[7776]: I0219 03:05:41.801424 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7905c351-2cbd-45b5-aa86-3b577ae11446-kube-api-access\") pod \"installer-3-master-0\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:41.851572 master-0 kubenswrapper[7776]: I0219 03:05:41.848960 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb470156-b3c4-4ca6-80fd-30ea108aa201" path="/var/lib/kubelet/pods/bb470156-b3c4-4ca6-80fd-30ea108aa201/volumes" Feb 19 03:05:41.851572 master-0 kubenswrapper[7776]: I0219 03:05:41.849619 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae" path="/var/lib/kubelet/pods/bc9c1b5c-ec52-4a34-8aa3-3e1b1a2a60ae/volumes" Feb 19 03:05:41.887822 master-0 kubenswrapper[7776]: I0219 03:05:41.887761 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-catalog-content\") pod \"certified-operators-9h524\" (UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:05:41.887822 master-0 kubenswrapper[7776]: I0219 03:05:41.887817 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l658w\" (UniqueName: \"kubernetes.io/projected/9789abc0-e82f-4d1a-ba50-faf0075d9139-kube-api-access-l658w\") pod \"certified-operators-9h524\" (UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:05:41.888495 master-0 kubenswrapper[7776]: I0219 03:05:41.888027 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-utilities\") pod \"certified-operators-9h524\" (UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:05:41.912491 master-0 kubenswrapper[7776]: I0219 03:05:41.912414 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:41.912491 master-0 kubenswrapper[7776]: I0219 03:05:41.912483 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:41.913362 master-0 kubenswrapper[7776]: I0219 03:05:41.913331 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: I0219 03:05:41.935427 7776 patch_prober.go:28] interesting pod/apiserver-957b9456f-f5s8c container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]log ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]etcd ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]poststarthook/generic-apiserver-start-informers ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]poststarthook/max-in-flight-filter ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]poststarthook/project.openshift.io-projectcache ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]poststarthook/openshift.io-startinformers ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: livez check failed Feb 19 03:05:41.937417 master-0 kubenswrapper[7776]: I0219 03:05:41.935497 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" podUID="c569676a-51dd-418c-87a5-719c18fe4c95" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:05:41.989693 master-0 kubenswrapper[7776]: I0219 03:05:41.989639 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-catalog-content\") pod \"certified-operators-9h524\" (UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:05:41.989693 master-0 kubenswrapper[7776]: I0219 03:05:41.989697 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l658w\" (UniqueName: \"kubernetes.io/projected/9789abc0-e82f-4d1a-ba50-faf0075d9139-kube-api-access-l658w\") pod \"certified-operators-9h524\" (UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:05:41.989958 master-0 kubenswrapper[7776]: I0219 03:05:41.989867 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-utilities\") pod \"certified-operators-9h524\" 
(UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:05:41.990518 master-0 kubenswrapper[7776]: I0219 03:05:41.990492 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-utilities\") pod \"certified-operators-9h524\" (UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:05:41.990518 master-0 kubenswrapper[7776]: I0219 03:05:41.990515 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-catalog-content\") pod \"certified-operators-9h524\" (UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:05:42.010061 master-0 kubenswrapper[7776]: I0219 03:05:42.010003 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l658w\" (UniqueName: \"kubernetes.io/projected/9789abc0-e82f-4d1a-ba50-faf0075d9139-kube-api-access-l658w\") pod \"certified-operators-9h524\" (UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:05:42.094497 master-0 kubenswrapper[7776]: I0219 03:05:42.094430 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:05:42.170531 master-0 kubenswrapper[7776]: I0219 03:05:42.168462 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk"] Feb 19 03:05:42.330819 master-0 kubenswrapper[7776]: I0219 03:05:42.330568 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 19 03:05:42.396233 master-0 kubenswrapper[7776]: I0219 03:05:42.396000 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:42.406973 master-0 kubenswrapper[7776]: I0219 03:05:42.406634 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:42.490016 master-0 kubenswrapper[7776]: I0219 03:05:42.489708 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:42.524279 master-0 kubenswrapper[7776]: I0219 03:05:42.523661 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" event={"ID":"ace60ebd-e405-4fd2-96fe-7b16a9e11a07","Type":"ContainerStarted","Data":"1082261815c7e19c2e96bf70a147ae8ad719192a52e2b659efb185314dc947a8"} Feb 19 03:05:42.528137 master-0 kubenswrapper[7776]: I0219 03:05:42.528075 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" event={"ID":"7012676e-f35d-46e5-83e8-a63172dd076e","Type":"ContainerStarted","Data":"63378086041fcb0de956f1a5a160faad6c0e85b100c25eacbce569a26a79079c"} Feb 19 03:05:42.528137 master-0 kubenswrapper[7776]: I0219 03:05:42.528113 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" event={"ID":"7012676e-f35d-46e5-83e8-a63172dd076e","Type":"ContainerStarted","Data":"b653d15e9a094fb6f27e9f1174c70b153d2193bac59f07f3285a73537a189385"} Feb 19 03:05:42.535471 master-0 kubenswrapper[7776]: I0219 03:05:42.531247 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:42.535471 master-0 kubenswrapper[7776]: I0219 03:05:42.533013 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h524"] Feb 19 03:05:42.551005 master-0 kubenswrapper[7776]: I0219 03:05:42.550359 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"7905c351-2cbd-45b5-aa86-3b577ae11446","Type":"ContainerStarted","Data":"e1dd9d048901893befe823f624b02370947b83bbbb3e55e0c34398ddbe1fbe88"} Feb 19 03:05:42.551005 master-0 kubenswrapper[7776]: I0219 03:05:42.550433 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m" Feb 19 03:05:42.551005 master-0 kubenswrapper[7776]: I0219 03:05:42.550763 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-767fdf786d-rhhcr" Feb 19 03:05:42.576421 master-0 kubenswrapper[7776]: W0219 03:05:42.575994 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9789abc0_e82f_4d1a_ba50_faf0075d9139.slice/crio-1512a730540d9efdc942e8b16c196674f3900de559f11753505a3ff018b1af97 WatchSource:0}: Error finding container 1512a730540d9efdc942e8b16c196674f3900de559f11753505a3ff018b1af97: Status 404 returned error can't find the container with id 1512a730540d9efdc942e8b16c196674f3900de559f11753505a3ff018b1af97 Feb 19 03:05:42.609938 master-0 kubenswrapper[7776]: I0219 03:05:42.609842 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podStartSLOduration=2.60982043 podStartE2EDuration="2.60982043s" podCreationTimestamp="2026-02-19 03:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:05:42.549185194 +0000 UTC m=+48.888869702" watchObservedRunningTime="2026-02-19 03:05:42.60982043 +0000 UTC m=+48.949504948" Feb 19 03:05:42.637769 master-0 kubenswrapper[7776]: I0219 03:05:42.637525 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-767fdf786d-rhhcr"] Feb 19 03:05:42.640479 master-0 kubenswrapper[7776]: I0219 03:05:42.640247 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-767fdf786d-rhhcr"] Feb 19 03:05:42.674294 master-0 kubenswrapper[7776]: I0219 03:05:42.674206 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m"] Feb 19 03:05:42.676628 master-0 kubenswrapper[7776]: I0219 03:05:42.676594 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m"] Feb 19 03:05:42.752914 master-0 kubenswrapper[7776]: I0219 03:05:42.751395 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2cczk"] Feb 19 03:05:42.752914 master-0 kubenswrapper[7776]: I0219 03:05:42.752450 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:05:42.774609 master-0 kubenswrapper[7776]: I0219 03:05:42.774523 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2cczk"] Feb 19 03:05:42.804247 master-0 kubenswrapper[7776]: I0219 03:05:42.804195 7776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c6b469-2a89-439f-93a7-7cda9b524426-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:42.905503 master-0 kubenswrapper[7776]: I0219 03:05:42.905382 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhmkz\" (UniqueName: \"kubernetes.io/projected/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-kube-api-access-vhmkz\") pod \"community-operators-2cczk\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:05:42.905503 master-0 kubenswrapper[7776]: I0219 03:05:42.905453 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-utilities\") pod \"community-operators-2cczk\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:05:42.906001 master-0 kubenswrapper[7776]: I0219 03:05:42.905535 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-catalog-content\") pod \"community-operators-2cczk\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:05:42.940842 master-0 kubenswrapper[7776]: I0219 03:05:42.940782 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q"] Feb 19 03:05:43.006575 master-0 kubenswrapper[7776]: I0219 03:05:43.006536 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-catalog-content\") pod \"community-operators-2cczk\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:05:43.006791 master-0 kubenswrapper[7776]: I0219 03:05:43.006592 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhmkz\" (UniqueName: \"kubernetes.io/projected/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-kube-api-access-vhmkz\") pod \"community-operators-2cczk\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:05:43.006832 master-0 kubenswrapper[7776]: I0219 03:05:43.006782 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-utilities\") pod \"community-operators-2cczk\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:05:43.008044 master-0 kubenswrapper[7776]: I0219 03:05:43.007016 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-catalog-content\") pod 
\"community-operators-2cczk\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:05:43.008044 master-0 kubenswrapper[7776]: I0219 03:05:43.007131 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-utilities\") pod \"community-operators-2cczk\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:05:43.035565 master-0 kubenswrapper[7776]: I0219 03:05:43.034974 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhmkz\" (UniqueName: \"kubernetes.io/projected/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-kube-api-access-vhmkz\") pod \"community-operators-2cczk\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:05:43.157222 master-0 kubenswrapper[7776]: I0219 03:05:43.156712 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:05:43.559579 master-0 kubenswrapper[7776]: I0219 03:05:43.559445 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"7905c351-2cbd-45b5-aa86-3b577ae11446","Type":"ContainerStarted","Data":"3a855b547331c157da72e995ed266ebf91423c520f5d004348d6b6728172313e"} Feb 19 03:05:43.564808 master-0 kubenswrapper[7776]: I0219 03:05:43.564761 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" event={"ID":"8f7d8fc8-c313-416f-b62b-b54db9944066","Type":"ContainerStarted","Data":"26d440572b09c4ca16fc7c4bf03dae16d65fb0b4990f333d824255de7f8cc6d0"} Feb 19 03:05:43.564808 master-0 kubenswrapper[7776]: I0219 03:05:43.564812 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" event={"ID":"8f7d8fc8-c313-416f-b62b-b54db9944066","Type":"ContainerStarted","Data":"63e9da7bba52316e4ecf529d81e030bb4b7c5317fbd6fe3da25ae598ba0cf3f5"} Feb 19 03:05:43.565047 master-0 kubenswrapper[7776]: I0219 03:05:43.564828 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" event={"ID":"8f7d8fc8-c313-416f-b62b-b54db9944066","Type":"ContainerStarted","Data":"cf1ab0e9895c4d3c13750afafa4343da7c7b17306bc49f279de7d38a89a47c8d"} Feb 19 03:05:43.565573 master-0 kubenswrapper[7776]: I0219 03:05:43.565545 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:43.567710 master-0 kubenswrapper[7776]: I0219 03:05:43.567679 7776 generic.go:334] "Generic (PLEG): container finished" podID="9789abc0-e82f-4d1a-ba50-faf0075d9139" containerID="f903a9bf3809c42320f284866ae2d6c7a4383a86032dfc5b429c3f33f97b1cfa" exitCode=0 Feb 19 03:05:43.568387 master-0 kubenswrapper[7776]: I0219 03:05:43.568345 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h524" event={"ID":"9789abc0-e82f-4d1a-ba50-faf0075d9139","Type":"ContainerDied","Data":"f903a9bf3809c42320f284866ae2d6c7a4383a86032dfc5b429c3f33f97b1cfa"} Feb 19 03:05:43.568458 master-0 kubenswrapper[7776]: I0219 03:05:43.568401 7776 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-9h524" event={"ID":"9789abc0-e82f-4d1a-ba50-faf0075d9139","Type":"ContainerStarted","Data":"1512a730540d9efdc942e8b16c196674f3900de559f11753505a3ff018b1af97"} Feb 19 03:05:43.621356 master-0 kubenswrapper[7776]: I0219 03:05:43.621103 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=2.62108752 podStartE2EDuration="2.62108752s" podCreationTimestamp="2026-02-19 03:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:05:43.58006758 +0000 UTC m=+49.919752098" watchObservedRunningTime="2026-02-19 03:05:43.62108752 +0000 UTC m=+49.960772038" Feb 19 03:05:43.623220 master-0 kubenswrapper[7776]: I0219 03:05:43.623173 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" podStartSLOduration=3.623166616 podStartE2EDuration="3.623166616s" podCreationTimestamp="2026-02-19 03:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:05:43.619900538 +0000 UTC m=+49.959585066" watchObservedRunningTime="2026-02-19 03:05:43.623166616 +0000 UTC m=+49.962851134" Feb 19 03:05:43.854617 master-0 kubenswrapper[7776]: I0219 03:05:43.854555 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c6b469-2a89-439f-93a7-7cda9b524426" path="/var/lib/kubelet/pods/17c6b469-2a89-439f-93a7-7cda9b524426/volumes" Feb 19 03:05:43.855084 master-0 kubenswrapper[7776]: I0219 03:05:43.855055 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc23f57-1547-4351-a918-c0de8db211f4" path="/var/lib/kubelet/pods/9bc23f57-1547-4351-a918-c0de8db211f4/volumes" Feb 19 03:05:43.977289 master-0 kubenswrapper[7776]: I0219 03:05:43.977088 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2cczk"] Feb 19 03:05:44.579415 master-0 kubenswrapper[7776]: I0219 03:05:44.577931 7776 generic.go:334] "Generic (PLEG): container finished" podID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" containerID="3af9e816fd774706696bcf079d7a8a989e71d0d5df845d35e7bd6905eb4f3ec2" exitCode=0 Feb 19 03:05:44.579415 master-0 kubenswrapper[7776]: I0219 03:05:44.578135 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cczk" event={"ID":"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0","Type":"ContainerDied","Data":"3af9e816fd774706696bcf079d7a8a989e71d0d5df845d35e7bd6905eb4f3ec2"} Feb 19 03:05:44.579415 master-0 kubenswrapper[7776]: I0219 03:05:44.578215 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cczk" event={"ID":"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0","Type":"ContainerStarted","Data":"739328e3c80f5d92997dce3955ee26103a58c696507370455c7f3d7bb7efb16c"} Feb 19 03:05:44.621283 master-0 kubenswrapper[7776]: I0219 03:05:44.619617 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp"] Feb 19 03:05:44.621283 master-0 kubenswrapper[7776]: I0219 03:05:44.620247 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.621902 master-0 kubenswrapper[7776]: I0219 03:05:44.621873 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j"] Feb 19 03:05:44.622805 master-0 kubenswrapper[7776]: I0219 03:05:44.622789 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.628042 master-0 kubenswrapper[7776]: I0219 03:05:44.627945 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 03:05:44.629017 master-0 kubenswrapper[7776]: I0219 03:05:44.628640 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 03:05:44.629017 master-0 kubenswrapper[7776]: I0219 03:05:44.628664 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 03:05:44.629017 master-0 kubenswrapper[7776]: I0219 03:05:44.628790 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 03:05:44.630150 master-0 kubenswrapper[7776]: I0219 03:05:44.630132 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 03:05:44.630282 master-0 kubenswrapper[7776]: I0219 03:05:44.630247 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 03:05:44.631283 master-0 kubenswrapper[7776]: I0219 03:05:44.630460 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 03:05:44.633026 master-0 kubenswrapper[7776]: I0219 03:05:44.632038 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 03:05:44.633026 master-0 kubenswrapper[7776]: I0219 03:05:44.632715 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 03:05:44.637601 master-0 kubenswrapper[7776]: I0219 03:05:44.636529 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 03:05:44.642618 master-0 kubenswrapper[7776]: I0219 03:05:44.642585 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 03:05:44.642965 master-0 kubenswrapper[7776]: I0219 03:05:44.642944 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp"] Feb 19 03:05:44.646206 master-0 kubenswrapper[7776]: I0219 03:05:44.646147 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j"] Feb 19 03:05:44.731525 master-0 kubenswrapper[7776]: I0219 03:05:44.731425 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-client-ca\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 
03:05:44.731723 master-0 kubenswrapper[7776]: I0219 03:05:44.731627 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac7a5635-30b4-4076-babb-db1abd26da88-serving-cert\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.731723 master-0 kubenswrapper[7776]: I0219 03:05:44.731676 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92b9ea7b-01b1-48f8-a392-12200f55502e-serving-cert\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.731800 master-0 kubenswrapper[7776]: I0219 03:05:44.731723 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzzzx\" (UniqueName: \"kubernetes.io/projected/92b9ea7b-01b1-48f8-a392-12200f55502e-kube-api-access-qzzzx\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.731800 master-0 kubenswrapper[7776]: I0219 03:05:44.731771 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-client-ca\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.732385 master-0 kubenswrapper[7776]: I0219 03:05:44.731878 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-config\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.732385 master-0 kubenswrapper[7776]: I0219 03:05:44.732222 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-config\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.732385 master-0 kubenswrapper[7776]: I0219 03:05:44.732245 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-proxy-ca-bundles\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.732503 master-0 kubenswrapper[7776]: I0219 03:05:44.732450 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz4j7\" (UniqueName: \"kubernetes.io/projected/ac7a5635-30b4-4076-babb-db1abd26da88-kube-api-access-pz4j7\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: 
\"ac7a5635-30b4-4076-babb-db1abd26da88\") " pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.834424 master-0 kubenswrapper[7776]: I0219 03:05:44.834295 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-config\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.834424 master-0 kubenswrapper[7776]: I0219 03:05:44.834356 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-proxy-ca-bundles\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.835469 master-0 kubenswrapper[7776]: I0219 03:05:44.834796 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz4j7\" (UniqueName: \"kubernetes.io/projected/ac7a5635-30b4-4076-babb-db1abd26da88-kube-api-access-pz4j7\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.835469 master-0 kubenswrapper[7776]: I0219 03:05:44.834945 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-client-ca\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.835469 master-0 kubenswrapper[7776]: I0219 03:05:44.835018 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac7a5635-30b4-4076-babb-db1abd26da88-serving-cert\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.835469 master-0 kubenswrapper[7776]: I0219 03:05:44.835054 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92b9ea7b-01b1-48f8-a392-12200f55502e-serving-cert\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.835469 master-0 kubenswrapper[7776]: I0219 03:05:44.835096 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzzzx\" (UniqueName: \"kubernetes.io/projected/92b9ea7b-01b1-48f8-a392-12200f55502e-kube-api-access-qzzzx\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.835469 master-0 kubenswrapper[7776]: I0219 03:05:44.835122 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-client-ca\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " 
pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.835469 master-0 kubenswrapper[7776]: I0219 03:05:44.835158 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-config\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.835744 master-0 kubenswrapper[7776]: I0219 03:05:44.835622 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-config\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.836810 master-0 kubenswrapper[7776]: I0219 03:05:44.836514 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-client-ca\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.836810 master-0 kubenswrapper[7776]: I0219 03:05:44.836744 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-client-ca\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.836810 master-0 kubenswrapper[7776]: I0219 03:05:44.836765 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-config\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.837144 master-0 kubenswrapper[7776]: I0219 03:05:44.837109 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-proxy-ca-bundles\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.860284 master-0 kubenswrapper[7776]: I0219 03:05:44.855329 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac7a5635-30b4-4076-babb-db1abd26da88-serving-cert\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.860284 master-0 kubenswrapper[7776]: I0219 03:05:44.855557 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92b9ea7b-01b1-48f8-a392-12200f55502e-serving-cert\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.860284 master-0 kubenswrapper[7776]: I0219 
03:05:44.856096 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz4j7\" (UniqueName: \"kubernetes.io/projected/ac7a5635-30b4-4076-babb-db1abd26da88-kube-api-access-pz4j7\") pod \"route-controller-manager-84d87bdd5b-7p6kp\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.860284 master-0 kubenswrapper[7776]: I0219 03:05:44.858576 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzzzx\" (UniqueName: \"kubernetes.io/projected/92b9ea7b-01b1-48f8-a392-12200f55502e-kube-api-access-qzzzx\") pod \"controller-manager-7d4cccb57c-sfb9j\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:44.954591 master-0 kubenswrapper[7776]: I0219 03:05:44.954539 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:44.968249 master-0 kubenswrapper[7776]: I0219 03:05:44.968191 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:05:45.738475 master-0 kubenswrapper[7776]: I0219 03:05:45.738221 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp"] Feb 19 03:05:45.769882 master-0 kubenswrapper[7776]: W0219 03:05:45.769807 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac7a5635_30b4_4076_babb_db1abd26da88.slice/crio-9aab81f8fffe16923e36dcbe72b0019b49222f1dac9a784d86a86eaf9cc57c9d WatchSource:0}: Error finding container 9aab81f8fffe16923e36dcbe72b0019b49222f1dac9a784d86a86eaf9cc57c9d: Status 404 returned error can't find the container with id 9aab81f8fffe16923e36dcbe72b0019b49222f1dac9a784d86a86eaf9cc57c9d Feb 19 03:05:45.800537 master-0 kubenswrapper[7776]: I0219 03:05:45.798689 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j"] Feb 19 03:05:45.805444 master-0 kubenswrapper[7776]: W0219 03:05:45.805406 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92b9ea7b_01b1_48f8_a392_12200f55502e.slice/crio-da9326e28b041d7dc63f371ad8d216b0ae776b310880756403a8af27c882da99 WatchSource:0}: Error finding container da9326e28b041d7dc63f371ad8d216b0ae776b310880756403a8af27c882da99: Status 404 returned error can't find the container with id da9326e28b041d7dc63f371ad8d216b0ae776b310880756403a8af27c882da99 Feb 19 03:05:46.594830 master-0 kubenswrapper[7776]: I0219 03:05:46.594753 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" event={"ID":"92b9ea7b-01b1-48f8-a392-12200f55502e","Type":"ContainerStarted","Data":"da9326e28b041d7dc63f371ad8d216b0ae776b310880756403a8af27c882da99"} Feb 19 03:05:46.596821 master-0 kubenswrapper[7776]: I0219 03:05:46.596790 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" event={"ID":"ac7a5635-30b4-4076-babb-db1abd26da88","Type":"ContainerStarted","Data":"9aab81f8fffe16923e36dcbe72b0019b49222f1dac9a784d86a86eaf9cc57c9d"} Feb 19 03:05:46.598691 
master-0 kubenswrapper[7776]: I0219 03:05:46.598648 7776 generic.go:334] "Generic (PLEG): container finished" podID="ace60ebd-e405-4fd2-96fe-7b16a9e11a07" containerID="00383a3b1620e8684e45d2ccf8b35cd07d1cb7977fd9a3bb5991a646c38a78c8" exitCode=0 Feb 19 03:05:46.598691 master-0 kubenswrapper[7776]: I0219 03:05:46.598684 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" event={"ID":"ace60ebd-e405-4fd2-96fe-7b16a9e11a07","Type":"ContainerDied","Data":"00383a3b1620e8684e45d2ccf8b35cd07d1cb7977fd9a3bb5991a646c38a78c8"} Feb 19 03:05:46.920405 master-0 kubenswrapper[7776]: I0219 03:05:46.920325 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:46.928465 master-0 kubenswrapper[7776]: I0219 03:05:46.928426 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:05:47.605347 master-0 kubenswrapper[7776]: I0219 03:05:47.605186 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" event={"ID":"ace60ebd-e405-4fd2-96fe-7b16a9e11a07","Type":"ContainerStarted","Data":"1e4ac9bf8043aeb416e03b6f82f66af29a71f6bcdb810c2bb330cdc1baa4ea2d"} Feb 19 03:05:48.410233 master-0 kubenswrapper[7776]: I0219 03:05:48.409057 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" podStartSLOduration=4.204125815 podStartE2EDuration="7.409039146s" podCreationTimestamp="2026-02-19 03:05:41 +0000 UTC" firstStartedPulling="2026-02-19 03:05:42.199726342 +0000 UTC m=+48.539410860" lastFinishedPulling="2026-02-19 03:05:45.404639673 +0000 UTC m=+51.744324191" observedRunningTime="2026-02-19 03:05:48.408695397 +0000 UTC m=+54.748379925" watchObservedRunningTime="2026-02-19 03:05:48.409039146 +0000 UTC m=+54.748723674" Feb 19 03:05:48.997433 master-0 kubenswrapper[7776]: I0219 03:05:48.997383 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 19 03:05:48.997759 master-0 kubenswrapper[7776]: I0219 03:05:48.997598 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="7905c351-2cbd-45b5-aa86-3b577ae11446" containerName="installer" containerID="cri-o://3a855b547331c157da72e995ed266ebf91423c520f5d004348d6b6728172313e" gracePeriod=30 Feb 19 03:05:49.621343 master-0 kubenswrapper[7776]: I0219 03:05:49.621183 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" event={"ID":"ac7a5635-30b4-4076-babb-db1abd26da88","Type":"ContainerStarted","Data":"28e9a6d187a12869ec261835ca18a693541d1e5178c38a94171dac51f3ea3706"} Feb 19 03:05:49.622071 master-0 kubenswrapper[7776]: I0219 03:05:49.622032 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:49.626427 master-0 kubenswrapper[7776]: I0219 03:05:49.626387 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_7905c351-2cbd-45b5-aa86-3b577ae11446/installer/0.log" Feb 19 03:05:49.626535 master-0 kubenswrapper[7776]: I0219 03:05:49.626452 7776 generic.go:334] "Generic (PLEG): container finished" podID="7905c351-2cbd-45b5-aa86-3b577ae11446" 
containerID="3a855b547331c157da72e995ed266ebf91423c520f5d004348d6b6728172313e" exitCode=1 Feb 19 03:05:49.626535 master-0 kubenswrapper[7776]: I0219 03:05:49.626485 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"7905c351-2cbd-45b5-aa86-3b577ae11446","Type":"ContainerDied","Data":"3a855b547331c157da72e995ed266ebf91423c520f5d004348d6b6728172313e"} Feb 19 03:05:49.628580 master-0 kubenswrapper[7776]: I0219 03:05:49.628547 7776 patch_prober.go:28] interesting pod/route-controller-manager-84d87bdd5b-7p6kp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body= Feb 19 03:05:49.628632 master-0 kubenswrapper[7776]: I0219 03:05:49.628587 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" podUID="ac7a5635-30b4-4076-babb-db1abd26da88" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Feb 19 03:05:49.649863 master-0 kubenswrapper[7776]: I0219 03:05:49.642027 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" podStartSLOduration=6.026504301 podStartE2EDuration="9.642010223s" podCreationTimestamp="2026-02-19 03:05:40 +0000 UTC" firstStartedPulling="2026-02-19 03:05:45.772579151 +0000 UTC m=+52.112263679" lastFinishedPulling="2026-02-19 03:05:49.388085083 +0000 UTC m=+55.727769601" observedRunningTime="2026-02-19 03:05:49.641623723 +0000 UTC m=+55.981308261" watchObservedRunningTime="2026-02-19 03:05:49.642010223 +0000 UTC m=+55.981694741" Feb 19 03:05:49.766283 master-0 kubenswrapper[7776]: I0219 03:05:49.766007 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 19 03:05:49.768307 master-0 kubenswrapper[7776]: I0219 03:05:49.767530 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:05:49.771908 master-0 kubenswrapper[7776]: I0219 03:05:49.771665 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 03:05:49.780639 master-0 kubenswrapper[7776]: I0219 03:05:49.776834 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 19 03:05:49.912364 master-0 kubenswrapper[7776]: I0219 03:05:49.912309 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:05:49.913069 master-0 kubenswrapper[7776]: I0219 03:05:49.912412 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-var-lock\") pod \"installer-1-master-0\" (UID: \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:05:49.913069 master-0 kubenswrapper[7776]: I0219 03:05:49.912465 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:05:50.014151 master-0 kubenswrapper[7776]: I0219 03:05:50.014085 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:05:50.014389 master-0 kubenswrapper[7776]: I0219 03:05:50.014172 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-var-lock\") pod \"installer-1-master-0\" (UID: \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:05:50.014389 master-0 kubenswrapper[7776]: I0219 03:05:50.014205 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:05:50.015446 master-0 kubenswrapper[7776]: I0219 03:05:50.015140 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:05:50.015446 master-0 kubenswrapper[7776]: I0219 03:05:50.015199 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-var-lock\") pod \"installer-1-master-0\" (UID: 
\"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:05:50.039369 master-0 kubenswrapper[7776]: I0219 03:05:50.038457 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:05:50.104837 master-0 kubenswrapper[7776]: I0219 03:05:50.104777 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:05:50.635497 master-0 kubenswrapper[7776]: I0219 03:05:50.635449 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:05:50.776594 master-0 kubenswrapper[7776]: I0219 03:05:50.776560 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_7905c351-2cbd-45b5-aa86-3b577ae11446/installer/0.log" Feb 19 03:05:50.776753 master-0 kubenswrapper[7776]: I0219 03:05:50.776627 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:50.926060 master-0 kubenswrapper[7776]: I0219 03:05:50.925079 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7905c351-2cbd-45b5-aa86-3b577ae11446-kube-api-access\") pod \"7905c351-2cbd-45b5-aa86-3b577ae11446\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " Feb 19 03:05:50.926060 master-0 kubenswrapper[7776]: I0219 03:05:50.925134 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-var-lock\") pod \"7905c351-2cbd-45b5-aa86-3b577ae11446\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " Feb 19 03:05:50.926060 master-0 kubenswrapper[7776]: I0219 03:05:50.925160 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-kubelet-dir\") pod \"7905c351-2cbd-45b5-aa86-3b577ae11446\" (UID: \"7905c351-2cbd-45b5-aa86-3b577ae11446\") " Feb 19 03:05:50.926060 master-0 kubenswrapper[7776]: I0219 03:05:50.925437 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7905c351-2cbd-45b5-aa86-3b577ae11446" (UID: "7905c351-2cbd-45b5-aa86-3b577ae11446"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:05:50.926060 master-0 kubenswrapper[7776]: I0219 03:05:50.925880 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-var-lock" (OuterVolumeSpecName: "var-lock") pod "7905c351-2cbd-45b5-aa86-3b577ae11446" (UID: "7905c351-2cbd-45b5-aa86-3b577ae11446"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:05:50.928556 master-0 kubenswrapper[7776]: I0219 03:05:50.928519 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7905c351-2cbd-45b5-aa86-3b577ae11446-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7905c351-2cbd-45b5-aa86-3b577ae11446" (UID: "7905c351-2cbd-45b5-aa86-3b577ae11446"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:05:50.936504 master-0 kubenswrapper[7776]: I0219 03:05:50.934831 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-clndn" Feb 19 03:05:50.980861 master-0 kubenswrapper[7776]: I0219 03:05:50.980498 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:05:50.999114 master-0 kubenswrapper[7776]: I0219 03:05:50.999067 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 19 03:05:50.999386 master-0 kubenswrapper[7776]: E0219 03:05:50.999361 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7905c351-2cbd-45b5-aa86-3b577ae11446" containerName="installer" Feb 19 03:05:50.999386 master-0 kubenswrapper[7776]: I0219 03:05:50.999382 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7905c351-2cbd-45b5-aa86-3b577ae11446" containerName="installer" Feb 19 03:05:50.999576 master-0 kubenswrapper[7776]: I0219 03:05:50.999555 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7905c351-2cbd-45b5-aa86-3b577ae11446" containerName="installer" Feb 19 03:05:51.000640 master-0 kubenswrapper[7776]: I0219 03:05:51.000316 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:05:51.011753 master-0 kubenswrapper[7776]: I0219 03:05:51.011679 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 19 03:05:51.026365 master-0 kubenswrapper[7776]: I0219 03:05:51.026321 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7905c351-2cbd-45b5-aa86-3b577ae11446-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:51.026365 master-0 kubenswrapper[7776]: I0219 03:05:51.026350 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:51.026365 master-0 kubenswrapper[7776]: I0219 03:05:51.026360 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7905c351-2cbd-45b5-aa86-3b577ae11446-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:05:51.127868 master-0 kubenswrapper[7776]: I0219 03:05:51.127823 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:05:51.128397 master-0 kubenswrapper[7776]: I0219 03:05:51.128106 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-var-lock\") pod \"installer-4-master-0\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:05:51.129949 master-0 kubenswrapper[7776]: I0219 03:05:51.129872 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b05aeb-22a8-4008-a582-072f63cc46bf-kube-api-access\") pod \"installer-4-master-0\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:05:51.231211 master-0 kubenswrapper[7776]: I0219 03:05:51.231085 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:05:51.231211 master-0 kubenswrapper[7776]: I0219 03:05:51.231147 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-var-lock\") pod \"installer-4-master-0\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:05:51.231464 master-0 kubenswrapper[7776]: I0219 03:05:51.231284 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:05:51.231464 master-0 
kubenswrapper[7776]: I0219 03:05:51.231397 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b05aeb-22a8-4008-a582-072f63cc46bf-kube-api-access\") pod \"installer-4-master-0\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:05:51.231464 master-0 kubenswrapper[7776]: I0219 03:05:51.231455 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-var-lock\") pod \"installer-4-master-0\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:05:51.249233 master-0 kubenswrapper[7776]: I0219 03:05:51.249165 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b05aeb-22a8-4008-a582-072f63cc46bf-kube-api-access\") pod \"installer-4-master-0\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:05:51.325884 master-0 kubenswrapper[7776]: I0219 03:05:51.325832 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:05:51.643785 master-0 kubenswrapper[7776]: I0219 03:05:51.643751 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_7905c351-2cbd-45b5-aa86-3b577ae11446/installer/0.log" Feb 19 03:05:51.644246 master-0 kubenswrapper[7776]: I0219 03:05:51.643870 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"7905c351-2cbd-45b5-aa86-3b577ae11446","Type":"ContainerDied","Data":"e1dd9d048901893befe823f624b02370947b83bbbb3e55e0c34398ddbe1fbe88"} Feb 19 03:05:51.644246 master-0 kubenswrapper[7776]: I0219 03:05:51.643927 7776 scope.go:117] "RemoveContainer" containerID="3a855b547331c157da72e995ed266ebf91423c520f5d004348d6b6728172313e" Feb 19 03:05:51.644246 master-0 kubenswrapper[7776]: I0219 03:05:51.643935 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 19 03:05:51.674210 master-0 kubenswrapper[7776]: I0219 03:05:51.674163 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 19 03:05:51.676157 master-0 kubenswrapper[7776]: I0219 03:05:51.676116 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 19 03:05:51.705241 master-0 kubenswrapper[7776]: I0219 03:05:51.705197 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:51.705241 master-0 kubenswrapper[7776]: I0219 03:05:51.705310 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:51.714870 master-0 kubenswrapper[7776]: I0219 03:05:51.714829 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:51.849461 master-0 kubenswrapper[7776]: I0219 03:05:51.849420 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7905c351-2cbd-45b5-aa86-3b577ae11446" path="/var/lib/kubelet/pods/7905c351-2cbd-45b5-aa86-3b577ae11446/volumes" Feb 19 03:05:52.369733 master-0 kubenswrapper[7776]: I0219 03:05:52.363705 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 19 03:05:52.369733 master-0 kubenswrapper[7776]: I0219 03:05:52.363957 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="e66ac991-af58-490b-8909-e518d301e1b8" containerName="installer" containerID="cri-o://3b56052892bbdb6a0a707a252be41dcb545d08cbba6bf07e4772ca254f1c641d" gracePeriod=30 Feb 19 03:05:52.494790 master-0 kubenswrapper[7776]: I0219 03:05:52.494667 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:05:52.652363 master-0 kubenswrapper[7776]: I0219 03:05:52.652303 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:05:55.299211 master-0 kubenswrapper[7776]: I0219 03:05:55.299160 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 19 03:05:55.299756 master-0 kubenswrapper[7776]: I0219 03:05:55.299734 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:05:55.333451 master-0 kubenswrapper[7776]: I0219 03:05:55.333388 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:05:55.333661 master-0 kubenswrapper[7776]: I0219 03:05:55.333467 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-var-lock\") pod \"installer-2-master-0\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:05:55.333661 master-0 kubenswrapper[7776]: I0219 03:05:55.333501 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:05:55.434737 master-0 kubenswrapper[7776]: I0219 03:05:55.434683 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-var-lock\") pod \"installer-2-master-0\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:05:55.434950 master-0 kubenswrapper[7776]: I0219 03:05:55.434748 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:05:55.434950 master-0 kubenswrapper[7776]: I0219 03:05:55.434868 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:05:55.434950 master-0 kubenswrapper[7776]: I0219 03:05:55.434885 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-var-lock\") pod \"installer-2-master-0\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:05:55.435069 master-0 kubenswrapper[7776]: I0219 03:05:55.435036 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:05:55.649450 master-0 kubenswrapper[7776]: I0219 03:05:55.649362 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 19 
03:05:55.701745 master-0 kubenswrapper[7776]: I0219 03:05:55.698768 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:05:55.710291 master-0 kubenswrapper[7776]: I0219 03:05:55.706156 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5"] Feb 19 03:05:55.710291 master-0 kubenswrapper[7776]: I0219 03:05:55.706919 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:05:55.716665 master-0 kubenswrapper[7776]: I0219 03:05:55.713787 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 03:05:55.716665 master-0 kubenswrapper[7776]: I0219 03:05:55.714085 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 03:05:55.716665 master-0 kubenswrapper[7776]: I0219 03:05:55.714242 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 03:05:55.734292 master-0 kubenswrapper[7776]: I0219 03:05:55.733962 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5"] Feb 19 03:05:55.846205 master-0 kubenswrapper[7776]: I0219 03:05:55.843721 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0664d88f-f697-4182-93cd-f208ff6f3ac2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-xbcf5\" (UID: \"0664d88f-f697-4182-93cd-f208ff6f3ac2\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:05:55.846205 master-0 kubenswrapper[7776]: I0219 03:05:55.844033 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99z6r\" (UniqueName: \"kubernetes.io/projected/0664d88f-f697-4182-93cd-f208ff6f3ac2-kube-api-access-99z6r\") pod \"control-plane-machine-set-operator-686847ff5f-xbcf5\" (UID: \"0664d88f-f697-4182-93cd-f208ff6f3ac2\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:05:55.915141 master-0 kubenswrapper[7776]: I0219 03:05:55.914955 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:05:55.947370 master-0 kubenswrapper[7776]: I0219 03:05:55.944727 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99z6r\" (UniqueName: \"kubernetes.io/projected/0664d88f-f697-4182-93cd-f208ff6f3ac2-kube-api-access-99z6r\") pod \"control-plane-machine-set-operator-686847ff5f-xbcf5\" (UID: \"0664d88f-f697-4182-93cd-f208ff6f3ac2\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:05:55.947370 master-0 kubenswrapper[7776]: I0219 03:05:55.945283 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0664d88f-f697-4182-93cd-f208ff6f3ac2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-xbcf5\" (UID: \"0664d88f-f697-4182-93cd-f208ff6f3ac2\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:05:55.949662 master-0 kubenswrapper[7776]: I0219 03:05:55.949637 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0664d88f-f697-4182-93cd-f208ff6f3ac2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-xbcf5\" (UID: \"0664d88f-f697-4182-93cd-f208ff6f3ac2\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:05:55.969210 master-0 kubenswrapper[7776]: I0219 03:05:55.969157 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99z6r\" (UniqueName: \"kubernetes.io/projected/0664d88f-f697-4182-93cd-f208ff6f3ac2-kube-api-access-99z6r\") pod \"control-plane-machine-set-operator-686847ff5f-xbcf5\" (UID: \"0664d88f-f697-4182-93cd-f208ff6f3ac2\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:05:56.106146 master-0 kubenswrapper[7776]: I0219 03:05:56.106080 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:05:56.597463 master-0 kubenswrapper[7776]: I0219 03:05:56.597358 7776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 19 03:05:56.598020 master-0 kubenswrapper[7776]: I0219 03:05:56.597579 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" containerID="cri-o://04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908" gracePeriod=30 Feb 19 03:05:56.598020 master-0 kubenswrapper[7776]: I0219 03:05:56.597631 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" containerID="cri-o://14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65" gracePeriod=30 Feb 19 03:05:56.599216 master-0 kubenswrapper[7776]: I0219 03:05:56.599174 7776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 19 03:05:56.599469 master-0 kubenswrapper[7776]: E0219 03:05:56.599437 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 19 03:05:56.599469 master-0 kubenswrapper[7776]: I0219 03:05:56.599453 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 19 03:05:56.599469 master-0 kubenswrapper[7776]: E0219 03:05:56.599467 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 19 03:05:56.599581 master-0 kubenswrapper[7776]: I0219 03:05:56.599474 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 19 03:05:56.599581 master-0 kubenswrapper[7776]: I0219 03:05:56.599573 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 19 03:05:56.599639 master-0 kubenswrapper[7776]: I0219 03:05:56.599591 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 19 03:05:56.602281 master-0 kubenswrapper[7776]: I0219 03:05:56.601106 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.676143 master-0 kubenswrapper[7776]: I0219 03:05:56.675726 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_aba1213d-8a7d-4b99-857f-b66578cc2bec/installer/0.log" Feb 19 03:05:56.676143 master-0 kubenswrapper[7776]: I0219 03:05:56.675782 7776 generic.go:334] "Generic (PLEG): container finished" podID="aba1213d-8a7d-4b99-857f-b66578cc2bec" containerID="107af6c10e19bdb483e86e7f412dc740d6234ce2a56a37c6f92ca7b36c798080" exitCode=1 Feb 19 03:05:56.676143 master-0 kubenswrapper[7776]: I0219 03:05:56.675822 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"aba1213d-8a7d-4b99-857f-b66578cc2bec","Type":"ContainerDied","Data":"107af6c10e19bdb483e86e7f412dc740d6234ce2a56a37c6f92ca7b36c798080"} Feb 19 03:05:56.755485 master-0 kubenswrapper[7776]: I0219 03:05:56.755429 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.755810 master-0 kubenswrapper[7776]: I0219 03:05:56.755515 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.755810 master-0 kubenswrapper[7776]: I0219 03:05:56.755535 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.755810 master-0 kubenswrapper[7776]: I0219 03:05:56.755599 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.755810 master-0 kubenswrapper[7776]: I0219 03:05:56.755645 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.755810 master-0 kubenswrapper[7776]: I0219 03:05:56.755670 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.857157 master-0 kubenswrapper[7776]: I0219 03:05:56.857040 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.857157 
master-0 kubenswrapper[7776]: I0219 03:05:56.857102 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.857157 master-0 kubenswrapper[7776]: I0219 03:05:56.857131 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.857157 master-0 kubenswrapper[7776]: I0219 03:05:56.857156 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.857462 master-0 kubenswrapper[7776]: I0219 03:05:56.857198 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.857462 master-0 kubenswrapper[7776]: I0219 03:05:56.857224 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.857462 master-0 kubenswrapper[7776]: I0219 03:05:56.857323 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.857462 master-0 kubenswrapper[7776]: I0219 03:05:56.857339 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.857462 master-0 kubenswrapper[7776]: I0219 03:05:56.857427 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.857607 master-0 kubenswrapper[7776]: I0219 03:05:56.857506 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:05:56.857607 master-0 kubenswrapper[7776]: I0219 03:05:56.857562 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" 
Feb 19 03:05:56.857607 master-0 kubenswrapper[7776]: I0219 03:05:56.857586 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:06:04.413724 master-0 kubenswrapper[7776]: I0219 03:06:04.413668 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_aba1213d-8a7d-4b99-857f-b66578cc2bec/installer/0.log" Feb 19 03:06:04.413724 master-0 kubenswrapper[7776]: I0219 03:06:04.413731 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:06:04.486961 master-0 kubenswrapper[7776]: I0219 03:06:04.482238 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-var-lock\") pod \"aba1213d-8a7d-4b99-857f-b66578cc2bec\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " Feb 19 03:06:04.486961 master-0 kubenswrapper[7776]: I0219 03:06:04.482352 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba1213d-8a7d-4b99-857f-b66578cc2bec-kube-api-access\") pod \"aba1213d-8a7d-4b99-857f-b66578cc2bec\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " Feb 19 03:06:04.486961 master-0 kubenswrapper[7776]: I0219 03:06:04.482390 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-kubelet-dir\") pod \"aba1213d-8a7d-4b99-857f-b66578cc2bec\" (UID: \"aba1213d-8a7d-4b99-857f-b66578cc2bec\") " Feb 19 03:06:04.486961 master-0 kubenswrapper[7776]: I0219 03:06:04.482580 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aba1213d-8a7d-4b99-857f-b66578cc2bec" (UID: "aba1213d-8a7d-4b99-857f-b66578cc2bec"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:06:04.486961 master-0 kubenswrapper[7776]: I0219 03:06:04.482614 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-var-lock" (OuterVolumeSpecName: "var-lock") pod "aba1213d-8a7d-4b99-857f-b66578cc2bec" (UID: "aba1213d-8a7d-4b99-857f-b66578cc2bec"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:06:04.486961 master-0 kubenswrapper[7776]: I0219 03:06:04.486109 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba1213d-8a7d-4b99-857f-b66578cc2bec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aba1213d-8a7d-4b99-857f-b66578cc2bec" (UID: "aba1213d-8a7d-4b99-857f-b66578cc2bec"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:06:04.584353 master-0 kubenswrapper[7776]: I0219 03:06:04.584312 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba1213d-8a7d-4b99-857f-b66578cc2bec-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:06:04.584353 master-0 kubenswrapper[7776]: I0219 03:06:04.584349 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:06:04.584895 master-0 kubenswrapper[7776]: I0219 03:06:04.584361 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aba1213d-8a7d-4b99-857f-b66578cc2bec-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:06:04.730202 master-0 kubenswrapper[7776]: I0219 03:06:04.730163 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cczk" event={"ID":"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0","Type":"ContainerStarted","Data":"e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570"} Feb 19 03:06:04.733317 master-0 kubenswrapper[7776]: I0219 03:06:04.733275 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h524" event={"ID":"9789abc0-e82f-4d1a-ba50-faf0075d9139","Type":"ContainerStarted","Data":"6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f"} Feb 19 03:06:04.736078 master-0 kubenswrapper[7776]: I0219 03:06:04.736054 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" event={"ID":"92b9ea7b-01b1-48f8-a392-12200f55502e","Type":"ContainerStarted","Data":"476fc086e4c133ead58fc958b5e8c61b6a7e9e1ccc96dcde9038878f8f7dbc2a"} Feb 19 03:06:04.736430 master-0 kubenswrapper[7776]: I0219 03:06:04.736410 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:06:04.737651 master-0 kubenswrapper[7776]: I0219 03:06:04.737619 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"aba1213d-8a7d-4b99-857f-b66578cc2bec","Type":"ContainerDied","Data":"1c0ee9ea7613e543246e347d2032c6c3b7f0ce179d5a2a853d69dd4c46853647"} Feb 19 03:06:04.737651 master-0 kubenswrapper[7776]: I0219 03:06:04.737650 7776 scope.go:117] "RemoveContainer" containerID="107af6c10e19bdb483e86e7f412dc740d6234ce2a56a37c6f92ca7b36c798080" Feb 19 03:06:04.737768 master-0 kubenswrapper[7776]: I0219 03:06:04.737734 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 19 03:06:04.750535 master-0 kubenswrapper[7776]: I0219 03:06:04.750499 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:06:05.744894 master-0 kubenswrapper[7776]: I0219 03:06:05.744851 7776 generic.go:334] "Generic (PLEG): container finished" podID="76050135-a8a1-4968-9a00-2d251c17f8b8" containerID="9e1f925dcef405e11a0cd39d3f095f51ec32450e7b276a65d93d2396c9594fa0" exitCode=0 Feb 19 03:06:05.745625 master-0 kubenswrapper[7776]: I0219 03:06:05.744922 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwt4t" event={"ID":"76050135-a8a1-4968-9a00-2d251c17f8b8","Type":"ContainerDied","Data":"9e1f925dcef405e11a0cd39d3f095f51ec32450e7b276a65d93d2396c9594fa0"} Feb 19 03:06:05.747164 master-0 kubenswrapper[7776]: I0219 03:06:05.747136 7776 generic.go:334] "Generic (PLEG): container finished" podID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" containerID="e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570" exitCode=0 Feb 19 03:06:05.747226 master-0 kubenswrapper[7776]: I0219 03:06:05.747188 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cczk" event={"ID":"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0","Type":"ContainerDied","Data":"e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570"} Feb 19 03:06:05.749506 master-0 kubenswrapper[7776]: I0219 03:06:05.749439 7776 generic.go:334] "Generic (PLEG): container finished" podID="9789abc0-e82f-4d1a-ba50-faf0075d9139" containerID="6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f" exitCode=0 Feb 19 03:06:05.749506 master-0 kubenswrapper[7776]: I0219 03:06:05.749485 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h524" event={"ID":"9789abc0-e82f-4d1a-ba50-faf0075d9139","Type":"ContainerDied","Data":"6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f"} Feb 19 03:06:05.752205 master-0 kubenswrapper[7776]: I0219 03:06:05.751760 7776 generic.go:334] "Generic (PLEG): container finished" podID="543aef8d-960a-42c9-b1fd-954e2d024002" containerID="02582b4f63c227af0cc551dd11287a8d643da3ea742ef92c54cd33d3e54ef1b5" exitCode=0 Feb 19 03:06:05.752205 master-0 kubenswrapper[7776]: I0219 03:06:05.751819 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spsn7" event={"ID":"543aef8d-960a-42c9-b1fd-954e2d024002","Type":"ContainerDied","Data":"02582b4f63c227af0cc551dd11287a8d643da3ea742ef92c54cd33d3e54ef1b5"} Feb 19 03:06:06.764230 master-0 kubenswrapper[7776]: I0219 03:06:06.761499 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cczk" event={"ID":"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0","Type":"ContainerStarted","Data":"f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb"} Feb 19 03:06:06.765756 master-0 kubenswrapper[7776]: I0219 03:06:06.765137 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h524" event={"ID":"9789abc0-e82f-4d1a-ba50-faf0075d9139","Type":"ContainerStarted","Data":"8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87"} Feb 19 03:06:06.771176 master-0 kubenswrapper[7776]: I0219 03:06:06.771109 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spsn7" 
event={"ID":"543aef8d-960a-42c9-b1fd-954e2d024002","Type":"ContainerStarted","Data":"d78e62e78b262908533db4b07e0adc537376985d3006aaed9e0ce93af55f76bd"} Feb 19 03:06:06.775964 master-0 kubenswrapper[7776]: I0219 03:06:06.775921 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwt4t" event={"ID":"76050135-a8a1-4968-9a00-2d251c17f8b8","Type":"ContainerStarted","Data":"9dbf591511f3015176a71f524f75ea44459f2fd46e24cc64eacbcb84c285b728"} Feb 19 03:06:09.643341 master-0 kubenswrapper[7776]: E0219 03:06:09.643276 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 19 03:06:09.643819 master-0 kubenswrapper[7776]: I0219 03:06:09.643760 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 19 03:06:09.791294 master-0 kubenswrapper[7776]: I0219 03:06:09.791220 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"b742134104d8080127ff1f1b424a2336927e53805871d3b253d9ad24fd88958f"} Feb 19 03:06:09.983489 master-0 kubenswrapper[7776]: I0219 03:06:09.983421 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:06:09.983489 master-0 kubenswrapper[7776]: I0219 03:06:09.983492 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:06:10.024399 master-0 kubenswrapper[7776]: I0219 03:06:10.024315 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:06:10.148189 master-0 kubenswrapper[7776]: E0219 03:06:10.148131 7776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podaba1213d_8a7d_4b99_857f_b66578cc2bec.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podaba1213d_8a7d_4b99_857f_b66578cc2bec.slice/crio-1c0ee9ea7613e543246e347d2032c6c3b7f0ce179d5a2a853d69dd4c46853647\": RecentStats: unable to find data in memory cache]" Feb 19 03:06:10.661860 master-0 kubenswrapper[7776]: I0219 03:06:10.661773 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:06:10.662638 master-0 kubenswrapper[7776]: I0219 03:06:10.662055 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:06:10.805654 master-0 kubenswrapper[7776]: I0219 03:06:10.805505 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"0b7862f5c14abc35c6d3864be1a69aaa8c3ca56dfc67a222771c4ef72c815739"} Feb 19 03:06:10.807326 master-0 kubenswrapper[7776]: I0219 03:06:10.807265 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_e66ac991-af58-490b-8909-e518d301e1b8/installer/0.log" Feb 19 03:06:10.807326 master-0 kubenswrapper[7776]: I0219 03:06:10.807320 7776 generic.go:334] "Generic (PLEG): container finished" podID="e66ac991-af58-490b-8909-e518d301e1b8" 
containerID="3b56052892bbdb6a0a707a252be41dcb545d08cbba6bf07e4772ca254f1c641d" exitCode=1 Feb 19 03:06:10.807530 master-0 kubenswrapper[7776]: I0219 03:06:10.807442 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"e66ac991-af58-490b-8909-e518d301e1b8","Type":"ContainerDied","Data":"3b56052892bbdb6a0a707a252be41dcb545d08cbba6bf07e4772ca254f1c641d"} Feb 19 03:06:11.306351 master-0 kubenswrapper[7776]: I0219 03:06:11.306314 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_e66ac991-af58-490b-8909-e518d301e1b8/installer/0.log" Feb 19 03:06:11.306680 master-0 kubenswrapper[7776]: I0219 03:06:11.306379 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:06:11.480766 master-0 kubenswrapper[7776]: I0219 03:06:11.480682 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-kubelet-dir\") pod \"e66ac991-af58-490b-8909-e518d301e1b8\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " Feb 19 03:06:11.481019 master-0 kubenswrapper[7776]: I0219 03:06:11.480802 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e66ac991-af58-490b-8909-e518d301e1b8-kube-api-access\") pod \"e66ac991-af58-490b-8909-e518d301e1b8\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " Feb 19 03:06:11.481019 master-0 kubenswrapper[7776]: I0219 03:06:11.480859 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e66ac991-af58-490b-8909-e518d301e1b8" (UID: "e66ac991-af58-490b-8909-e518d301e1b8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:06:11.481019 master-0 kubenswrapper[7776]: I0219 03:06:11.480873 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-var-lock\") pod \"e66ac991-af58-490b-8909-e518d301e1b8\" (UID: \"e66ac991-af58-490b-8909-e518d301e1b8\") " Feb 19 03:06:11.481019 master-0 kubenswrapper[7776]: I0219 03:06:11.480910 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-var-lock" (OuterVolumeSpecName: "var-lock") pod "e66ac991-af58-490b-8909-e518d301e1b8" (UID: "e66ac991-af58-490b-8909-e518d301e1b8"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:06:11.481387 master-0 kubenswrapper[7776]: I0219 03:06:11.481338 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:06:11.481453 master-0 kubenswrapper[7776]: I0219 03:06:11.481392 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e66ac991-af58-490b-8909-e518d301e1b8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:06:11.487360 master-0 kubenswrapper[7776]: I0219 03:06:11.485350 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66ac991-af58-490b-8909-e518d301e1b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e66ac991-af58-490b-8909-e518d301e1b8" (UID: "e66ac991-af58-490b-8909-e518d301e1b8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:06:11.583333 master-0 kubenswrapper[7776]: I0219 03:06:11.583123 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e66ac991-af58-490b-8909-e518d301e1b8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:06:11.708896 master-0 kubenswrapper[7776]: I0219 03:06:11.708819 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spsn7" podUID="543aef8d-960a-42c9-b1fd-954e2d024002" containerName="registry-server" probeResult="failure" output=< Feb 19 03:06:11.708896 master-0 kubenswrapper[7776]: timeout: failed to connect service ":50051" within 1s Feb 19 03:06:11.708896 master-0 kubenswrapper[7776]: > Feb 19 03:06:11.814594 master-0 kubenswrapper[7776]: I0219 03:06:11.814510 7776 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6" exitCode=1 Feb 19 03:06:11.814594 master-0 kubenswrapper[7776]: I0219 03:06:11.814596 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6"} Feb 19 03:06:11.815209 master-0 kubenswrapper[7776]: I0219 03:06:11.815171 7776 scope.go:117] "RemoveContainer" containerID="f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6" Feb 19 03:06:11.816059 master-0 kubenswrapper[7776]: I0219 03:06:11.816026 7776 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="0b7862f5c14abc35c6d3864be1a69aaa8c3ca56dfc67a222771c4ef72c815739" exitCode=0 Feb 19 03:06:11.816117 master-0 kubenswrapper[7776]: I0219 03:06:11.816073 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"0b7862f5c14abc35c6d3864be1a69aaa8c3ca56dfc67a222771c4ef72c815739"} Feb 19 03:06:11.817763 master-0 kubenswrapper[7776]: I0219 03:06:11.817714 7776 generic.go:334] "Generic (PLEG): container finished" podID="2561caa0-5f79-496e-8fa7-a9692dca20be" containerID="32be5e8b93330dd04d423a1444137191a10ffbf90c7167cd6baa0a0571479517" exitCode=0 Feb 19 03:06:11.817841 master-0 kubenswrapper[7776]: I0219 03:06:11.817805 7776 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"2561caa0-5f79-496e-8fa7-a9692dca20be","Type":"ContainerDied","Data":"32be5e8b93330dd04d423a1444137191a10ffbf90c7167cd6baa0a0571479517"} Feb 19 03:06:11.819656 master-0 kubenswrapper[7776]: I0219 03:06:11.819615 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_e66ac991-af58-490b-8909-e518d301e1b8/installer/0.log" Feb 19 03:06:11.819729 master-0 kubenswrapper[7776]: I0219 03:06:11.819683 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"e66ac991-af58-490b-8909-e518d301e1b8","Type":"ContainerDied","Data":"efc170236b8ec5f3ee868c2762adf6da88d245375479a3b8c7878aa313bac925"} Feb 19 03:06:11.819729 master-0 kubenswrapper[7776]: I0219 03:06:11.819727 7776 scope.go:117] "RemoveContainer" containerID="3b56052892bbdb6a0a707a252be41dcb545d08cbba6bf07e4772ca254f1c641d" Feb 19 03:06:11.819792 master-0 kubenswrapper[7776]: I0219 03:06:11.819753 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 19 03:06:12.095823 master-0 kubenswrapper[7776]: I0219 03:06:12.095684 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:06:12.095823 master-0 kubenswrapper[7776]: I0219 03:06:12.095764 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:06:12.138199 master-0 kubenswrapper[7776]: I0219 03:06:12.138146 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:06:12.829032 master-0 kubenswrapper[7776]: I0219 03:06:12.828833 7776 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff" exitCode=1 Feb 19 03:06:12.829032 master-0 kubenswrapper[7776]: I0219 03:06:12.828884 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff"} Feb 19 03:06:12.829902 master-0 kubenswrapper[7776]: I0219 03:06:12.829632 7776 scope.go:117] "RemoveContainer" containerID="c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff" Feb 19 03:06:12.833190 master-0 kubenswrapper[7776]: I0219 03:06:12.833111 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24"} Feb 19 03:06:12.884089 master-0 kubenswrapper[7776]: I0219 03:06:12.884023 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:06:13.157852 master-0 kubenswrapper[7776]: I0219 03:06:13.157789 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:06:13.158322 master-0 kubenswrapper[7776]: I0219 03:06:13.158293 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2cczk" Feb 19 
03:06:13.207219 master-0 kubenswrapper[7776]: I0219 03:06:13.207167 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:06:13.212135 master-0 kubenswrapper[7776]: I0219 03:06:13.212103 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 19 03:06:13.317954 master-0 kubenswrapper[7776]: I0219 03:06:13.317894 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-kubelet-dir\") pod \"2561caa0-5f79-496e-8fa7-a9692dca20be\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " Feb 19 03:06:13.317954 master-0 kubenswrapper[7776]: I0219 03:06:13.317959 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-var-lock\") pod \"2561caa0-5f79-496e-8fa7-a9692dca20be\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " Feb 19 03:06:13.318226 master-0 kubenswrapper[7776]: I0219 03:06:13.318031 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2561caa0-5f79-496e-8fa7-a9692dca20be-kube-api-access\") pod \"2561caa0-5f79-496e-8fa7-a9692dca20be\" (UID: \"2561caa0-5f79-496e-8fa7-a9692dca20be\") " Feb 19 03:06:13.318797 master-0 kubenswrapper[7776]: I0219 03:06:13.318750 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2561caa0-5f79-496e-8fa7-a9692dca20be" (UID: "2561caa0-5f79-496e-8fa7-a9692dca20be"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:06:13.318797 master-0 kubenswrapper[7776]: I0219 03:06:13.318791 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-var-lock" (OuterVolumeSpecName: "var-lock") pod "2561caa0-5f79-496e-8fa7-a9692dca20be" (UID: "2561caa0-5f79-496e-8fa7-a9692dca20be"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:06:13.321543 master-0 kubenswrapper[7776]: I0219 03:06:13.321505 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2561caa0-5f79-496e-8fa7-a9692dca20be-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2561caa0-5f79-496e-8fa7-a9692dca20be" (UID: "2561caa0-5f79-496e-8fa7-a9692dca20be"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:06:13.419651 master-0 kubenswrapper[7776]: I0219 03:06:13.419570 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2561caa0-5f79-496e-8fa7-a9692dca20be-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:06:13.419651 master-0 kubenswrapper[7776]: I0219 03:06:13.419616 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:06:13.419651 master-0 kubenswrapper[7776]: I0219 03:06:13.419635 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2561caa0-5f79-496e-8fa7-a9692dca20be-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:06:13.842710 master-0 kubenswrapper[7776]: I0219 03:06:13.842360 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 19 03:06:13.857458 master-0 kubenswrapper[7776]: I0219 03:06:13.857386 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"2561caa0-5f79-496e-8fa7-a9692dca20be","Type":"ContainerDied","Data":"d175ae5ada68becfd99d3a7dbdac8119e2b0cc096867b19b4c6fd448c8d63692"} Feb 19 03:06:13.857458 master-0 kubenswrapper[7776]: I0219 03:06:13.857434 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d175ae5ada68becfd99d3a7dbdac8119e2b0cc096867b19b4c6fd448c8d63692" Feb 19 03:06:13.858279 master-0 kubenswrapper[7776]: I0219 03:06:13.858223 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd"} Feb 19 03:06:13.909040 master-0 kubenswrapper[7776]: I0219 03:06:13.908985 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:06:15.720527 master-0 kubenswrapper[7776]: E0219 03:06:15.720358 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:06:05Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:06:05Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:06:05Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:06:05Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"
sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c
b2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86ce6c3977c663ad9ad9a5d627bb08727af38fd3153a0a463a10b534030ee126\\\"],\\\"sizeBytes\\\":438548891},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568\\\"],\\\"sizeBytes\\\":411485245},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0\\\"],\\\"sizeBytes\\\":407241636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229\\\"],\\\"sizeBytes\\\":396420881}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:06:16.059634 master-0 kubenswrapper[7776]: E0219 03:06:16.059395 7776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:06:16.916751 master-0 kubenswrapper[7776]: I0219 03:06:16.916685 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:06:19.585553 master-0 kubenswrapper[7776]: I0219 03:06:19.585471 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:06:20.046232 master-0 kubenswrapper[7776]: I0219 03:06:20.046144 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:06:20.729270 master-0 kubenswrapper[7776]: I0219 03:06:20.729207 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:06:20.770945 master-0 kubenswrapper[7776]: I0219 03:06:20.770900 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:06:21.402038 master-0 kubenswrapper[7776]: I0219 03:06:21.401951 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:06:23.812504 master-0 kubenswrapper[7776]: E0219 03:06:23.812409 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 19 03:06:23.911877 master-0 kubenswrapper[7776]: I0219 03:06:23.911697 7776 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65" exitCode=0 Feb 19 03:06:24.403014 master-0 kubenswrapper[7776]: I0219 03:06:24.402902 7776 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:06:24.922129 master-0 kubenswrapper[7776]: I0219 03:06:24.922048 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/0.log" Feb 19 03:06:24.922129 master-0 kubenswrapper[7776]: I0219 03:06:24.922129 7776 generic.go:334] "Generic (PLEG): container finished" podID="05c9cb4a-5249-4116-a2e5-caa7859e2075" containerID="1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed" exitCode=1 Feb 19 03:06:24.923208 master-0 kubenswrapper[7776]: I0219 03:06:24.922173 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" event={"ID":"05c9cb4a-5249-4116-a2e5-caa7859e2075","Type":"ContainerDied","Data":"1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed"} Feb 19 03:06:24.923208 master-0 kubenswrapper[7776]: I0219 03:06:24.922866 7776 scope.go:117] "RemoveContainer" containerID="1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed" Feb 19 03:06:25.721552 master-0 kubenswrapper[7776]: E0219 03:06:25.721460 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:06:25.930168 master-0 kubenswrapper[7776]: I0219 03:06:25.930105 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/0.log" Feb 19 03:06:25.930711 master-0 kubenswrapper[7776]: I0219 03:06:25.930195 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" event={"ID":"05c9cb4a-5249-4116-a2e5-caa7859e2075","Type":"ContainerStarted","Data":"f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45"} Feb 19 03:06:26.060664 master-0 kubenswrapper[7776]: E0219 03:06:26.060368 7776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 19 03:06:26.740236 master-0 kubenswrapper[7776]: I0219 03:06:26.740192 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log" Feb 19 03:06:26.740620 master-0 kubenswrapper[7776]: I0219 03:06:26.740594 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:06:26.896719 master-0 kubenswrapper[7776]: I0219 03:06:26.896673 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " Feb 19 03:06:26.897051 master-0 kubenswrapper[7776]: I0219 03:06:26.896753 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs" (OuterVolumeSpecName: "certs") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:06:26.897524 master-0 kubenswrapper[7776]: I0219 03:06:26.897483 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " Feb 19 03:06:26.897667 master-0 kubenswrapper[7776]: I0219 03:06:26.897623 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir" (OuterVolumeSpecName: "data-dir") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:06:26.897879 master-0 kubenswrapper[7776]: I0219 03:06:26.897846 7776 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:06:26.897969 master-0 kubenswrapper[7776]: I0219 03:06:26.897880 7776 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:06:26.938231 master-0 kubenswrapper[7776]: I0219 03:06:26.938159 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log" Feb 19 03:06:26.938231 master-0 kubenswrapper[7776]: I0219 03:06:26.938221 7776 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908" exitCode=137 Feb 19 03:06:26.938941 master-0 kubenswrapper[7776]: I0219 03:06:26.938303 7776 scope.go:117] "RemoveContainer" containerID="14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65" Feb 19 03:06:26.938941 master-0 kubenswrapper[7776]: I0219 03:06:26.938333 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:06:26.952752 master-0 kubenswrapper[7776]: I0219 03:06:26.952722 7776 scope.go:117] "RemoveContainer" containerID="04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908" Feb 19 03:06:26.965616 master-0 kubenswrapper[7776]: I0219 03:06:26.965520 7776 scope.go:117] "RemoveContainer" containerID="14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65" Feb 19 03:06:26.966057 master-0 kubenswrapper[7776]: E0219 03:06:26.965973 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65\": container with ID starting with 14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65 not found: ID does not exist" containerID="14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65" Feb 19 03:06:26.966153 master-0 kubenswrapper[7776]: I0219 03:06:26.966056 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65"} err="failed to get container status \"14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65\": rpc error: code = NotFound desc = could not find container \"14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65\": container with ID starting with 14bf8d292aa9af0068948b4d45982ab918480bbc5fedca98140ea90e17c3ef65 not found: ID does not exist" Feb 19 03:06:26.966153 master-0 kubenswrapper[7776]: I0219 03:06:26.966091 7776 scope.go:117] "RemoveContainer" containerID="04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908" Feb 19 03:06:26.966613 master-0 kubenswrapper[7776]: E0219 03:06:26.966556 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908\": container with ID starting with 04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908 not found: ID does not exist" containerID="04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908" Feb 19 03:06:26.966688 master-0 kubenswrapper[7776]: I0219 03:06:26.966612 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908"} err="failed to get container status \"04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908\": rpc error: code = NotFound desc = could not find container \"04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908\": container with ID starting with 04264120a0d805892e203a64f0ea75384f3abfe8611d5edf7837f55be909e908 not found: ID does not exist" Feb 19 03:06:27.848873 master-0 kubenswrapper[7776]: I0219 03:06:27.848811 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12dab5d350ebc129b0bfa4714d330b15" path="/var/lib/kubelet/pods/12dab5d350ebc129b0bfa4714d330b15/volumes" Feb 19 03:06:27.849212 master-0 kubenswrapper[7776]: I0219 03:06:27.849188 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 19 03:06:30.609897 master-0 kubenswrapper[7776]: E0219 03:06:30.609677 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189586e7572280e7 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:05:56.597604583 +0000 UTC m=+62.937289101,LastTimestamp:2026-02-19 03:05:56.597604583 +0000 UTC m=+62.937289101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:06:34.403227 master-0 kubenswrapper[7776]: I0219 03:06:34.403111 7776 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:06:35.297484 master-0 kubenswrapper[7776]: I0219 03:06:35.297381 7776 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-r7r6p container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Feb 19 03:06:35.297484 master-0 kubenswrapper[7776]: I0219 03:06:35.297456 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" podUID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Feb 19 03:06:35.721847 master-0 kubenswrapper[7776]: E0219 03:06:35.721714 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Feb 19 03:06:36.061391 master-0 kubenswrapper[7776]: E0219 03:06:36.061139 7776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:06:36.920897 master-0 kubenswrapper[7776]: E0219 03:06:36.920790 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 19 03:06:38.000280 master-0 kubenswrapper[7776]: I0219 03:06:38.000160 7776 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="baf56418e5f8bfbb1b0b3b62a17157021582596ac9b77253725abcedbc9830bb" exitCode=0 Feb 19 03:06:39.006861 master-0 kubenswrapper[7776]: I0219 03:06:39.006775 7776 generic.go:334] "Generic (PLEG): container finished" podID="6c9ed390-3b62-4b81-8c03-0c579a4a686a" containerID="24791f1c363b144877c645c4f1432f887b6ed95f1fe6b262a78611e4e7415851" exitCode=0 Feb 19 03:06:44.402786 master-0 kubenswrapper[7776]: I0219 03:06:44.402679 7776 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" 
probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:06:45.722740 master-0 kubenswrapper[7776]: E0219 03:06:45.722623 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:06:46.061932 master-0 kubenswrapper[7776]: E0219 03:06:46.061767 7776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:06:50.063544 master-0 kubenswrapper[7776]: I0219 03:06:50.063462 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/0.log" Feb 19 03:06:50.063544 master-0 kubenswrapper[7776]: I0219 03:06:50.063517 7776 generic.go:334] "Generic (PLEG): container finished" podID="c791d8d0-6d78-4cdc-bac2-aa39bd3aae21" containerID="8b3bceeaced74d609ab5cae3f8bcf4b942c0f6e35aacd59b863ae5c7bc32a8c0" exitCode=255 Feb 19 03:06:52.078207 master-0 kubenswrapper[7776]: I0219 03:06:52.078156 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rm5jg_a52be87c-e707-4269-96da-537708d52b64/approver/0.log" Feb 19 03:06:52.079093 master-0 kubenswrapper[7776]: I0219 03:06:52.079058 7776 generic.go:334] "Generic (PLEG): container finished" podID="a52be87c-e707-4269-96da-537708d52b64" containerID="f6706a38252937f6734b664a0f078763a45b428cf03e52f78ca141868385452d" exitCode=1 Feb 19 03:06:54.090136 master-0 kubenswrapper[7776]: I0219 03:06:54.090056 7776 generic.go:334] "Generic (PLEG): container finished" podID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerID="86c664ab293aa817dc19559e0b69114daede98d8ba6acf0a72b18f40ca2b5774" exitCode=0 Feb 19 03:06:55.097157 master-0 kubenswrapper[7776]: I0219 03:06:55.097077 7776 generic.go:334] "Generic (PLEG): container finished" podID="4714ef51-2d24-4938-8c58-80c1485a368b" containerID="336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb" exitCode=0 Feb 19 03:06:55.723989 master-0 kubenswrapper[7776]: E0219 03:06:55.723837 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:06:55.723989 master-0 kubenswrapper[7776]: E0219 03:06:55.723931 7776 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 03:06:56.063230 master-0 kubenswrapper[7776]: E0219 03:06:56.063090 7776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:06:56.063230 master-0 kubenswrapper[7776]: I0219 03:06:56.063161 7776 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 03:06:59.118812 
master-0 kubenswrapper[7776]: I0219 03:06:59.118722 7776 generic.go:334] "Generic (PLEG): container finished" podID="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" containerID="5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2" exitCode=0 Feb 19 03:07:01.852493 master-0 kubenswrapper[7776]: E0219 03:07:01.852429 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:07:01.853684 master-0 kubenswrapper[7776]: E0219 03:07:01.852634 7776 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.01s" Feb 19 03:07:01.855329 master-0 kubenswrapper[7776]: I0219 03:07:01.855222 7776 scope.go:117] "RemoveContainer" containerID="f6706a38252937f6734b664a0f078763a45b428cf03e52f78ca141868385452d" Feb 19 03:07:01.861855 master-0 kubenswrapper[7776]: I0219 03:07:01.861785 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 19 03:07:02.135415 master-0 kubenswrapper[7776]: I0219 03:07:02.135323 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rm5jg_a52be87c-e707-4269-96da-537708d52b64/approver/0.log" Feb 19 03:07:04.149165 master-0 kubenswrapper[7776]: I0219 03:07:04.149062 7776 generic.go:334] "Generic (PLEG): container finished" podID="3edc7410-417a-4e55-9276-ac271fd52297" containerID="617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc" exitCode=0 Feb 19 03:07:04.415427 master-0 kubenswrapper[7776]: I0219 03:07:04.415308 7776 status_manager.go:851] "Failed to get status for pod" podUID="aba1213d-8a7d-4b99-857f-b66578cc2bec" pod="openshift-kube-scheduler/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Feb 19 03:07:04.612552 master-0 kubenswrapper[7776]: E0219 03:07:04.612413 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{controller-manager-7d4cccb57c-sfb9j.189586e926d4cb29 openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-7d4cccb57c-sfb9j,UID:92b9ea7b-01b1-48f8-a392-12200f55502e,APIVersion:v1,ResourceVersion:7358,FieldPath:spec.containers{controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\" in 18.568s (18.568s including waiting). 
Image size: 558105176 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:06:04.377140009 +0000 UTC m=+70.716824567,LastTimestamp:2026-02-19 03:06:04.377140009 +0000 UTC m=+70.716824567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:07:04.927751 master-0 kubenswrapper[7776]: E0219 03:07:04.927700 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 03:07:04.927751 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_66b05aeb-22a8-4008-a582-072f63cc46bf_0(5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0" Netns:"/var/run/netns/e1d1a539-f255-4bbd-b242-4023794701d2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0;K8S_POD_UID=66b05aeb-22a8-4008-a582-072f63cc46bf" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/66b05aeb-22a8-4008-a582-072f63cc46bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:07:04.927751 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:04.927751 master-0 kubenswrapper[7776]: > Feb 19 03:07:04.927933 master-0 kubenswrapper[7776]: E0219 03:07:04.927790 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 03:07:04.927933 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_66b05aeb-22a8-4008-a582-072f63cc46bf_0(5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0" Netns:"/var/run/netns/e1d1a539-f255-4bbd-b242-4023794701d2" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0;K8S_POD_UID=66b05aeb-22a8-4008-a582-072f63cc46bf" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/66b05aeb-22a8-4008-a582-072f63cc46bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:07:04.927933 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:04.927933 master-0 kubenswrapper[7776]: > pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:07:04.927933 master-0 kubenswrapper[7776]: E0219 03:07:04.927817 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:07:04.927933 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_66b05aeb-22a8-4008-a582-072f63cc46bf_0(5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0" Netns:"/var/run/netns/e1d1a539-f255-4bbd-b242-4023794701d2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0;K8S_POD_UID=66b05aeb-22a8-4008-a582-072f63cc46bf" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/66b05aeb-22a8-4008-a582-072f63cc46bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:07:04.927933 master-0 kubenswrapper[7776]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:04.927933 master-0 kubenswrapper[7776]: > pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:07:04.927933 master-0 kubenswrapper[7776]: E0219 03:07:04.927893 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-4-master-0_openshift-kube-scheduler(66b05aeb-22a8-4008-a582-072f63cc46bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-4-master-0_openshift-kube-scheduler(66b05aeb-22a8-4008-a582-072f63cc46bf)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_66b05aeb-22a8-4008-a582-072f63cc46bf_0(5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0\\\" Netns:\\\"/var/run/netns/e1d1a539-f255-4bbd-b242-4023794701d2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0;K8S_POD_UID=66b05aeb-22a8-4008-a582-072f63cc46bf\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/66b05aeb-22a8-4008-a582-072f63cc46bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-scheduler/installer-4-master-0" podUID="66b05aeb-22a8-4008-a582-072f63cc46bf" Feb 19 03:07:05.154492 master-0 kubenswrapper[7776]: I0219 03:07:05.154440 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:07:05.155093 master-0 kubenswrapper[7776]: I0219 03:07:05.154908 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: E0219 03:07:05.176153 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97" Netns:"/var/run/netns/a10eb08b-9f18-49a5-ad51-96fb965e0151" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: unexpected error when reading response body. Please retry. 
Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body) Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: > Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: E0219 03:07:05.176225 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97" Netns:"/var/run/netns/a10eb08b-9f18-49a5-ad51-96fb965e0151" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: unexpected error when reading response body. Please retry. 
Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body) Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: > pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: E0219 03:07:05.176245 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97" Netns:"/var/run/netns/a10eb08b-9f18-49a5-ad51-96fb965e0151" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: unexpected error when reading response body. Please retry. 
Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body) Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: > pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:07:05.182436 master-0 kubenswrapper[7776]: E0219 03:07:05.176320 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api(0664d88f-f697-4182-93cd-f208ff6f3ac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api(0664d88f-f697-4182-93cd-f208ff6f3ac2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97\\\" Netns:\\\"/var/run/netns/a10eb08b-9f18-49a5-ad51-96fb965e0151\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: unexpected error when reading response body. Please retry. 
Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" podUID="0664d88f-f697-4182-93cd-f208ff6f3ac2" Feb 19 03:07:05.341941 master-0 kubenswrapper[7776]: E0219 03:07:05.341874 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 03:07:05.341941 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4_0(defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb" Netns:"/var/run/netns/51f863ae-bb6e-4150-afb9-8d7418b17979" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb;K8S_POD_UID=d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:07:05.341941 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:05.341941 master-0 kubenswrapper[7776]: > Feb 19 03:07:05.341941 master-0 kubenswrapper[7776]: E0219 03:07:05.341943 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 03:07:05.341941 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4_0(defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed 
(add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb" Netns:"/var/run/netns/51f863ae-bb6e-4150-afb9-8d7418b17979" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb;K8S_POD_UID=d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:07:05.341941 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:05.341941 master-0 kubenswrapper[7776]: > pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:07:05.342343 master-0 kubenswrapper[7776]: E0219 03:07:05.341962 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:07:05.342343 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4_0(defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb" Netns:"/var/run/netns/51f863ae-bb6e-4150-afb9-8d7418b17979" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb;K8S_POD_UID=d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:07:05.342343 master-0 kubenswrapper[7776]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:05.342343 master-0 kubenswrapper[7776]: > pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:07:05.342343 master-0 kubenswrapper[7776]: E0219 03:07:05.342016 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4_0(defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb\\\" Netns:\\\"/var/run/netns/51f863ae-bb6e-4150-afb9-8d7418b17979\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb;K8S_POD_UID=d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" Feb 19 03:07:05.359029 master-0 kubenswrapper[7776]: E0219 03:07:05.358949 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 03:07:05.359029 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_1bddb3a1-41bd-4314-bfb0-3c72ca14200f_0(9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415): error adding pod 
openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415" Netns:"/var/run/netns/b1b09fe3-323f-484f-a83d-102558ae899f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415;K8S_POD_UID=1bddb3a1-41bd-4314-bfb0-3c72ca14200f" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/1bddb3a1-41bd-4314-bfb0-3c72ca14200f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:07:05.359029 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:05.359029 master-0 kubenswrapper[7776]: > Feb 19 03:07:05.359189 master-0 kubenswrapper[7776]: E0219 03:07:05.359059 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 03:07:05.359189 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_1bddb3a1-41bd-4314-bfb0-3c72ca14200f_0(9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415" Netns:"/var/run/netns/b1b09fe3-323f-484f-a83d-102558ae899f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415;K8S_POD_UID=1bddb3a1-41bd-4314-bfb0-3c72ca14200f" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/1bddb3a1-41bd-4314-bfb0-3c72ca14200f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:07:05.359189 master-0 kubenswrapper[7776]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:05.359189 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:07:05.359189 master-0 kubenswrapper[7776]: E0219 03:07:05.359097 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:07:05.359189 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_1bddb3a1-41bd-4314-bfb0-3c72ca14200f_0(9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415" Netns:"/var/run/netns/b1b09fe3-323f-484f-a83d-102558ae899f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415;K8S_POD_UID=1bddb3a1-41bd-4314-bfb0-3c72ca14200f" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/1bddb3a1-41bd-4314-bfb0-3c72ca14200f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:07:05.359189 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:07:05.359189 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:07:05.359418 master-0 kubenswrapper[7776]: E0219 03:07:05.359189 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-1-master-0_openshift-kube-apiserver(1bddb3a1-41bd-4314-bfb0-3c72ca14200f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-1-master-0_openshift-kube-apiserver(1bddb3a1-41bd-4314-bfb0-3c72ca14200f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_1bddb3a1-41bd-4314-bfb0-3c72ca14200f_0(9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:\\\"9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415\\\" Netns:\\\"/var/run/netns/b1b09fe3-323f-484f-a83d-102558ae899f\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415;K8S_POD_UID=1bddb3a1-41bd-4314-bfb0-3c72ca14200f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/1bddb3a1-41bd-4314-bfb0-3c72ca14200f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-1-master-0" podUID="1bddb3a1-41bd-4314-bfb0-3c72ca14200f" Feb 19 03:07:06.063556 master-0 kubenswrapper[7776]: E0219 03:07:06.063461 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Feb 19 03:07:06.160525 master-0 kubenswrapper[7776]: I0219 03:07:06.160459 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:07:06.161323 master-0 kubenswrapper[7776]: I0219 03:07:06.160561 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:07:06.161323 master-0 kubenswrapper[7776]: I0219 03:07:06.160474 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:07:06.161323 master-0 kubenswrapper[7776]: I0219 03:07:06.160927 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:07:06.161323 master-0 kubenswrapper[7776]: I0219 03:07:06.161021 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:07:06.161323 master-0 kubenswrapper[7776]: I0219 03:07:06.161214 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:07:14.236527 master-0 kubenswrapper[7776]: I0219 03:07:14.236306 7776 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24" exitCode=1 Feb 19 03:07:15.964755 master-0 kubenswrapper[7776]: E0219 03:07:15.964567 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:07:05Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:07:05Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:07:05Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:07:05Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0dcba5d04f25f6e382ffecdd94057bd8a99cffb6a00a8c7da186e9871ae459ea\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:92f996986deaacc20f2d7929be6465ef80f234c7c73757735ab489489ad69464\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702667973},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:01d70013efcb6bd53533de62b00867982cc8cfd7ea2bcc920f1a89ec9a1e0a93\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3d25e25fd688987cf457312a70060e31c5091a30a7d4b691cf7e566c69fa51f4\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234172623},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2f02611c935b387581e1c3be693869fdf266797ea7c5bcb704c0b6e7d0a6f12f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:f92684229a0699b57eaf06ea192bcde396a4e401a7bf7726499b7edac566dac8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210130107},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508
786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136}]}}\" for node \"master-0\": Patch 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:07:16.266040 master-0 kubenswrapper[7776]: E0219 03:07:16.265809 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 19 03:07:25.965866 master-0 kubenswrapper[7776]: E0219 03:07:25.965782 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:07:26.306690 master-0 kubenswrapper[7776]: I0219 03:07:26.306564 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/0.log" Feb 19 03:07:26.306690 master-0 kubenswrapper[7776]: I0219 03:07:26.306606 7776 generic.go:334] "Generic (PLEG): container finished" podID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" containerID="48896fb51d13a46ede8e9679a55d5198adfa5eeb4a91ae305507c9b4bf39a65b" exitCode=1 Feb 19 03:07:26.668540 master-0 kubenswrapper[7776]: E0219 03:07:26.668391 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 19 03:07:35.296867 master-0 kubenswrapper[7776]: I0219 03:07:35.296752 7776 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-r7r6p container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Feb 19 03:07:35.297483 master-0 kubenswrapper[7776]: I0219 03:07:35.296897 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" podUID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Feb 19 03:07:35.865101 master-0 kubenswrapper[7776]: E0219 03:07:35.864993 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:07:35.865450 master-0 kubenswrapper[7776]: E0219 03:07:35.865243 7776 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.012s" Feb 19 03:07:35.877582 master-0 kubenswrapper[7776]: I0219 03:07:35.877473 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 19 03:07:35.967218 master-0 kubenswrapper[7776]: E0219 03:07:35.967079 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 
03:07:37.470989 master-0 kubenswrapper[7776]: E0219 03:07:37.470568 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 19 03:07:38.615417 master-0 kubenswrapper[7776]: E0219 03:07:38.615135 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-2cczk.189586e92c7cf2f3 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-2cczk,UID:30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0,APIVersion:v1,ResourceVersion:7289,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/community-operator-index:v4.18\" in 19.889s (19.889s including waiting). Image size: 1210130107 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:06:04.472046323 +0000 UTC m=+70.811730841,LastTimestamp:2026-02-19 03:06:04.472046323 +0000 UTC m=+70.811730841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:07:44.404012 master-0 kubenswrapper[7776]: I0219 03:07:44.403898 7776 generic.go:334] "Generic (PLEG): container finished" podID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerID="a2bdec17dc1089972433ebc1bc1c16d0f4ac7fa020f8058705381c276b86bced" exitCode=0 Feb 19 03:07:45.967687 master-0 kubenswrapper[7776]: E0219 03:07:45.967597 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:07:46.901021 master-0 kubenswrapper[7776]: I0219 03:07:46.900951 7776 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-xxdh5 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 19 03:07:46.901282 master-0 kubenswrapper[7776]: I0219 03:07:46.901039 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 19 03:07:46.901371 master-0 kubenswrapper[7776]: I0219 03:07:46.901327 7776 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-xxdh5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 19 03:07:46.901525 master-0 kubenswrapper[7776]: I0219 03:07:46.901432 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerName="marketplace-operator" probeResult="failure" 
output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 19 03:07:49.072035 master-0 kubenswrapper[7776]: E0219 03:07:49.071931 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 19 03:07:55.464456 master-0 kubenswrapper[7776]: I0219 03:07:55.464362 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/manager/0.log" Feb 19 03:07:55.465717 master-0 kubenswrapper[7776]: I0219 03:07:55.465667 7776 generic.go:334] "Generic (PLEG): container finished" podID="7012676e-f35d-46e5-83e8-a63172dd076e" containerID="63378086041fcb0de956f1a5a160faad6c0e85b100c25eacbce569a26a79079c" exitCode=1 Feb 19 03:07:55.968447 master-0 kubenswrapper[7776]: E0219 03:07:55.968302 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:07:55.968447 master-0 kubenswrapper[7776]: E0219 03:07:55.968405 7776 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 03:07:56.476475 master-0 kubenswrapper[7776]: I0219 03:07:56.476416 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-s559q_8f7d8fc8-c313-416f-b62b-b54db9944066/manager/0.log" Feb 19 03:07:56.476475 master-0 kubenswrapper[7776]: I0219 03:07:56.476469 7776 generic.go:334] "Generic (PLEG): container finished" podID="8f7d8fc8-c313-416f-b62b-b54db9944066" containerID="63e9da7bba52316e4ecf529d81e030bb4b7c5317fbd6fe3da25ae598ba0cf3f5" exitCode=1 Feb 19 03:07:56.478208 master-0 kubenswrapper[7776]: I0219 03:07:56.478163 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/1.log" Feb 19 03:07:56.479214 master-0 kubenswrapper[7776]: I0219 03:07:56.479165 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/0.log" Feb 19 03:07:56.479332 master-0 kubenswrapper[7776]: I0219 03:07:56.479233 7776 generic.go:334] "Generic (PLEG): container finished" podID="05c9cb4a-5249-4116-a2e5-caa7859e2075" containerID="f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45" exitCode=255 Feb 19 03:07:56.900609 master-0 kubenswrapper[7776]: I0219 03:07:56.900488 7776 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-xxdh5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 19 03:07:56.900609 master-0 kubenswrapper[7776]: I0219 03:07:56.900497 7776 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-xxdh5 container/marketplace-operator namespace/openshift-marketplace: 
Liveness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 19 03:07:56.900609 master-0 kubenswrapper[7776]: I0219 03:07:56.900571 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 19 03:07:56.900609 master-0 kubenswrapper[7776]: I0219 03:07:56.900568 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 19 03:08:00.969316 master-0 kubenswrapper[7776]: I0219 03:08:00.969202 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:08:00.969316 master-0 kubenswrapper[7776]: I0219 03:08:00.969295 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:08:00.973601 master-0 kubenswrapper[7776]: I0219 03:08:00.969314 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:08:00.973601 master-0 kubenswrapper[7776]: I0219 03:08:00.969400 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:08:02.272829 master-0 kubenswrapper[7776]: E0219 03:08:02.272695 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 03:08:02.491373 master-0 kubenswrapper[7776]: I0219 03:08:02.491213 7776 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-s559q container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" start-of-body= Feb 19 03:08:02.491587 master-0 kubenswrapper[7776]: I0219 03:08:02.491391 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" podUID="8f7d8fc8-c313-416f-b62b-b54db9944066" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 
10.128.0.45:8081: connect: connection refused" Feb 19 03:08:02.491587 master-0 kubenswrapper[7776]: I0219 03:08:02.491440 7776 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-s559q container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.45:8081/healthz\": dial tcp 10.128.0.45:8081: connect: connection refused" start-of-body= Feb 19 03:08:02.491587 master-0 kubenswrapper[7776]: I0219 03:08:02.491508 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" podUID="8f7d8fc8-c313-416f-b62b-b54db9944066" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.45:8081/healthz\": dial tcp 10.128.0.45:8081: connect: connection refused" Feb 19 03:08:04.418866 master-0 kubenswrapper[7776]: I0219 03:08:04.418790 7776 status_manager.go:851] "Failed to get status for pod" podUID="e66ac991-af58-490b-8909-e518d301e1b8" pod="openshift-kube-controller-manager/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Feb 19 03:08:05.972661 master-0 kubenswrapper[7776]: E0219 03:08:05.972543 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 03:08:05.972661 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_66b05aeb-22a8-4008-a582-072f63cc46bf_0(33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed" Netns:"/var/run/netns/0a33b13c-179e-491a-9eab-2e76b6c979eb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed;K8S_POD_UID=66b05aeb-22a8-4008-a582-072f63cc46bf" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/66b05aeb-22a8-4008-a582-072f63cc46bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:05.972661 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:05.972661 master-0 kubenswrapper[7776]: > Feb 19 03:08:05.973749 master-0 kubenswrapper[7776]: E0219 03:08:05.972766 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 
19 03:08:05.973749 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_66b05aeb-22a8-4008-a582-072f63cc46bf_0(33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed" Netns:"/var/run/netns/0a33b13c-179e-491a-9eab-2e76b6c979eb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed;K8S_POD_UID=66b05aeb-22a8-4008-a582-072f63cc46bf" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/66b05aeb-22a8-4008-a582-072f63cc46bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:05.973749 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:05.973749 master-0 kubenswrapper[7776]: > pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:08:05.973749 master-0 kubenswrapper[7776]: E0219 03:08:05.972805 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:08:05.973749 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_66b05aeb-22a8-4008-a582-072f63cc46bf_0(33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed" Netns:"/var/run/netns/0a33b13c-179e-491a-9eab-2e76b6c979eb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed;K8S_POD_UID=66b05aeb-22a8-4008-a582-072f63cc46bf" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/66b05aeb-22a8-4008-a582-072f63cc46bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update 
failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:05.973749 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:05.973749 master-0 kubenswrapper[7776]: > pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:08:05.973749 master-0 kubenswrapper[7776]: E0219 03:08:05.972944 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-4-master-0_openshift-kube-scheduler(66b05aeb-22a8-4008-a582-072f63cc46bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-4-master-0_openshift-kube-scheduler(66b05aeb-22a8-4008-a582-072f63cc46bf)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_66b05aeb-22a8-4008-a582-072f63cc46bf_0(33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed\\\" Netns:\\\"/var/run/netns/0a33b13c-179e-491a-9eab-2e76b6c979eb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed;K8S_POD_UID=66b05aeb-22a8-4008-a582-072f63cc46bf\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/66b05aeb-22a8-4008-a582-072f63cc46bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-scheduler/installer-4-master-0" podUID="66b05aeb-22a8-4008-a582-072f63cc46bf" Feb 19 03:08:06.806031 master-0 kubenswrapper[7776]: E0219 03:08:06.805942 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 03:08:06.806031 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to 
create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_1bddb3a1-41bd-4314-bfb0-3c72ca14200f_0(967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f" Netns:"/var/run/netns/022b6cc3-732e-4cd5-a252-382db37429c5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f;K8S_POD_UID=1bddb3a1-41bd-4314-bfb0-3c72ca14200f" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/1bddb3a1-41bd-4314-bfb0-3c72ca14200f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:06.806031 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:06.806031 master-0 kubenswrapper[7776]: > Feb 19 03:08:06.806463 master-0 kubenswrapper[7776]: E0219 03:08:06.806069 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 03:08:06.806463 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_1bddb3a1-41bd-4314-bfb0-3c72ca14200f_0(967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f" Netns:"/var/run/netns/022b6cc3-732e-4cd5-a252-382db37429c5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f;K8S_POD_UID=1bddb3a1-41bd-4314-bfb0-3c72ca14200f" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/1bddb3a1-41bd-4314-bfb0-3c72ca14200f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:06.806463 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:06.806463 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:08:06.806463 master-0 kubenswrapper[7776]: E0219 03:08:06.806126 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:08:06.806463 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_1bddb3a1-41bd-4314-bfb0-3c72ca14200f_0(967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f" Netns:"/var/run/netns/022b6cc3-732e-4cd5-a252-382db37429c5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f;K8S_POD_UID=1bddb3a1-41bd-4314-bfb0-3c72ca14200f" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/1bddb3a1-41bd-4314-bfb0-3c72ca14200f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:06.806463 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:06.806463 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:08:06.806463 master-0 kubenswrapper[7776]: E0219 03:08:06.806305 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-1-master-0_openshift-kube-apiserver(1bddb3a1-41bd-4314-bfb0-3c72ca14200f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-1-master-0_openshift-kube-apiserver(1bddb3a1-41bd-4314-bfb0-3c72ca14200f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_1bddb3a1-41bd-4314-bfb0-3c72ca14200f_0(967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f): error adding pod openshift-kube-apiserver_installer-1-master-0 
to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f\\\" Netns:\\\"/var/run/netns/022b6cc3-732e-4cd5-a252-382db37429c5\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f;K8S_POD_UID=1bddb3a1-41bd-4314-bfb0-3c72ca14200f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/1bddb3a1-41bd-4314-bfb0-3c72ca14200f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-1-master-0" podUID="1bddb3a1-41bd-4314-bfb0-3c72ca14200f" Feb 19 03:08:06.907090 master-0 kubenswrapper[7776]: I0219 03:08:06.900812 7776 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-xxdh5 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 19 03:08:06.907090 master-0 kubenswrapper[7776]: I0219 03:08:06.900903 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 19 03:08:06.907090 master-0 kubenswrapper[7776]: I0219 03:08:06.900925 7776 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-xxdh5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 19 03:08:06.907090 master-0 kubenswrapper[7776]: I0219 03:08:06.901020 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 19 03:08:06.967477 master-0 kubenswrapper[7776]: E0219 03:08:06.967400 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 
03:08:06.967477 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67" Netns:"/var/run/netns/fb72be9f-df85-41a8-b03b-a1b4810c9174" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-686847ff5f-xbcf5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:06.967477 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:06.967477 master-0 kubenswrapper[7776]: > Feb 19 03:08:06.967694 master-0 kubenswrapper[7776]: E0219 03:08:06.967498 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 03:08:06.967694 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67" Netns:"/var/run/netns/fb72be9f-df85-41a8-b03b-a1b4810c9174" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: 
[openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-686847ff5f-xbcf5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:06.967694 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:06.967694 master-0 kubenswrapper[7776]: > pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:08:06.967694 master-0 kubenswrapper[7776]: E0219 03:08:06.967524 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:08:06.967694 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67" Netns:"/var/run/netns/fb72be9f-df85-41a8-b03b-a1b4810c9174" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-686847ff5f-xbcf5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:06.967694 master-0 kubenswrapper[7776]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:06.967694 master-0 kubenswrapper[7776]: > pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:08:06.967694 master-0 kubenswrapper[7776]: E0219 03:08:06.967614 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api(0664d88f-f697-4182-93cd-f208ff6f3ac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api(0664d88f-f697-4182-93cd-f208ff6f3ac2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67\\\" Netns:\\\"/var/run/netns/fb72be9f-df85-41a8-b03b-a1b4810c9174\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-686847ff5f-xbcf5?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" podUID="0664d88f-f697-4182-93cd-f208ff6f3ac2" Feb 19 03:08:06.985171 master-0 kubenswrapper[7776]: E0219 03:08:06.985072 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 03:08:06.985171 master-0 
kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4_0(8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28" Netns:"/var/run/netns/c15c223d-6b03-4ed7-8eaf-67b2dd54ad96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28;K8S_POD_UID=d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:06.985171 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:06.985171 master-0 kubenswrapper[7776]: > Feb 19 03:08:06.985418 master-0 kubenswrapper[7776]: E0219 03:08:06.985215 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 03:08:06.985418 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4_0(8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28" Netns:"/var/run/netns/c15c223d-6b03-4ed7-8eaf-67b2dd54ad96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28;K8S_POD_UID=d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster 
comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:06.985418 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:06.985418 master-0 kubenswrapper[7776]: > pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:08:06.985798 master-0 kubenswrapper[7776]: E0219 03:08:06.985743 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:08:06.985798 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4_0(8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28" Netns:"/var/run/netns/c15c223d-6b03-4ed7-8eaf-67b2dd54ad96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28;K8S_POD_UID=d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:08:06.985798 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:08:06.985798 master-0 kubenswrapper[7776]: > pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:08:06.985997 master-0 kubenswrapper[7776]: E0219 03:08:06.985895 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4)\\\": rpc error: code = 
Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4_0(8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28\\\" Netns:\\\"/var/run/netns/c15c223d-6b03-4ed7-8eaf-67b2dd54ad96\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28;K8S_POD_UID=d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" Feb 19 03:08:08.542636 master-0 kubenswrapper[7776]: I0219 03:08:08.542513 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/0.log" Feb 19 03:08:08.542636 master-0 kubenswrapper[7776]: I0219 03:08:08.542583 7776 generic.go:334] "Generic (PLEG): container finished" podID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" containerID="9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb" exitCode=1 Feb 19 03:08:09.880579 master-0 kubenswrapper[7776]: E0219 03:08:09.880484 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:08:09.881460 master-0 kubenswrapper[7776]: E0219 03:08:09.880708 7776 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.015s" Feb 19 03:08:09.881460 master-0 kubenswrapper[7776]: I0219 03:08:09.880740 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:08:09.881460 master-0 kubenswrapper[7776]: I0219 03:08:09.880789 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:08:09.882298 master-0 kubenswrapper[7776]: I0219 03:08:09.882067 7776 scope.go:117] "RemoveContainer" containerID="a2bdec17dc1089972433ebc1bc1c16d0f4ac7fa020f8058705381c276b86bced" Feb 19 03:08:09.883805 master-0 kubenswrapper[7776]: I0219 03:08:09.883020 7776 scope.go:117] "RemoveContainer" containerID="24791f1c363b144877c645c4f1432f887b6ed95f1fe6b262a78611e4e7415851" Feb 19 03:08:09.883805 master-0 kubenswrapper[7776]: I0219 03:08:09.883617 7776 scope.go:117] "RemoveContainer" containerID="617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc" Feb 19 03:08:09.883805 master-0 kubenswrapper[7776]: I0219 03:08:09.883713 7776 scope.go:117] "RemoveContainer" containerID="63e9da7bba52316e4ecf529d81e030bb4b7c5317fbd6fe3da25ae598ba0cf3f5" Feb 19 03:08:09.884778 master-0 kubenswrapper[7776]: I0219 03:08:09.884348 7776 scope.go:117] "RemoveContainer" containerID="86c664ab293aa817dc19559e0b69114daede98d8ba6acf0a72b18f40ca2b5774" Feb 19 03:08:09.884778 master-0 kubenswrapper[7776]: I0219 03:08:09.884482 7776 scope.go:117] "RemoveContainer" containerID="336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb" Feb 19 03:08:09.896108 master-0 kubenswrapper[7776]: I0219 03:08:09.894210 7776 scope.go:117] "RemoveContainer" containerID="f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45" Feb 19 03:08:09.898132 master-0 kubenswrapper[7776]: I0219 03:08:09.898013 7776 scope.go:117] "RemoveContainer" containerID="8b3bceeaced74d609ab5cae3f8bcf4b942c0f6e35aacd59b863ae5c7bc32a8c0" Feb 19 03:08:09.898656 master-0 kubenswrapper[7776]: I0219 03:08:09.898588 7776 scope.go:117] "RemoveContainer" containerID="bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24" Feb 19 03:08:09.899517 master-0 kubenswrapper[7776]: I0219 03:08:09.899324 7776 scope.go:117] "RemoveContainer" containerID="5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2" Feb 19 03:08:09.901657 master-0 kubenswrapper[7776]: I0219 03:08:09.901407 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 19 03:08:10.561826 master-0 kubenswrapper[7776]: I0219 03:08:10.561762 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-s559q_8f7d8fc8-c313-416f-b62b-b54db9944066/manager/0.log" Feb 19 03:08:10.564383 master-0 kubenswrapper[7776]: I0219 03:08:10.564284 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/1.log" Feb 19 03:08:10.564984 master-0 kubenswrapper[7776]: I0219 03:08:10.564944 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/0.log" Feb 19 03:08:10.571600 master-0 kubenswrapper[7776]: I0219 03:08:10.571523 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/0.log" Feb 19 03:08:10.969381 master-0 kubenswrapper[7776]: I0219 03:08:10.969315 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness 
probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:08:10.970561 master-0 kubenswrapper[7776]: I0219 03:08:10.970485 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:08:12.618001 master-0 kubenswrapper[7776]: E0219 03:08:12.617815 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{certified-operators-9h524.189586e92df36aa8 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-9h524,UID:9789abc0-e82f-4d1a-ba50-faf0075d9139,APIVersion:v1,ResourceVersion:7086,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 20.923s (20.923s including waiting). Image size: 1234172623 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:06:04.496587432 +0000 UTC m=+70.836271950,LastTimestamp:2026-02-19 03:06:04.496587432 +0000 UTC m=+70.836271950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:08:16.111600 master-0 kubenswrapper[7776]: E0219 03:08:16.111380 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:08:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:08:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:08:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:08:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0dcba5d04f25f6e382ffecdd94057bd8a99cffb6a00a8c7da186e9871ae459ea\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:92f996986deaacc20f2d7929be6465ef80f234c7c73757735ab489489ad69464\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702667973},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:01d70013efcb6bd53533de62b00867982cc8cfd7ea2bcc920f1a89ec9a1e0a93\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3d25e25fd688987cf457312a70060e31c5091a30a7d4b691cf7e566c69fa51f4\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234172623},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2f02611c935b387581e1c3be693869fdf266797ea7c5bcb704c0b6e7d0a6f12f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:f92684229a0699b57eaf06ea192bcde396a4e401a7bf7726499b7edac566dac8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210130107},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bc
a2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": context deadline exceeded" Feb 19 03:08:18.673773 master-0 kubenswrapper[7776]: E0219 03:08:18.673634 7776 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:08:20.969293 master-0 kubenswrapper[7776]: I0219 03:08:20.969113 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:08:20.969293 master-0 kubenswrapper[7776]: I0219 03:08:20.969180 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:08:20.969293 master-0 kubenswrapper[7776]: I0219 03:08:20.969239 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:08:20.970215 master-0 kubenswrapper[7776]: I0219 03:08:20.969318 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:08:22.907516 master-0 kubenswrapper[7776]: E0219 03:08:22.907403 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 19 03:08:23.665273 master-0 kubenswrapper[7776]: I0219 03:08:23.665193 7776 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="046d70c2b21433494090acc4c51a4da67355986430805c8b776a5852975555f0" exitCode=0 Feb 19 03:08:26.112633 master-0 kubenswrapper[7776]: E0219 03:08:26.112397 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:08:30.715095 master-0 kubenswrapper[7776]: I0219 03:08:30.715032 7776 generic.go:334] "Generic (PLEG): container finished" podID="15a571c6-7c47-4b57-bc5b-e46544a114c8" containerID="0f3766857d0863e0c7bf5650275239873c534f3ae3d01d3445961163b616988a" exitCode=0 Feb 19 03:08:30.969506 master-0 kubenswrapper[7776]: I0219 03:08:30.969319 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:08:30.969506 master-0 kubenswrapper[7776]: I0219 03:08:30.969396 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" 
probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:08:35.675183 master-0 kubenswrapper[7776]: E0219 03:08:35.675092 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:08:36.113448 master-0 kubenswrapper[7776]: E0219 03:08:36.113228 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:08:40.968566 master-0 kubenswrapper[7776]: I0219 03:08:40.968483 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:08:40.969292 master-0 kubenswrapper[7776]: I0219 03:08:40.968570 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:08:40.969292 master-0 kubenswrapper[7776]: I0219 03:08:40.968627 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:08:40.969292 master-0 kubenswrapper[7776]: I0219 03:08:40.968720 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:08:43.904450 master-0 kubenswrapper[7776]: E0219 03:08:43.904352 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:08:43.905204 master-0 kubenswrapper[7776]: E0219 03:08:43.904589 7776 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.023s" Feb 19 03:08:43.914624 master-0 kubenswrapper[7776]: I0219 03:08:43.914567 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 19 03:08:46.114206 master-0 kubenswrapper[7776]: E0219 03:08:46.114088 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:08:46.620653 master-0 kubenswrapper[7776]: E0219 03:08:46.620487 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout 
- context deadline exceeded" event="&Event{ObjectMeta:{redhat-marketplace-lwt4t.189586e92edd861b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-lwt4t,UID:76050135-a8a1-4968-9a00-2d251c17f8b8,APIVersion:v1,ResourceVersion:6746,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 23s (23s including waiting). Image size: 1202767548 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:06:04.511929883 +0000 UTC m=+70.851614411,LastTimestamp:2026-02-19 03:06:04.511929883 +0000 UTC m=+70.851614411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:08:50.968861 master-0 kubenswrapper[7776]: I0219 03:08:50.968790 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:08:50.969363 master-0 kubenswrapper[7776]: I0219 03:08:50.968867 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:08:52.676792 master-0 kubenswrapper[7776]: E0219 03:08:52.676670 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:08:56.115243 master-0 kubenswrapper[7776]: E0219 03:08:56.115120 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:08:56.115243 master-0 kubenswrapper[7776]: E0219 03:08:56.115187 7776 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 03:09:00.969370 master-0 kubenswrapper[7776]: I0219 03:09:00.969226 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:09:00.970408 master-0 kubenswrapper[7776]: I0219 03:09:00.969375 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:09:04.420116 master-0 kubenswrapper[7776]: I0219 03:09:04.420043 7776 status_manager.go:851] "Failed to get status for pod" podUID="98ac5423-b231-44e5-9545-424d635ed6ee" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods package-server-manager-5c75f78c8b-8tbg8)" Feb 19 03:09:09.677956 master-0 kubenswrapper[7776]: E0219 03:09:09.677866 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:09:10.968998 master-0 kubenswrapper[7776]: I0219 03:09:10.968887 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:09:10.969551 master-0 kubenswrapper[7776]: I0219 03:09:10.968997 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:09:11.953094 master-0 kubenswrapper[7776]: I0219 03:09:11.953031 7776 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="b1c63f03930bd24429badfb8dc62e4fe8a94f7e1656fd1896021ad91e143b1ca" exitCode=1 Feb 19 03:09:16.276357 master-0 kubenswrapper[7776]: E0219 03:09:16.276109 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:09:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:09:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:09:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:09:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0dcba5d04f25f6e382ffecdd94057bd8a99cffb6a00a8c7da186e9871ae459ea\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:92f996986deaacc20f2d7929be6465ef80f234c7c73757735ab489489ad69464\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702667973},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:01d70013efcb6bd53533de62b00867982cc8cfd7ea2bcc920f1a89ec9a1e0a93\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3d25e25fd688987cf457312a70060e31c5091a30a7d4b691cf7e566c69fa51f4\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234172623},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2f02611c935b387581e1c3be693869fdf266797ea7c5bcb704c0b6e7d0a6f12f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:f92684229a0699b57eaf06ea192bcde396a4e401a7bf7726499b7edac566dac8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210130107},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bc
a2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:09:17.917982 master-0 kubenswrapper[7776]: E0219 03:09:17.917918 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:09:17.918820 master-0 kubenswrapper[7776]: E0219 03:09:17.918765 7776 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too 
long" expected="1s" actual="34.014s" Feb 19 03:09:17.918923 master-0 kubenswrapper[7776]: I0219 03:09:17.918848 7776 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" containerID="cri-o://a2bdec17dc1089972433ebc1bc1c16d0f4ac7fa020f8058705381c276b86bced" Feb 19 03:09:17.918923 master-0 kubenswrapper[7776]: I0219 03:09:17.918874 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:09:17.919064 master-0 kubenswrapper[7776]: I0219 03:09:17.918926 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:09:17.930202 master-0 kubenswrapper[7776]: I0219 03:09:17.930113 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 19 03:09:20.623199 master-0 kubenswrapper[7776]: E0219 03:09:20.622975 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-operators-spsn7.189586e92fca7cac openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-spsn7,UID:543aef8d-960a-42c9-b1fd-954e2d024002,APIVersion:v1,ResourceVersion:6831,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\" in 23.015s (23.015s including waiting). Image size: 1702667973 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:06:04.5274595 +0000 UTC m=+70.867144018,LastTimestamp:2026-02-19 03:06:04.5274595 +0000 UTC m=+70.867144018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:09:20.969627 master-0 kubenswrapper[7776]: I0219 03:09:20.968936 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:09:20.969627 master-0 kubenswrapper[7776]: I0219 03:09:20.969627 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:09:26.277025 master-0 kubenswrapper[7776]: E0219 03:09:26.276960 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:09:26.678709 master-0 kubenswrapper[7776]: E0219 03:09:26.678532 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:09:30.969101 
master-0 kubenswrapper[7776]: I0219 03:09:30.969054 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:09:30.969729 master-0 kubenswrapper[7776]: I0219 03:09:30.969127 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:09:36.277833 master-0 kubenswrapper[7776]: E0219 03:09:36.277757 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:09:40.969314 master-0 kubenswrapper[7776]: I0219 03:09:40.969186 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:09:40.969314 master-0 kubenswrapper[7776]: I0219 03:09:40.969274 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:09:41.117190 master-0 kubenswrapper[7776]: I0219 03:09:41.117138 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/1.log" Feb 19 03:09:41.117573 master-0 kubenswrapper[7776]: I0219 03:09:41.117525 7776 generic.go:334] "Generic (PLEG): container finished" podID="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" containerID="defe4f7170cc44c3523dd8efff39d38897244ccd7ed44fbd45efb9c3c2bb106e" exitCode=255 Feb 19 03:09:41.120206 master-0 kubenswrapper[7776]: I0219 03:09:41.120168 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/1.log" Feb 19 03:09:41.120818 master-0 kubenswrapper[7776]: I0219 03:09:41.120767 7776 generic.go:334] "Generic (PLEG): container finished" podID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerID="5eb64e0fb5be78f5fe0053450317b4c553cf5bab1e4fc27dc7ed6c83f4c5c9d7" exitCode=255 Feb 19 03:09:41.122714 master-0 kubenswrapper[7776]: I0219 03:09:41.122678 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/1.log" Feb 19 03:09:41.123164 master-0 kubenswrapper[7776]: I0219 03:09:41.123120 7776 generic.go:334] "Generic (PLEG): container finished" podID="6c9ed390-3b62-4b81-8c03-0c579a4a686a" containerID="c26f9dd77de93381b32286d233ebe8a661621d7ab6999e089af78dc321bb05ed" exitCode=255 Feb 19 03:09:41.124764 master-0 
kubenswrapper[7776]: I0219 03:09:41.124719 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/2.log" Feb 19 03:09:41.125198 master-0 kubenswrapper[7776]: I0219 03:09:41.125130 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/1.log" Feb 19 03:09:41.125779 master-0 kubenswrapper[7776]: I0219 03:09:41.125738 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/0.log" Feb 19 03:09:41.125779 master-0 kubenswrapper[7776]: I0219 03:09:41.125774 7776 generic.go:334] "Generic (PLEG): container finished" podID="05c9cb4a-5249-4116-a2e5-caa7859e2075" containerID="20d7a1f3e44571d9a483f373b1494135038a1cbd5b2640858e1087b2f468a77c" exitCode=255 Feb 19 03:09:41.127213 master-0 kubenswrapper[7776]: I0219 03:09:41.127174 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/1.log" Feb 19 03:09:41.127624 master-0 kubenswrapper[7776]: I0219 03:09:41.127538 7776 generic.go:334] "Generic (PLEG): container finished" podID="3edc7410-417a-4e55-9276-ac271fd52297" containerID="65990edcc46b375933fbda1eec1ec1a04dd2a02112107f18658b1af8d7458102" exitCode=255 Feb 19 03:09:41.129163 master-0 kubenswrapper[7776]: I0219 03:09:41.129121 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/1.log" Feb 19 03:09:41.129736 master-0 kubenswrapper[7776]: I0219 03:09:41.129703 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/0.log" Feb 19 03:09:41.129798 master-0 kubenswrapper[7776]: I0219 03:09:41.129737 7776 generic.go:334] "Generic (PLEG): container finished" podID="c791d8d0-6d78-4cdc-bac2-aa39bd3aae21" containerID="3526ed2fea950f5feea7370e198355ca1c87bb7826298c9748a04ae14fb0f72d" exitCode=255 Feb 19 03:09:42.137374 master-0 kubenswrapper[7776]: I0219 03:09:42.137309 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/1.log" Feb 19 03:09:42.138234 master-0 kubenswrapper[7776]: I0219 03:09:42.138185 7776 generic.go:334] "Generic (PLEG): container finished" podID="4714ef51-2d24-4938-8c58-80c1485a368b" containerID="c2c37fa8442b4703e54aab94b6a44d53dfb0bc5765d90a9a7ef5662786b2cd74" exitCode=255 Feb 19 03:09:43.680573 master-0 kubenswrapper[7776]: E0219 03:09:43.680460 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:09:46.278903 master-0 kubenswrapper[7776]: E0219 03:09:46.278843 7776 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:09:50.968713 master-0 kubenswrapper[7776]: I0219 03:09:50.968664 7776 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhj9q container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 19 03:09:50.969392 master-0 kubenswrapper[7776]: I0219 03:09:50.969355 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" podUID="7012676e-f35d-46e5-83e8-a63172dd076e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 19 03:09:51.933197 master-0 kubenswrapper[7776]: E0219 03:09:51.933089 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:09:51.933563 master-0 kubenswrapper[7776]: E0219 03:09:51.933402 7776 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s" Feb 19 03:09:51.933803 master-0 kubenswrapper[7776]: I0219 03:09:51.933739 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:09:51.934079 master-0 kubenswrapper[7776]: I0219 03:09:51.934044 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:09:51.934313 master-0 kubenswrapper[7776]: I0219 03:09:51.934184 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"baf56418e5f8bfbb1b0b3b62a17157021582596ac9b77253725abcedbc9830bb"} Feb 19 03:09:51.935322 master-0 kubenswrapper[7776]: I0219 03:09:51.934947 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" event={"ID":"6c9ed390-3b62-4b81-8c03-0c579a4a686a","Type":"ContainerDied","Data":"24791f1c363b144877c645c4f1432f887b6ed95f1fe6b262a78611e4e7415851"} Feb 19 03:09:51.935403 master-0 kubenswrapper[7776]: I0219 03:09:51.934639 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:09:51.935460 master-0 kubenswrapper[7776]: I0219 03:09:51.935434 7776 scope.go:117] "RemoveContainer" containerID="24791f1c363b144877c645c4f1432f887b6ed95f1fe6b262a78611e4e7415851" Feb 19 03:09:51.935506 master-0 kubenswrapper[7776]: I0219 03:09:51.934388 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:09:51.935571 master-0 kubenswrapper[7776]: I0219 03:09:51.935079 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:09:51.935656 master-0 kubenswrapper[7776]: I0219 03:09:51.935635 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:09:51.935957 master-0 kubenswrapper[7776]: I0219 03:09:51.935928 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:09:51.936027 master-0 kubenswrapper[7776]: I0219 03:09:51.935909 7776 scope.go:117] "RemoveContainer" containerID="c26f9dd77de93381b32286d233ebe8a661621d7ab6999e089af78dc321bb05ed" Feb 19 03:09:51.936027 master-0 kubenswrapper[7776]: I0219 03:09:51.936018 7776 scope.go:117] "RemoveContainer" containerID="c2c37fa8442b4703e54aab94b6a44d53dfb0bc5765d90a9a7ef5662786b2cd74" Feb 19 03:09:51.937045 master-0 kubenswrapper[7776]: I0219 03:09:51.936178 7776 scope.go:117] "RemoveContainer" containerID="65990edcc46b375933fbda1eec1ec1a04dd2a02112107f18658b1af8d7458102" Feb 19 03:09:51.937045 master-0 kubenswrapper[7776]: I0219 03:09:51.936196 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:09:51.937045 master-0 kubenswrapper[7776]: I0219 03:09:51.936797 7776 scope.go:117] "RemoveContainer" containerID="defe4f7170cc44c3523dd8efff39d38897244ccd7ed44fbd45efb9c3c2bb106e" Feb 19 03:09:51.937434 master-0 kubenswrapper[7776]: I0219 03:09:51.937101 7776 scope.go:117] "RemoveContainer" containerID="3526ed2fea950f5feea7370e198355ca1c87bb7826298c9748a04ae14fb0f72d" Feb 19 03:09:51.939693 master-0 kubenswrapper[7776]: I0219 03:09:51.939665 7776 scope.go:117] "RemoveContainer" containerID="5eb64e0fb5be78f5fe0053450317b4c553cf5bab1e4fc27dc7ed6c83f4c5c9d7" Feb 19 03:09:51.940058 master-0 kubenswrapper[7776]: I0219 03:09:51.940037 7776 scope.go:117] "RemoveContainer" containerID="b1c63f03930bd24429badfb8dc62e4fe8a94f7e1656fd1896021ad91e143b1ca" Feb 19 03:09:51.940580 master-0 kubenswrapper[7776]: I0219 03:09:51.940463 7776 scope.go:117] "RemoveContainer" containerID="9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb" Feb 19 03:09:51.940580 master-0 kubenswrapper[7776]: I0219 03:09:51.940528 7776 scope.go:117] "RemoveContainer" containerID="48896fb51d13a46ede8e9679a55d5198adfa5eeb4a91ae305507c9b4bf39a65b" Feb 19 03:09:51.940960 master-0 kubenswrapper[7776]: I0219 03:09:51.940792 7776 scope.go:117] "RemoveContainer" containerID="63378086041fcb0de956f1a5a160faad6c0e85b100c25eacbce569a26a79079c" Feb 19 03:09:51.942211 master-0 kubenswrapper[7776]: I0219 03:09:51.941889 7776 scope.go:117] "RemoveContainer" containerID="0f3766857d0863e0c7bf5650275239873c534f3ae3d01d3445961163b616988a" Feb 19 03:09:51.942377 master-0 kubenswrapper[7776]: I0219 03:09:51.942340 7776 scope.go:117] "RemoveContainer" containerID="20d7a1f3e44571d9a483f373b1494135038a1cbd5b2640858e1087b2f468a77c" Feb 19 03:09:51.942572 master-0 kubenswrapper[7776]: E0219 03:09:51.942507 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-584cc7bcb5-c7c8v_openshift-controller-manager-operator(05c9cb4a-5249-4116-a2e5-caa7859e2075)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" podUID="05c9cb4a-5249-4116-a2e5-caa7859e2075" Feb 19 03:09:51.945052 master-0 kubenswrapper[7776]: I0219 03:09:51.944922 7776 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 19 03:09:52.206411 master-0 kubenswrapper[7776]: I0219 03:09:52.198404 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/1.log" Feb 19 03:09:52.210662 master-0 kubenswrapper[7776]: I0219 03:09:52.210628 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/1.log" Feb 19 03:09:53.217748 master-0 kubenswrapper[7776]: I0219 03:09:53.217674 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/0.log" Feb 19 03:09:53.220009 master-0 kubenswrapper[7776]: I0219 03:09:53.219972 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/1.log" Feb 19 03:09:53.222427 master-0 kubenswrapper[7776]: I0219 03:09:53.222384 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/1.log" Feb 19 03:09:53.225677 master-0 kubenswrapper[7776]: I0219 03:09:53.225628 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/0.log" Feb 19 03:09:53.229921 master-0 kubenswrapper[7776]: I0219 03:09:53.229883 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/1.log" Feb 19 03:09:53.230470 master-0 kubenswrapper[7776]: I0219 03:09:53.230442 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/0.log" Feb 19 03:09:53.234649 master-0 kubenswrapper[7776]: I0219 03:09:53.234621 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/manager/0.log" Feb 19 03:09:53.237363 master-0 kubenswrapper[7776]: I0219 03:09:53.237318 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/1.log" Feb 19 03:09:53.239782 master-0 kubenswrapper[7776]: I0219 03:09:53.239749 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/1.log" Feb 19 03:09:54.626403 master-0 kubenswrapper[7776]: E0219 03:09:54.626228 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{controller-manager-7d4cccb57c-sfb9j.189586e92fd62296 openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-7d4cccb57c-sfb9j,UID:92b9ea7b-01b1-48f8-a392-12200f55502e,APIVersion:v1,ResourceVersion:7358,FieldPath:spec.containers{controller-manager},},Reason:Created,Message:Created container: controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:06:04.52822287 +0000 UTC m=+70.867907398,LastTimestamp:2026-02-19 03:06:04.52822287 +0000 UTC m=+70.867907398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:09:56.279800 master-0 kubenswrapper[7776]: E0219 03:09:56.279646 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:09:56.279800 master-0 kubenswrapper[7776]: E0219 03:09:56.279772 7776 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 03:10:00.682293 master-0 kubenswrapper[7776]: E0219 03:10:00.682130 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:10:04.422637 master-0 kubenswrapper[7776]: I0219 03:10:04.422530 7776 status_manager.go:851] "Failed to get status for pod" podUID="92b9ea7b-01b1-48f8-a392-12200f55502e" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods controller-manager-7d4cccb57c-sfb9j)" Feb 19 03:10:04.545507 master-0 kubenswrapper[7776]: I0219 03:10:04.545416 7776 scope.go:117] "RemoveContainer" containerID="ea7babb48d9acc19a51058d43972a14b4a1ed0d3f15fadbbc95a57a23953a57e" Feb 19 03:10:04.956979 master-0 kubenswrapper[7776]: E0219 03:10:04.956905 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 19 03:10:16.299062 master-0 kubenswrapper[7776]: E0219 03:10:16.298736 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:10:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:10:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:10:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:10:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0dcba5d04f25f6e382ffecdd94057bd8a99cffb6a00a8c7da186e9871ae459ea\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:92f996986deaacc20f2d7929be6465ef80f234c7c73757735ab489489ad69464\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702667973},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:01d70013efcb6bd53533de62b00867982cc8cfd7ea2bcc920f1a89ec9a1e0a93\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3d25e25fd688987cf457312a70060e31c5091a30a7d4b691cf7e566c69fa51f4\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234172623},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2f02611c935b387581e1c3be693869fdf266797ea7c5bcb704c0b6e7d0a6f12f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:f92684229a0699b57eaf06ea192bcde396a4e401a7bf7726499b7edac566dac8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210130107},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bc
a2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:10:17.684376 master-0 kubenswrapper[7776]: E0219 03:10:17.684246 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:10:19.365063 master-0 kubenswrapper[7776]: E0219 
03:10:19.364952 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 19 03:10:22.463672 master-0 kubenswrapper[7776]: I0219 03:10:22.463604 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/1.log" Feb 19 03:10:22.464549 master-0 kubenswrapper[7776]: I0219 03:10:22.464473 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/0.log" Feb 19 03:10:22.464549 master-0 kubenswrapper[7776]: I0219 03:10:22.464525 7776 generic.go:334] "Generic (PLEG): container finished" podID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" containerID="06265a4a0b6f3c8a8128f95451a5945a8bbe001ae9ab38435a2630dfd4fd6aa3" exitCode=1 Feb 19 03:10:25.947709 master-0 kubenswrapper[7776]: E0219 03:10:25.947604 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 19 03:10:25.948722 master-0 kubenswrapper[7776]: E0219 03:10:25.947855 7776 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.012s" Feb 19 03:10:25.948722 master-0 kubenswrapper[7776]: I0219 03:10:25.947939 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:10:25.948722 master-0 kubenswrapper[7776]: I0219 03:10:25.947963 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" event={"ID":"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21","Type":"ContainerDied","Data":"8b3bceeaced74d609ab5cae3f8bcf4b942c0f6e35aacd59b863ae5c7bc32a8c0"} Feb 19 03:10:25.948722 master-0 kubenswrapper[7776]: I0219 03:10:25.948007 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:25.948722 master-0 kubenswrapper[7776]: I0219 03:10:25.948097 7776 scope.go:117] "RemoveContainer" containerID="8b3bceeaced74d609ab5cae3f8bcf4b942c0f6e35aacd59b863ae5c7bc32a8c0" Feb 19 03:10:25.948722 master-0 kubenswrapper[7776]: I0219 03:10:25.948124 7776 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" containerID="cri-o://63e9da7bba52316e4ecf529d81e030bb4b7c5317fbd6fe3da25ae598ba0cf3f5" Feb 19 03:10:25.948722 master-0 kubenswrapper[7776]: I0219 03:10:25.948172 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:10:25.948722 master-0 kubenswrapper[7776]: I0219 03:10:25.948385 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:25.948722 master-0 kubenswrapper[7776]: I0219 03:10:25.948453 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:10:25.950145 master-0 kubenswrapper[7776]: I0219 03:10:25.948819 7776 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:10:25.950145 master-0 kubenswrapper[7776]: I0219 03:10:25.948844 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:25.950145 master-0 kubenswrapper[7776]: I0219 03:10:25.949077 7776 scope.go:117] "RemoveContainer" containerID="20d7a1f3e44571d9a483f373b1494135038a1cbd5b2640858e1087b2f468a77c" Feb 19 03:10:25.950145 master-0 kubenswrapper[7776]: I0219 03:10:25.948857 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rm5jg" event={"ID":"a52be87c-e707-4269-96da-537708d52b64","Type":"ContainerDied","Data":"f6706a38252937f6734b664a0f078763a45b428cf03e52f78ca141868385452d"} Feb 19 03:10:25.950145 master-0 kubenswrapper[7776]: I0219 03:10:25.949536 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:10:25.950145 master-0 kubenswrapper[7776]: I0219 03:10:25.949547 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerDied","Data":"86c664ab293aa817dc19559e0b69114daede98d8ba6acf0a72b18f40ca2b5774"} Feb 19 03:10:25.950145 master-0 kubenswrapper[7776]: I0219 03:10:25.949561 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:10:25.961156 master-0 kubenswrapper[7776]: I0219 03:10:25.960521 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 19 03:10:25.981583 master-0 kubenswrapper[7776]: I0219 03:10:25.981511 7776 scope.go:117] "RemoveContainer" containerID="86c664ab293aa817dc19559e0b69114daede98d8ba6acf0a72b18f40ca2b5774" Feb 19 03:10:26.299509 master-0 kubenswrapper[7776]: E0219 03:10:26.299246 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Feb 19 03:10:26.492623 master-0 kubenswrapper[7776]: I0219 03:10:26.492550 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/1.log" Feb 19 03:10:26.494285 master-0 kubenswrapper[7776]: I0219 03:10:26.494239 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/1.log" Feb 19 03:10:26.496118 master-0 kubenswrapper[7776]: I0219 03:10:26.496077 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/2.log" Feb 19 03:10:26.496636 master-0 kubenswrapper[7776]: I0219 03:10:26.496604 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/1.log" Feb 19 03:10:26.497171 
master-0 kubenswrapper[7776]: I0219 03:10:26.497146 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/0.log" Feb 19 03:10:28.629318 master-0 kubenswrapper[7776]: E0219 03:10:28.629050 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{controller-manager-7d4cccb57c-sfb9j.189586e93206ace3 openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-7d4cccb57c-sfb9j,UID:92b9ea7b-01b1-48f8-a392-12200f55502e,APIVersion:v1,ResourceVersion:7358,FieldPath:spec.containers{controller-manager},},Reason:Started,Message:Started container controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:06:04.564958435 +0000 UTC m=+70.904642943,LastTimestamp:2026-02-19 03:06:04.564958435 +0000 UTC m=+70.904642943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:10:29.440740 master-0 kubenswrapper[7776]: E0219 03:10:29.439659 7776 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.49s" Feb 19 03:10:29.440740 master-0 kubenswrapper[7776]: I0219 03:10:29.439727 7776 status_manager.go:317] "Container readiness changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" containerID="cri-o://b1c63f03930bd24429badfb8dc62e4fe8a94f7e1656fd1896021ad91e143b1ca" Feb 19 03:10:29.440740 master-0 kubenswrapper[7776]: I0219 03:10:29.439747 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:29.440740 master-0 kubenswrapper[7776]: I0219 03:10:29.439769 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 19 03:10:29.440740 master-0 kubenswrapper[7776]: I0219 03:10:29.439783 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 19 03:10:29.440740 master-0 kubenswrapper[7776]: I0219 03:10:29.439794 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:10:29.440740 master-0 kubenswrapper[7776]: I0219 03:10:29.439809 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerDied","Data":"336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb"} Feb 19 03:10:29.440740 master-0 kubenswrapper[7776]: I0219 03:10:29.439838 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" event={"ID":"5301cbc9-b3f3-4b2d-a114-1ba0752462f1","Type":"ContainerDied","Data":"5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2"} Feb 19 03:10:29.441665 master-0 kubenswrapper[7776]: I0219 03:10:29.440766 7776 scope.go:117] "RemoveContainer" containerID="336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb" Feb 19 
03:10:29.459227 master-0 kubenswrapper[7776]: I0219 03:10:29.454411 7776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 03:10:29.459227 master-0 kubenswrapper[7776]: I0219 03:10:29.459016 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 19 03:10:29.462563 master-0 kubenswrapper[7776]: W0219 03:10:29.462514 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod66b05aeb_22a8_4008_a582_072f63cc46bf.slice/crio-965cde5ffa11aa0f8a6be0fd409b2352a9feb606c803fa2badb9392fcad23cdd WatchSource:0}: Error finding container 965cde5ffa11aa0f8a6be0fd409b2352a9feb606c803fa2badb9392fcad23cdd: Status 404 returned error can't find the container with id 965cde5ffa11aa0f8a6be0fd409b2352a9feb606c803fa2badb9392fcad23cdd Feb 19 03:10:29.465148 master-0 kubenswrapper[7776]: I0219 03:10:29.465086 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:10:29.465148 master-0 kubenswrapper[7776]: I0219 03:10:29.465132 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 19 03:10:29.465408 master-0 kubenswrapper[7776]: I0219 03:10:29.465160 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rm5jg" event={"ID":"a52be87c-e707-4269-96da-537708d52b64","Type":"ContainerStarted","Data":"246e246788c76f41235c1898d383b771146f06c3b5bc939889392a3b403a8a89"} Feb 19 03:10:29.465408 master-0 kubenswrapper[7776]: I0219 03:10:29.465282 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:10:29.465408 master-0 kubenswrapper[7776]: I0219 03:10:29.465315 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:29.465408 master-0 kubenswrapper[7776]: I0219 03:10:29.465330 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerDied","Data":"617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc"} Feb 19 03:10:29.465408 master-0 kubenswrapper[7776]: I0219 03:10:29.465404 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465432 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465451 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerDied","Data":"48896fb51d13a46ede8e9679a55d5198adfa5eeb4a91ae305507c9b4bf39a65b"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465466 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" 
event={"ID":"58c6f5a2-c0a8-4636-a057-cedbe0151579","Type":"ContainerDied","Data":"a2bdec17dc1089972433ebc1bc1c16d0f4ac7fa020f8058705381c276b86bced"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465480 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" event={"ID":"7012676e-f35d-46e5-83e8-a63172dd076e","Type":"ContainerDied","Data":"63378086041fcb0de956f1a5a160faad6c0e85b100c25eacbce569a26a79079c"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465494 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" event={"ID":"8f7d8fc8-c313-416f-b62b-b54db9944066","Type":"ContainerDied","Data":"63e9da7bba52316e4ecf529d81e030bb4b7c5317fbd6fe3da25ae598ba0cf3f5"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465508 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" event={"ID":"05c9cb4a-5249-4116-a2e5-caa7859e2075","Type":"ContainerDied","Data":"f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465522 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerDied","Data":"9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465536 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerStarted","Data":"5eb64e0fb5be78f5fe0053450317b4c553cf5bab1e4fc27dc7ed6c83f4c5c9d7"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465548 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" event={"ID":"58c6f5a2-c0a8-4636-a057-cedbe0151579","Type":"ContainerStarted","Data":"eaa696773a18508c6c209d42ace51f1418a8f4dfe51b1543f829012e0cb65108"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465558 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" event={"ID":"8f7d8fc8-c313-416f-b62b-b54db9944066","Type":"ContainerStarted","Data":"027172ba4dcd10cd3e3177cc36691683dffc4cdf627b8d23cdb2d10cafe015ef"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465570 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" event={"ID":"05c9cb4a-5249-4116-a2e5-caa7859e2075","Type":"ContainerStarted","Data":"20d7a1f3e44571d9a483f373b1494135038a1cbd5b2640858e1087b2f468a77c"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465581 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerStarted","Data":"65990edcc46b375933fbda1eec1ec1a04dd2a02112107f18658b1af8d7458102"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465595 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerStarted","Data":"c2c37fa8442b4703e54aab94b6a44d53dfb0bc5765d90a9a7ef5662786b2cd74"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465610 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" event={"ID":"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21","Type":"ContainerStarted","Data":"3526ed2fea950f5feea7370e198355ca1c87bb7826298c9748a04ae14fb0f72d"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465622 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" event={"ID":"5301cbc9-b3f3-4b2d-a114-1ba0752462f1","Type":"ContainerStarted","Data":"defe4f7170cc44c3523dd8efff39d38897244ccd7ed44fbd45efb9c3c2bb106e"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465635 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" event={"ID":"6c9ed390-3b62-4b81-8c03-0c579a4a686a","Type":"ContainerStarted","Data":"c26f9dd77de93381b32286d233ebe8a661621d7ab6999e089af78dc321bb05ed"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465646 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"b1c63f03930bd24429badfb8dc62e4fe8a94f7e1656fd1896021ad91e143b1ca"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465657 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"046d70c2b21433494090acc4c51a4da67355986430805c8b776a5852975555f0"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465672 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" event={"ID":"15a571c6-7c47-4b57-bc5b-e46544a114c8","Type":"ContainerDied","Data":"0f3766857d0863e0c7bf5650275239873c534f3ae3d01d3445961163b616988a"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465687 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"b1c63f03930bd24429badfb8dc62e4fe8a94f7e1656fd1896021ad91e143b1ca"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465700 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" event={"ID":"5301cbc9-b3f3-4b2d-a114-1ba0752462f1","Type":"ContainerDied","Data":"defe4f7170cc44c3523dd8efff39d38897244ccd7ed44fbd45efb9c3c2bb106e"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465713 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerDied","Data":"5eb64e0fb5be78f5fe0053450317b4c553cf5bab1e4fc27dc7ed6c83f4c5c9d7"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465724 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" event={"ID":"6c9ed390-3b62-4b81-8c03-0c579a4a686a","Type":"ContainerDied","Data":"c26f9dd77de93381b32286d233ebe8a661621d7ab6999e089af78dc321bb05ed"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465737 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" event={"ID":"05c9cb4a-5249-4116-a2e5-caa7859e2075","Type":"ContainerDied","Data":"20d7a1f3e44571d9a483f373b1494135038a1cbd5b2640858e1087b2f468a77c"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465749 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerDied","Data":"65990edcc46b375933fbda1eec1ec1a04dd2a02112107f18658b1af8d7458102"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465761 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" event={"ID":"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21","Type":"ContainerDied","Data":"3526ed2fea950f5feea7370e198355ca1c87bb7826298c9748a04ae14fb0f72d"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465774 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerDied","Data":"c2c37fa8442b4703e54aab94b6a44d53dfb0bc5765d90a9a7ef5662786b2cd74"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465789 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerStarted","Data":"40f21b66295146208ac6883b550126dd464dc59801ea5eec8001be9ddf550599"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465802 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerStarted","Data":"06265a4a0b6f3c8a8128f95451a5945a8bbe001ae9ab38435a2630dfd4fd6aa3"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465813 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerStarted","Data":"7ee061fd852b062c9f3109c7ff8d6c80d204653976e539cbd00904008f50cdbc"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465825 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" event={"ID":"6c9ed390-3b62-4b81-8c03-0c579a4a686a","Type":"ContainerStarted","Data":"a38db84d334bb1ae612379c88129d14d14422aea1a4e6c8d5e3a4de4afd35891"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465837 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerStarted","Data":"4a1578bce100ddf52237ceaea2572cac0b7ea648901d8dde9625de51a4236ef1"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465849 7776 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" event={"ID":"15a571c6-7c47-4b57-bc5b-e46544a114c8","Type":"ContainerStarted","Data":"f288826ba3365168a27108ffc9be5733bebebaf28a3b66f0962898e5aed02b61"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465860 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" event={"ID":"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21","Type":"ContainerStarted","Data":"86f20f93c3f50a3529fa79e0b6468f791d85c5c63dd623a77eb62ec52b0785bc"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465871 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465882 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" event={"ID":"7012676e-f35d-46e5-83e8-a63172dd076e","Type":"ContainerStarted","Data":"85c05765f6dadb3299427fcae734f7bc6d46d71d6d24a21ddaf8cbc81b5c9220"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465893 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerStarted","Data":"61d4c7db9949cabb346e6b5c6f267c3cd30095b418d6916ce487053c09f5bbd9"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465904 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" event={"ID":"5301cbc9-b3f3-4b2d-a114-1ba0752462f1","Type":"ContainerStarted","Data":"d0d44f45186dc14ce0bc7dc97e190ce8663cf19d313b3812b2eeb67bbc3b7464"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465915 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"d00d0015e8bc8366633040b3a2395621233f7e465c498eaceabf1c2ca81a68df"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465929 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"5f196045e7a49065565ab56d461035e763d23606fb829b8bba14d2bd33107c85"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465939 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"0d59a154aee140da2db56a6e0463015b3387b4ee37b044b39e5717b27d05498e"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465949 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"59889423cb55bd5f516727f3ea448fae392c406053adfdbf990a3c929b1d542d"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465961 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"0133c09f4df374eb22f9b8a85932a0aa0def6e89f6e8ee052bbfb01df95791d1"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465972 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerDied","Data":"06265a4a0b6f3c8a8128f95451a5945a8bbe001ae9ab38435a2630dfd4fd6aa3"} Feb 19 03:10:29.466064 master-0 kubenswrapper[7776]: I0219 03:10:29.465985 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" event={"ID":"05c9cb4a-5249-4116-a2e5-caa7859e2075","Type":"ContainerStarted","Data":"20eff9a38f665e5f446346726f2e9ae69e64da44d267bdbea6151ec6a1ecbe55"} Feb 19 03:10:29.469793 master-0 kubenswrapper[7776]: I0219 03:10:29.467206 7776 scope.go:117] "RemoveContainer" containerID="06265a4a0b6f3c8a8128f95451a5945a8bbe001ae9ab38435a2630dfd4fd6aa3" Feb 19 03:10:29.469793 master-0 kubenswrapper[7776]: E0219 03:10:29.467417 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:10:29.495692 master-0 kubenswrapper[7776]: I0219 03:10:29.493389 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 19 03:10:29.495692 master-0 kubenswrapper[7776]: I0219 03:10:29.493450 7776 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="5b2bcfc5-43c6-491b-9237-8cdac45b403b" Feb 19 03:10:29.495692 master-0 kubenswrapper[7776]: I0219 03:10:29.494683 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5"] Feb 19 03:10:29.498440 master-0 kubenswrapper[7776]: I0219 03:10:29.498385 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 19 03:10:29.500437 master-0 kubenswrapper[7776]: I0219 03:10:29.500397 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 19 03:10:29.502447 master-0 kubenswrapper[7776]: I0219 03:10:29.502410 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 19 03:10:29.503138 master-0 kubenswrapper[7776]: I0219 03:10:29.503109 7776 scope.go:117] "RemoveContainer" containerID="5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2" Feb 19 03:10:29.505206 master-0 kubenswrapper[7776]: I0219 03:10:29.504990 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 19 03:10:29.505206 master-0 kubenswrapper[7776]: I0219 03:10:29.505006 7776 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="5b2bcfc5-43c6-491b-9237-8cdac45b403b" Feb 19 03:10:29.510385 master-0 kubenswrapper[7776]: I0219 03:10:29.506800 7776 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-marketplace-lwt4t" podStartSLOduration=265.75251432 podStartE2EDuration="4m50.506777355s" podCreationTimestamp="2026-02-19 03:05:39 +0000 UTC" firstStartedPulling="2026-02-19 03:05:41.511832043 +0000 UTC m=+47.851516561" lastFinishedPulling="2026-02-19 03:06:06.266095078 +0000 UTC m=+72.605779596" observedRunningTime="2026-02-19 03:10:29.439347713 +0000 UTC m=+335.779032331" watchObservedRunningTime="2026-02-19 03:10:29.506777355 +0000 UTC m=+335.846461873" Feb 19 03:10:29.512981 master-0 kubenswrapper[7776]: I0219 03:10:29.512879 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2cczk" podStartSLOduration=265.827276346 podStartE2EDuration="4m47.512830902s" podCreationTimestamp="2026-02-19 03:05:42 +0000 UTC" firstStartedPulling="2026-02-19 03:05:44.582909725 +0000 UTC m=+50.922594243" lastFinishedPulling="2026-02-19 03:06:06.268464261 +0000 UTC m=+72.608148799" observedRunningTime="2026-02-19 03:10:29.490626072 +0000 UTC m=+335.830310670" watchObservedRunningTime="2026-02-19 03:10:29.512830902 +0000 UTC m=+335.852515420" Feb 19 03:10:29.531902 master-0 kubenswrapper[7776]: I0219 03:10:29.531839 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"1bddb3a1-41bd-4314-bfb0-3c72ca14200f","Type":"ContainerStarted","Data":"676fe9b8803826897eb9069682463435a484f2265769bbfbab612ab166fcad61"} Feb 19 03:10:29.533043 master-0 kubenswrapper[7776]: I0219 03:10:29.532744 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 19 03:10:29.535232 master-0 kubenswrapper[7776]: I0219 03:10:29.533826 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4","Type":"ContainerStarted","Data":"258078f280458482912939c3338c1981e998a321634b6785079948c05a69b5ce"} Feb 19 03:10:29.539358 master-0 kubenswrapper[7776]: I0219 03:10:29.539328 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/1.log" Feb 19 03:10:29.541959 master-0 kubenswrapper[7776]: I0219 03:10:29.540153 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 19 03:10:29.541959 master-0 kubenswrapper[7776]: I0219 03:10:29.541064 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" event={"ID":"0664d88f-f697-4182-93cd-f208ff6f3ac2","Type":"ContainerStarted","Data":"cbe8c564562ad68c8d52a661bafedb53468d82eca60669d5f75aa1269bf0c5a6"} Feb 19 03:10:29.543867 master-0 kubenswrapper[7776]: I0219 03:10:29.543838 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"66b05aeb-22a8-4008-a582-072f63cc46bf","Type":"ContainerStarted","Data":"965cde5ffa11aa0f8a6be0fd409b2352a9feb606c803fa2badb9392fcad23cdd"} Feb 19 03:10:29.556692 master-0 kubenswrapper[7776]: I0219 03:10:29.556654 7776 scope.go:117] "RemoveContainer" containerID="617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc" Feb 19 03:10:29.585209 master-0 kubenswrapper[7776]: I0219 03:10:29.585154 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:29.662946 master-0 kubenswrapper[7776]: I0219 03:10:29.662879 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 19 03:10:29.665141 master-0 kubenswrapper[7776]: I0219 03:10:29.665064 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9h524" podStartSLOduration=265.977950876 podStartE2EDuration="4m48.665050203s" podCreationTimestamp="2026-02-19 03:05:41 +0000 UTC" firstStartedPulling="2026-02-19 03:05:43.573289458 +0000 UTC m=+49.912973976" lastFinishedPulling="2026-02-19 03:06:06.260388785 +0000 UTC m=+72.600073303" observedRunningTime="2026-02-19 03:10:29.662071196 +0000 UTC m=+336.001755724" watchObservedRunningTime="2026-02-19 03:10:29.665050203 +0000 UTC m=+336.004734721" Feb 19 03:10:29.679406 master-0 kubenswrapper[7776]: I0219 03:10:29.679351 7776 scope.go:117] "RemoveContainer" containerID="bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24" Feb 19 03:10:29.705473 master-0 kubenswrapper[7776]: I0219 03:10:29.705381 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-spsn7" podStartSLOduration=264.948770872 podStartE2EDuration="4m49.704018203s" podCreationTimestamp="2026-02-19 03:05:40 +0000 UTC" firstStartedPulling="2026-02-19 03:05:41.511802932 +0000 UTC m=+47.851487450" lastFinishedPulling="2026-02-19 03:06:06.267050243 +0000 UTC m=+72.606734781" observedRunningTime="2026-02-19 03:10:29.693077693 +0000 UTC m=+336.032762221" watchObservedRunningTime="2026-02-19 03:10:29.704018203 +0000 UTC m=+336.043702761" Feb 19 03:10:29.711502 master-0 kubenswrapper[7776]: I0219 03:10:29.711468 7776 scope.go:117] "RemoveContainer" containerID="f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6" Feb 19 03:10:29.760289 master-0 kubenswrapper[7776]: I0219 03:10:29.760234 7776 scope.go:117] "RemoveContainer" containerID="f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45" Feb 19 03:10:29.784564 master-0 kubenswrapper[7776]: I0219 03:10:29.784521 7776 scope.go:117] "RemoveContainer" containerID="1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed" Feb 19 03:10:29.808069 master-0 kubenswrapper[7776]: I0219 03:10:29.808021 7776 scope.go:117] "RemoveContainer" containerID="9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb" Feb 19 03:10:29.849296 master-0 kubenswrapper[7776]: I0219 03:10:29.849205 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aba1213d-8a7d-4b99-857f-b66578cc2bec" path="/var/lib/kubelet/pods/aba1213d-8a7d-4b99-857f-b66578cc2bec/volumes" Feb 19 03:10:29.866833 master-0 kubenswrapper[7776]: I0219 03:10:29.866777 7776 scope.go:117] "RemoveContainer" containerID="bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24" Feb 19 03:10:29.867587 master-0 kubenswrapper[7776]: E0219 03:10:29.867550 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24\": container with ID starting with bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24 not found: ID does not exist" containerID="bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24" Feb 19 03:10:29.867637 master-0 kubenswrapper[7776]: I0219 03:10:29.867589 7776 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24"} err="failed to get container status \"bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24\": rpc error: code = NotFound desc = could not find container \"bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24\": container with ID starting with bb60d671654d8cb8cdffc071f1b5ba39996bb2c8b4602ed2a7dde3cbf60dff24 not found: ID does not exist" Feb 19 03:10:29.867637 master-0 kubenswrapper[7776]: I0219 03:10:29.867613 7776 scope.go:117] "RemoveContainer" containerID="f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6" Feb 19 03:10:29.868168 master-0 kubenswrapper[7776]: E0219 03:10:29.868131 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6\": container with ID starting with f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6 not found: ID does not exist" containerID="f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6" Feb 19 03:10:29.868168 master-0 kubenswrapper[7776]: I0219 03:10:29.868161 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6"} err="failed to get container status \"f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6\": rpc error: code = NotFound desc = could not find container \"f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6\": container with ID starting with f281b25004cb9f9d4d3dbdad4cbbd31580646630e9b6b935a101c25de49b79a6 not found: ID does not exist" Feb 19 03:10:29.868276 master-0 kubenswrapper[7776]: I0219 03:10:29.868179 7776 scope.go:117] "RemoveContainer" containerID="5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2" Feb 19 03:10:29.869463 master-0 kubenswrapper[7776]: E0219 03:10:29.869418 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2\": container with ID starting with 5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2 not found: ID does not exist" containerID="5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2" Feb 19 03:10:29.869544 master-0 kubenswrapper[7776]: I0219 03:10:29.869459 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2"} err="failed to get container status \"5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2\": rpc error: code = NotFound desc = could not find container \"5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2\": container with ID starting with 5b3ac4d1807b6e67de65b760b29c8e122f6a5fea71ce6fb16d1871cf77fdbda2 not found: ID does not exist" Feb 19 03:10:29.869544 master-0 kubenswrapper[7776]: I0219 03:10:29.869481 7776 scope.go:117] "RemoveContainer" containerID="f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45" Feb 19 03:10:29.870044 master-0 kubenswrapper[7776]: E0219 03:10:29.870010 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45\": container with ID starting with 
f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45 not found: ID does not exist" containerID="f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45" Feb 19 03:10:29.870106 master-0 kubenswrapper[7776]: I0219 03:10:29.870040 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45"} err="failed to get container status \"f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45\": rpc error: code = NotFound desc = could not find container \"f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45\": container with ID starting with f9a8bcdce05adbd678e734e6bc84251b7df69799ba22dc3ffe446a1e3485db45 not found: ID does not exist" Feb 19 03:10:29.870106 master-0 kubenswrapper[7776]: I0219 03:10:29.870060 7776 scope.go:117] "RemoveContainer" containerID="1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed" Feb 19 03:10:29.870414 master-0 kubenswrapper[7776]: E0219 03:10:29.870378 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed\": container with ID starting with 1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed not found: ID does not exist" containerID="1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed" Feb 19 03:10:29.870414 master-0 kubenswrapper[7776]: I0219 03:10:29.870407 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed"} err="failed to get container status \"1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed\": rpc error: code = NotFound desc = could not find container \"1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed\": container with ID starting with 1f435fb0fcd6dbf878cb572a8a2ed14e1064a9ce9584f454a6e3ed9b23fad0ed not found: ID does not exist" Feb 19 03:10:29.870558 master-0 kubenswrapper[7776]: I0219 03:10:29.870424 7776 scope.go:117] "RemoveContainer" containerID="617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc" Feb 19 03:10:29.871045 master-0 kubenswrapper[7776]: E0219 03:10:29.870944 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc\": container with ID starting with 617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc not found: ID does not exist" containerID="617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc" Feb 19 03:10:29.871121 master-0 kubenswrapper[7776]: I0219 03:10:29.871025 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc"} err="failed to get container status \"617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc\": rpc error: code = NotFound desc = could not find container \"617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc\": container with ID starting with 617f5679ef8937a23786adf049acb6705e13f10388870ec68f3b8b36b61ab0fc not found: ID does not exist" Feb 19 03:10:29.871121 master-0 kubenswrapper[7776]: I0219 03:10:29.871072 7776 scope.go:117] "RemoveContainer" containerID="336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb" Feb 19 03:10:29.871561 master-0 
kubenswrapper[7776]: E0219 03:10:29.871515 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb\": container with ID starting with 336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb not found: ID does not exist" containerID="336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb" Feb 19 03:10:29.871623 master-0 kubenswrapper[7776]: I0219 03:10:29.871561 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb"} err="failed to get container status \"336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb\": rpc error: code = NotFound desc = could not find container \"336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb\": container with ID starting with 336616c4f167bef54808cee0fa8e63e35e6b43bca8354f4036ad09f3f9d535eb not found: ID does not exist" Feb 19 03:10:29.871623 master-0 kubenswrapper[7776]: I0219 03:10:29.871590 7776 scope.go:117] "RemoveContainer" containerID="9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb" Feb 19 03:10:29.871997 master-0 kubenswrapper[7776]: E0219 03:10:29.871956 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb\": container with ID starting with 9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb not found: ID does not exist" containerID="9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb" Feb 19 03:10:29.872035 master-0 kubenswrapper[7776]: I0219 03:10:29.871997 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb"} err="failed to get container status \"9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb\": rpc error: code = NotFound desc = could not find container \"9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb\": container with ID starting with 9b256742ab2eed31c444c314a4d253ff28144b5a75e5b77332aa4dbc1542eceb not found: ID does not exist" Feb 19 03:10:30.077446 master-0 kubenswrapper[7776]: I0219 03:10:30.076009 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 19 03:10:30.080128 master-0 kubenswrapper[7776]: I0219 03:10:30.080068 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 19 03:10:30.119044 master-0 kubenswrapper[7776]: I0219 03:10:30.118937 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" podStartSLOduration=271.550219959 podStartE2EDuration="4m50.118913086s" podCreationTimestamp="2026-02-19 03:05:40 +0000 UTC" firstStartedPulling="2026-02-19 03:05:45.808428502 +0000 UTC m=+52.148113020" lastFinishedPulling="2026-02-19 03:06:04.377121589 +0000 UTC m=+70.716806147" observedRunningTime="2026-02-19 03:10:30.117593838 +0000 UTC m=+336.457278396" watchObservedRunningTime="2026-02-19 03:10:30.118913086 +0000 UTC m=+336.458597624" Feb 19 03:10:30.550116 master-0 kubenswrapper[7776]: I0219 03:10:30.550067 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/2.log" Feb 19 03:10:30.551654 master-0 kubenswrapper[7776]: I0219 03:10:30.551616 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"1bddb3a1-41bd-4314-bfb0-3c72ca14200f","Type":"ContainerStarted","Data":"a7cd657859866d0c60a8c29ef7e8c20807d578f39873e49c5149373c208aeee5"} Feb 19 03:10:30.552860 master-0 kubenswrapper[7776]: I0219 03:10:30.552802 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4","Type":"ContainerStarted","Data":"ac0c6f1221931d6368270f9300d1e7df26e99f211f84672a8bd222a9935f47ac"} Feb 19 03:10:30.555405 master-0 kubenswrapper[7776]: I0219 03:10:30.555372 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/1.log" Feb 19 03:10:30.557010 master-0 kubenswrapper[7776]: I0219 03:10:30.556982 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/1.log" Feb 19 03:10:30.558804 master-0 kubenswrapper[7776]: I0219 03:10:30.558719 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/1.log" Feb 19 03:10:30.560097 master-0 kubenswrapper[7776]: I0219 03:10:30.560069 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"66b05aeb-22a8-4008-a582-072f63cc46bf","Type":"ContainerStarted","Data":"11a1463d7472cc347eeb1e18662a7476d3fc447a3850f542c02f496029d3a5bf"} Feb 19 03:10:30.576226 master-0 kubenswrapper[7776]: I0219 03:10:30.576114 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=281.576084056 podStartE2EDuration="4m41.576084056s" podCreationTimestamp="2026-02-19 03:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:10:30.571734669 +0000 UTC m=+336.911419207" watchObservedRunningTime="2026-02-19 03:10:30.576084056 +0000 UTC m=+336.915768614" Feb 19 03:10:30.596582 master-0 kubenswrapper[7776]: I0219 03:10:30.596511 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=275.596494283 podStartE2EDuration="4m35.596494283s" podCreationTimestamp="2026-02-19 03:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:10:30.590688643 +0000 UTC m=+336.930373161" watchObservedRunningTime="2026-02-19 03:10:30.596494283 +0000 UTC m=+336.936178801" Feb 19 03:10:30.610045 master-0 kubenswrapper[7776]: I0219 03:10:30.609975 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=280.609959177 podStartE2EDuration="4m40.609959177s" podCreationTimestamp="2026-02-19 
03:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:10:30.607809584 +0000 UTC m=+336.947494122" watchObservedRunningTime="2026-02-19 03:10:30.609959177 +0000 UTC m=+336.949643695" Feb 19 03:10:31.402138 master-0 kubenswrapper[7776]: I0219 03:10:31.402032 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:31.577974 master-0 kubenswrapper[7776]: I0219 03:10:31.577761 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" event={"ID":"0664d88f-f697-4182-93cd-f208ff6f3ac2","Type":"ContainerStarted","Data":"47c00fb2c67d340bd7a8f33cdbea3ac43d78e7ccbf383a58ca7fe0117068da43"} Feb 19 03:10:31.620555 master-0 kubenswrapper[7776]: I0219 03:10:31.620458 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" podStartSLOduration=274.847986762 podStartE2EDuration="4m36.620440618s" podCreationTimestamp="2026-02-19 03:05:55 +0000 UTC" firstStartedPulling="2026-02-19 03:10:29.454249618 +0000 UTC m=+335.793934146" lastFinishedPulling="2026-02-19 03:10:31.226703484 +0000 UTC m=+337.566388002" observedRunningTime="2026-02-19 03:10:31.618873662 +0000 UTC m=+337.958558180" watchObservedRunningTime="2026-02-19 03:10:31.620440618 +0000 UTC m=+337.960125136" Feb 19 03:10:31.853626 master-0 kubenswrapper[7776]: I0219 03:10:31.853425 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e66ac991-af58-490b-8909-e518d301e1b8" path="/var/lib/kubelet/pods/e66ac991-af58-490b-8909-e518d301e1b8/volumes" Feb 19 03:10:34.403052 master-0 kubenswrapper[7776]: I0219 03:10:34.402971 7776 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 03:10:34.527092 master-0 kubenswrapper[7776]: I0219 03:10:34.527024 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 19 03:10:34.609750 master-0 kubenswrapper[7776]: E0219 03:10:34.609688 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 19 03:10:34.686316 master-0 kubenswrapper[7776]: E0219 03:10:34.686156 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s" Feb 19 03:10:35.297302 master-0 kubenswrapper[7776]: I0219 03:10:35.297172 7776 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-r7r6p container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Feb 19 03:10:35.299857 master-0 kubenswrapper[7776]: I0219 03:10:35.297333 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" podUID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerName="etcd-operator" probeResult="failure" 
output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Feb 19 03:10:35.299857 master-0 kubenswrapper[7776]: I0219 03:10:35.298065 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:10:35.299857 master-0 kubenswrapper[7776]: I0219 03:10:35.299344 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="etcd-operator" containerStatusID={"Type":"cri-o","ID":"7ee061fd852b062c9f3109c7ff8d6c80d204653976e539cbd00904008f50cdbc"} pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" containerMessage="Container etcd-operator failed liveness probe, will be restarted" Feb 19 03:10:35.299857 master-0 kubenswrapper[7776]: I0219 03:10:35.299423 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" podUID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerName="etcd-operator" containerID="cri-o://7ee061fd852b062c9f3109c7ff8d6c80d204653976e539cbd00904008f50cdbc" gracePeriod=30 Feb 19 03:10:36.300750 master-0 kubenswrapper[7776]: E0219 03:10:36.300558 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:10:36.616569 master-0 kubenswrapper[7776]: I0219 03:10:36.616433 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/2.log" Feb 19 03:10:36.617235 master-0 kubenswrapper[7776]: I0219 03:10:36.617123 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/1.log" Feb 19 03:10:36.617235 master-0 kubenswrapper[7776]: I0219 03:10:36.617240 7776 generic.go:334] "Generic (PLEG): container finished" podID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerID="7ee061fd852b062c9f3109c7ff8d6c80d204653976e539cbd00904008f50cdbc" exitCode=255 Feb 19 03:10:36.617697 master-0 kubenswrapper[7776]: I0219 03:10:36.617337 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerDied","Data":"7ee061fd852b062c9f3109c7ff8d6c80d204653976e539cbd00904008f50cdbc"} Feb 19 03:10:36.617697 master-0 kubenswrapper[7776]: I0219 03:10:36.617424 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerStarted","Data":"b74e1ef658deba9054cacd4e4b2f892ff9bc29e9e78ce49be09ab91b8d5e8936"} Feb 19 03:10:36.617697 master-0 kubenswrapper[7776]: I0219 03:10:36.617467 7776 scope.go:117] "RemoveContainer" containerID="5eb64e0fb5be78f5fe0053450317b4c553cf5bab1e4fc27dc7ed6c83f4c5c9d7" Feb 19 03:10:36.683449 master-0 kubenswrapper[7776]: I0219 03:10:36.683355 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=2.683335911 podStartE2EDuration="2.683335911s" podCreationTimestamp="2026-02-19 03:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 03:10:36.679123568 +0000 UTC m=+343.018808096" watchObservedRunningTime="2026-02-19 03:10:36.683335911 +0000 UTC m=+343.023020439" Feb 19 03:10:37.629030 master-0 kubenswrapper[7776]: I0219 03:10:37.628963 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/2.log" Feb 19 03:10:38.638404 master-0 kubenswrapper[7776]: I0219 03:10:38.638307 7776 generic.go:334] "Generic (PLEG): container finished" podID="18b29e37-cda9-41a8-a910-3d8f74be3cf3" containerID="f411fdec6c82335e157399725224c73768983b7340cb840fe930f78c4eff8997" exitCode=0 Feb 19 03:10:38.638404 master-0 kubenswrapper[7776]: I0219 03:10:38.638362 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" event={"ID":"18b29e37-cda9-41a8-a910-3d8f74be3cf3","Type":"ContainerDied","Data":"f411fdec6c82335e157399725224c73768983b7340cb840fe930f78c4eff8997"} Feb 19 03:10:38.639471 master-0 kubenswrapper[7776]: I0219 03:10:38.638918 7776 scope.go:117] "RemoveContainer" containerID="f411fdec6c82335e157399725224c73768983b7340cb840fe930f78c4eff8997" Feb 19 03:10:39.647707 master-0 kubenswrapper[7776]: I0219 03:10:39.647645 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" event={"ID":"18b29e37-cda9-41a8-a910-3d8f74be3cf3","Type":"ContainerStarted","Data":"9e9d3d42da46d1a6d18e0de03a09b726c32bb354f1e9ff23661a98024aebe2a1"} Feb 19 03:10:42.668436 master-0 kubenswrapper[7776]: I0219 03:10:42.668317 7776 generic.go:334] "Generic (PLEG): container finished" podID="fbc2f7d0-4bae-4d4a-b041-a624ec2b9333" containerID="e01bf7d4c559915b2a5ff79bf9dc359fe2aeec2863993dd1c97dd95da4862d3c" exitCode=0 Feb 19 03:10:42.668436 master-0 kubenswrapper[7776]: I0219 03:10:42.668398 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" event={"ID":"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333","Type":"ContainerDied","Data":"e01bf7d4c559915b2a5ff79bf9dc359fe2aeec2863993dd1c97dd95da4862d3c"} Feb 19 03:10:42.669511 master-0 kubenswrapper[7776]: I0219 03:10:42.669118 7776 scope.go:117] "RemoveContainer" containerID="e01bf7d4c559915b2a5ff79bf9dc359fe2aeec2863993dd1c97dd95da4862d3c" Feb 19 03:10:42.842308 master-0 kubenswrapper[7776]: I0219 03:10:42.842235 7776 scope.go:117] "RemoveContainer" containerID="06265a4a0b6f3c8a8128f95451a5945a8bbe001ae9ab38435a2630dfd4fd6aa3" Feb 19 03:10:43.014086 master-0 kubenswrapper[7776]: E0219 03:10:43.014044 7776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78d3ac03_8ba0_40d3_9fc5_cc21f7b4efda.slice/crio-b94ac180c85fc64700e5f51d1991f701623c14fa47c5cdb818d4e8a2ca91669a.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:10:43.677034 master-0 kubenswrapper[7776]: I0219 03:10:43.676973 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" event={"ID":"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333","Type":"ContainerStarted","Data":"a5ecaa40749c938a80fde33cdf7954d6eceb84a6560fb8894afe0cf368d43640"} Feb 19 03:10:43.682179 master-0 kubenswrapper[7776]: I0219 03:10:43.682108 7776 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" 
containerID="b94ac180c85fc64700e5f51d1991f701623c14fa47c5cdb818d4e8a2ca91669a" exitCode=0 Feb 19 03:10:43.682293 master-0 kubenswrapper[7776]: I0219 03:10:43.682167 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerDied","Data":"b94ac180c85fc64700e5f51d1991f701623c14fa47c5cdb818d4e8a2ca91669a"} Feb 19 03:10:43.682293 master-0 kubenswrapper[7776]: I0219 03:10:43.682218 7776 scope.go:117] "RemoveContainer" containerID="7cf42ee60fa4397f21a2d208681ed170f135d22ae88345ec4aa86dba915a0cc1" Feb 19 03:10:43.682983 master-0 kubenswrapper[7776]: I0219 03:10:43.682952 7776 scope.go:117] "RemoveContainer" containerID="b94ac180c85fc64700e5f51d1991f701623c14fa47c5cdb818d4e8a2ca91669a" Feb 19 03:10:43.683466 master-0 kubenswrapper[7776]: E0219 03:10:43.683409 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-zn8c7_openshift-config-operator(78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" Feb 19 03:10:43.686023 master-0 kubenswrapper[7776]: I0219 03:10:43.685984 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/1.log" Feb 19 03:10:43.686080 master-0 kubenswrapper[7776]: I0219 03:10:43.686044 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerStarted","Data":"6f78f5411f8025c775c1717b601fd356801be5421b8cffa32ecda2678d51b4c5"} Feb 19 03:10:44.402563 master-0 kubenswrapper[7776]: I0219 03:10:44.402430 7776 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:10:44.892896 master-0 kubenswrapper[7776]: I0219 03:10:44.892794 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:10:44.893798 master-0 kubenswrapper[7776]: I0219 03:10:44.893515 7776 scope.go:117] "RemoveContainer" containerID="b94ac180c85fc64700e5f51d1991f701623c14fa47c5cdb818d4e8a2ca91669a" Feb 19 03:10:44.893886 master-0 kubenswrapper[7776]: E0219 03:10:44.893801 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-zn8c7_openshift-config-operator(78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" Feb 19 03:10:44.920125 master-0 kubenswrapper[7776]: I0219 03:10:44.920062 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:10:45.699581 master-0 kubenswrapper[7776]: I0219 03:10:45.699499 7776 scope.go:117] "RemoveContainer" containerID="b94ac180c85fc64700e5f51d1991f701623c14fa47c5cdb818d4e8a2ca91669a" Feb 19 03:10:45.699867 master-0 kubenswrapper[7776]: E0219 03:10:45.699736 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-zn8c7_openshift-config-operator(78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" Feb 19 03:10:47.718474 master-0 kubenswrapper[7776]: I0219 03:10:47.718407 7776 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a" exitCode=0 Feb 19 03:10:47.719066 master-0 kubenswrapper[7776]: I0219 03:10:47.718550 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a"} Feb 19 03:10:47.720064 master-0 kubenswrapper[7776]: I0219 03:10:47.720008 7776 scope.go:117] "RemoveContainer" containerID="6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a" Feb 19 03:10:47.721133 master-0 kubenswrapper[7776]: I0219 03:10:47.721038 7776 generic.go:334] "Generic (PLEG): container finished" podID="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" containerID="9bdce3951fee565e17f2d28d3fa9bab8451b2a0d85b9fde5d5703fd5c2bc6773" exitCode=0 Feb 19 03:10:47.721233 master-0 kubenswrapper[7776]: I0219 03:10:47.721139 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" event={"ID":"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651","Type":"ContainerDied","Data":"9bdce3951fee565e17f2d28d3fa9bab8451b2a0d85b9fde5d5703fd5c2bc6773"} Feb 19 03:10:47.721931 master-0 kubenswrapper[7776]: I0219 03:10:47.721857 7776 scope.go:117] "RemoveContainer" containerID="9bdce3951fee565e17f2d28d3fa9bab8451b2a0d85b9fde5d5703fd5c2bc6773" Feb 19 03:10:47.905537 master-0 kubenswrapper[7776]: I0219 03:10:47.905472 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:48.734441 master-0 kubenswrapper[7776]: I0219 03:10:48.734331 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc"} Feb 19 03:10:48.737891 master-0 kubenswrapper[7776]: I0219 03:10:48.737799 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" event={"ID":"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651","Type":"ContainerStarted","Data":"8594432a4dd07eb45521a4c0409054b77862d25f28d0e8a9c4a239a4bc10a8ce"} Feb 19 03:10:48.740961 master-0 kubenswrapper[7776]: I0219 03:10:48.740867 7776 generic.go:334] "Generic (PLEG): container finished" podID="d6fae256-6a2e-45e7-8f2f-d471f46ad3b2" 
containerID="ea3fbe70d15235f707a7c57be5fd384739f1296cedb5a5f878d80b5d8be3b136" exitCode=0 Feb 19 03:10:48.740961 master-0 kubenswrapper[7776]: I0219 03:10:48.740913 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" event={"ID":"d6fae256-6a2e-45e7-8f2f-d471f46ad3b2","Type":"ContainerDied","Data":"ea3fbe70d15235f707a7c57be5fd384739f1296cedb5a5f878d80b5d8be3b136"} Feb 19 03:10:48.741533 master-0 kubenswrapper[7776]: I0219 03:10:48.741492 7776 scope.go:117] "RemoveContainer" containerID="ea3fbe70d15235f707a7c57be5fd384739f1296cedb5a5f878d80b5d8be3b136" Feb 19 03:10:49.749437 master-0 kubenswrapper[7776]: I0219 03:10:49.749386 7776 generic.go:334] "Generic (PLEG): container finished" podID="2b9d54aa-5f71-4a82-8e71-401ed3083a13" containerID="2cdc1a180a1258ac65d49719e5369984499472e93cb72520a18ffeecda800795" exitCode=0 Feb 19 03:10:49.750188 master-0 kubenswrapper[7776]: I0219 03:10:49.749453 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" event={"ID":"2b9d54aa-5f71-4a82-8e71-401ed3083a13","Type":"ContainerDied","Data":"2cdc1a180a1258ac65d49719e5369984499472e93cb72520a18ffeecda800795"} Feb 19 03:10:49.750188 master-0 kubenswrapper[7776]: I0219 03:10:49.749544 7776 scope.go:117] "RemoveContainer" containerID="1cbe35c756f9160518273575bc2e58e01f81643b6820032d740b2e63916651c9" Feb 19 03:10:49.750462 master-0 kubenswrapper[7776]: I0219 03:10:49.750208 7776 scope.go:117] "RemoveContainer" containerID="2cdc1a180a1258ac65d49719e5369984499472e93cb72520a18ffeecda800795" Feb 19 03:10:49.750546 master-0 kubenswrapper[7776]: E0219 03:10:49.750489 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-fc889cfd5-866f9_openshift-kube-storage-version-migrator-operator(2b9d54aa-5f71-4a82-8e71-401ed3083a13)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" podUID="2b9d54aa-5f71-4a82-8e71-401ed3083a13" Feb 19 03:10:49.752534 master-0 kubenswrapper[7776]: I0219 03:10:49.752487 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" event={"ID":"d6fae256-6a2e-45e7-8f2f-d471f46ad3b2","Type":"ContainerStarted","Data":"c368982d2ddd49bae30a60f1a88dd67972d617a1660f3c61fcc533d670f74693"} Feb 19 03:10:51.687357 master-0 kubenswrapper[7776]: E0219 03:10:51.687191 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:10:52.937407 master-0 kubenswrapper[7776]: I0219 03:10:52.937336 7776 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": read tcp 192.168.32.10:54158->192.168.32.10:10257: read: connection reset by peer" Feb 19 03:10:52.937995 master-0 kubenswrapper[7776]: I0219 
03:10:52.937442 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:52.938789 master-0 kubenswrapper[7776]: I0219 03:10:52.938097 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 19 03:10:52.938789 master-0 kubenswrapper[7776]: I0219 03:10:52.938166 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9" gracePeriod=30 Feb 19 03:10:53.089746 master-0 kubenswrapper[7776]: I0219 03:10:53.089681 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:53.782034 master-0 kubenswrapper[7776]: I0219 03:10:53.781979 7776 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9" exitCode=1 Feb 19 03:10:53.782034 master-0 kubenswrapper[7776]: I0219 03:10:53.782031 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9"} Feb 19 03:10:53.782344 master-0 kubenswrapper[7776]: I0219 03:10:53.782064 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2"} Feb 19 03:10:53.782344 master-0 kubenswrapper[7776]: I0219 03:10:53.782088 7776 scope.go:117] "RemoveContainer" containerID="b1c63f03930bd24429badfb8dc62e4fe8a94f7e1656fd1896021ad91e143b1ca" Feb 19 03:10:57.905731 master-0 kubenswrapper[7776]: I0219 03:10:57.905604 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:59.585295 master-0 kubenswrapper[7776]: I0219 03:10:59.585229 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:10:59.829401 master-0 kubenswrapper[7776]: I0219 03:10:59.829202 7776 generic.go:334] "Generic (PLEG): container finished" podID="a59746bb-7d76-4fd7-8323-5b92be63afb9" containerID="757e9a0ca78b5c9be8e7d397d2406ec6f854bb73586e71bec0887198a2e450f2" exitCode=0 Feb 19 03:10:59.829401 master-0 kubenswrapper[7776]: I0219 03:10:59.829350 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" event={"ID":"a59746bb-7d76-4fd7-8323-5b92be63afb9","Type":"ContainerDied","Data":"757e9a0ca78b5c9be8e7d397d2406ec6f854bb73586e71bec0887198a2e450f2"} Feb 19 03:10:59.830039 master-0 kubenswrapper[7776]: I0219 03:10:59.829976 7776 scope.go:117] "RemoveContainer" 
containerID="757e9a0ca78b5c9be8e7d397d2406ec6f854bb73586e71bec0887198a2e450f2" Feb 19 03:10:59.834853 master-0 kubenswrapper[7776]: I0219 03:10:59.834735 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-dcpwb_2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/cluster-node-tuning-operator/0.log" Feb 19 03:10:59.835081 master-0 kubenswrapper[7776]: I0219 03:10:59.834870 7776 generic.go:334] "Generic (PLEG): container finished" podID="2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5" containerID="df34220d8bbf9f2c919dd6d16618c4c0582bf76fef0068e3cc67cfd63cba32a9" exitCode=1 Feb 19 03:10:59.835081 master-0 kubenswrapper[7776]: I0219 03:10:59.834914 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" event={"ID":"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5","Type":"ContainerDied","Data":"df34220d8bbf9f2c919dd6d16618c4c0582bf76fef0068e3cc67cfd63cba32a9"} Feb 19 03:10:59.835585 master-0 kubenswrapper[7776]: I0219 03:10:59.835522 7776 scope.go:117] "RemoveContainer" containerID="df34220d8bbf9f2c919dd6d16618c4c0582bf76fef0068e3cc67cfd63cba32a9" Feb 19 03:11:00.842023 master-0 kubenswrapper[7776]: I0219 03:11:00.841902 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" event={"ID":"a59746bb-7d76-4fd7-8323-5b92be63afb9","Type":"ContainerStarted","Data":"075c2f17f8c40de4ef5a43e9679ffb1112b88d0d2cd16e8c3a34569ded3b80e6"} Feb 19 03:11:00.842023 master-0 kubenswrapper[7776]: I0219 03:11:00.841972 7776 scope.go:117] "RemoveContainer" containerID="b94ac180c85fc64700e5f51d1991f701623c14fa47c5cdb818d4e8a2ca91669a" Feb 19 03:11:00.846806 master-0 kubenswrapper[7776]: I0219 03:11:00.846751 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-dcpwb_2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/cluster-node-tuning-operator/0.log" Feb 19 03:11:00.846947 master-0 kubenswrapper[7776]: I0219 03:11:00.846819 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" event={"ID":"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5","Type":"ContainerStarted","Data":"ff254e606806d258403e28eff2fdee8567998c609505ba8d6b307cec14335600"} Feb 19 03:11:00.905767 master-0 kubenswrapper[7776]: I0219 03:11:00.905638 7776 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:11:01.402251 master-0 kubenswrapper[7776]: I0219 03:11:01.402157 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:11:01.855168 master-0 kubenswrapper[7776]: I0219 03:11:01.854952 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerStarted","Data":"4dafdbf16e4e12628e1dc265ab0c8607f980c06cb5f19358b6fbca76bb67b579"} Feb 19 03:11:01.855168 master-0 kubenswrapper[7776]: I0219 03:11:01.855152 7776 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:11:02.842938 master-0 kubenswrapper[7776]: I0219 03:11:02.842858 7776 scope.go:117] "RemoveContainer" containerID="2cdc1a180a1258ac65d49719e5369984499472e93cb72520a18ffeecda800795" Feb 19 03:11:03.869824 master-0 kubenswrapper[7776]: I0219 03:11:03.869701 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" event={"ID":"2b9d54aa-5f71-4a82-8e71-401ed3083a13","Type":"ContainerStarted","Data":"84d662dd4fdd1383970ef08334843ef9932b238a72433235bfdec45dfc41643e"} Feb 19 03:11:04.402624 master-0 kubenswrapper[7776]: I0219 03:11:04.402526 7776 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:11:04.879583 master-0 kubenswrapper[7776]: I0219 03:11:04.879520 7776 generic.go:334] "Generic (PLEG): container finished" podID="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" containerID="b96163b548b39e7368771cc78a7cc93ce0deae1acb7e2556bf2a0d6f06a4eac4" exitCode=0 Feb 19 03:11:04.879583 master-0 kubenswrapper[7776]: I0219 03:11:04.879571 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" event={"ID":"1f9e07d3-d157-4948-84a6-04b8aa7eef4c","Type":"ContainerDied","Data":"b96163b548b39e7368771cc78a7cc93ce0deae1acb7e2556bf2a0d6f06a4eac4"} Feb 19 03:11:04.880600 master-0 kubenswrapper[7776]: I0219 03:11:04.880086 7776 scope.go:117] "RemoveContainer" containerID="b96163b548b39e7368771cc78a7cc93ce0deae1acb7e2556bf2a0d6f06a4eac4" Feb 19 03:11:05.887928 master-0 kubenswrapper[7776]: I0219 03:11:05.887832 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" event={"ID":"1f9e07d3-d157-4948-84a6-04b8aa7eef4c","Type":"ContainerStarted","Data":"b3f4a77f73c67cb09a0b1a3711d7b4548b43a8d4dd59d5b931682ba668f229b0"} Feb 19 03:11:06.892608 master-0 kubenswrapper[7776]: I0219 03:11:06.892518 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:11:06.893569 master-0 kubenswrapper[7776]: I0219 03:11:06.892608 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 03:11:06.897889 master-0 kubenswrapper[7776]: I0219 03:11:06.897799 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/2.log" Feb 19 03:11:06.898846 master-0 kubenswrapper[7776]: I0219 03:11:06.898781 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/1.log" Feb 19 03:11:06.898982 master-0 kubenswrapper[7776]: I0219 03:11:06.898854 7776 generic.go:334] "Generic (PLEG): container finished" podID="3edc7410-417a-4e55-9276-ac271fd52297" containerID="40f21b66295146208ac6883b550126dd464dc59801ea5eec8001be9ddf550599" exitCode=255 Feb 19 03:11:06.899102 master-0 kubenswrapper[7776]: I0219 03:11:06.898973 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerDied","Data":"40f21b66295146208ac6883b550126dd464dc59801ea5eec8001be9ddf550599"} Feb 19 03:11:06.899216 master-0 kubenswrapper[7776]: I0219 03:11:06.899044 7776 scope.go:117] "RemoveContainer" containerID="65990edcc46b375933fbda1eec1ec1a04dd2a02112107f18658b1af8d7458102" Feb 19 03:11:06.901602 master-0 kubenswrapper[7776]: I0219 03:11:06.900887 7776 scope.go:117] "RemoveContainer" containerID="40f21b66295146208ac6883b550126dd464dc59801ea5eec8001be9ddf550599" Feb 19 03:11:06.901790 master-0 kubenswrapper[7776]: E0219 03:11:06.901707 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-c48c8bf7c-f7fvc_openshift-service-ca-operator(3edc7410-417a-4e55-9276-ac271fd52297)\"" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" podUID="3edc7410-417a-4e55-9276-ac271fd52297" Feb 19 03:11:06.904999 master-0 kubenswrapper[7776]: I0219 03:11:06.904926 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/2.log" Feb 19 03:11:06.905710 master-0 kubenswrapper[7776]: I0219 03:11:06.905651 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/1.log" Feb 19 03:11:06.905874 master-0 kubenswrapper[7776]: I0219 03:11:06.905738 7776 generic.go:334] "Generic (PLEG): container finished" podID="4714ef51-2d24-4938-8c58-80c1485a368b" containerID="61d4c7db9949cabb346e6b5c6f267c3cd30095b418d6916ce487053c09f5bbd9" exitCode=255 Feb 19 03:11:06.906024 master-0 kubenswrapper[7776]: I0219 03:11:06.905879 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerDied","Data":"61d4c7db9949cabb346e6b5c6f267c3cd30095b418d6916ce487053c09f5bbd9"} Feb 19 03:11:06.906749 master-0 kubenswrapper[7776]: I0219 03:11:06.906542 7776 scope.go:117] "RemoveContainer" containerID="61d4c7db9949cabb346e6b5c6f267c3cd30095b418d6916ce487053c09f5bbd9" Feb 19 03:11:06.906926 master-0 kubenswrapper[7776]: E0219 03:11:06.906876 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-5d87bf58c-lbfvq_openshift-kube-apiserver-operator(4714ef51-2d24-4938-8c58-80c1485a368b)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" 
podUID="4714ef51-2d24-4938-8c58-80c1485a368b" Feb 19 03:11:06.908424 master-0 kubenswrapper[7776]: I0219 03:11:06.908329 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/2.log" Feb 19 03:11:06.909534 master-0 kubenswrapper[7776]: I0219 03:11:06.909113 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/1.log" Feb 19 03:11:06.909534 master-0 kubenswrapper[7776]: I0219 03:11:06.909179 7776 generic.go:334] "Generic (PLEG): container finished" podID="c791d8d0-6d78-4cdc-bac2-aa39bd3aae21" containerID="86f20f93c3f50a3529fa79e0b6468f791d85c5c63dd623a77eb62ec52b0785bc" exitCode=255 Feb 19 03:11:06.909534 master-0 kubenswrapper[7776]: I0219 03:11:06.909275 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" event={"ID":"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21","Type":"ContainerDied","Data":"86f20f93c3f50a3529fa79e0b6468f791d85c5c63dd623a77eb62ec52b0785bc"} Feb 19 03:11:06.910440 master-0 kubenswrapper[7776]: I0219 03:11:06.910176 7776 scope.go:117] "RemoveContainer" containerID="86f20f93c3f50a3529fa79e0b6468f791d85c5c63dd623a77eb62ec52b0785bc" Feb 19 03:11:06.912146 master-0 kubenswrapper[7776]: E0219 03:11:06.911546 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=network-operator pod=network-operator-7d7db75979-jbztp_openshift-network-operator(c791d8d0-6d78-4cdc-bac2-aa39bd3aae21)\"" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" podUID="c791d8d0-6d78-4cdc-bac2-aa39bd3aae21" Feb 19 03:11:06.912146 master-0 kubenswrapper[7776]: I0219 03:11:06.911734 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/2.log" Feb 19 03:11:06.912578 master-0 kubenswrapper[7776]: I0219 03:11:06.912216 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/1.log" Feb 19 03:11:06.912578 master-0 kubenswrapper[7776]: I0219 03:11:06.912285 7776 generic.go:334] "Generic (PLEG): container finished" podID="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" containerID="d0d44f45186dc14ce0bc7dc97e190ce8663cf19d313b3812b2eeb67bbc3b7464" exitCode=255 Feb 19 03:11:06.912578 master-0 kubenswrapper[7776]: I0219 03:11:06.912366 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" event={"ID":"5301cbc9-b3f3-4b2d-a114-1ba0752462f1","Type":"ContainerDied","Data":"d0d44f45186dc14ce0bc7dc97e190ce8663cf19d313b3812b2eeb67bbc3b7464"} Feb 19 03:11:06.912966 master-0 kubenswrapper[7776]: I0219 03:11:06.912925 7776 scope.go:117] "RemoveContainer" containerID="d0d44f45186dc14ce0bc7dc97e190ce8663cf19d313b3812b2eeb67bbc3b7464" Feb 19 03:11:06.913244 master-0 kubenswrapper[7776]: E0219 03:11:06.913203 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-77cd4d9559-w5pp8_openshift-kube-scheduler-operator(5301cbc9-b3f3-4b2d-a114-1ba0752462f1)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" podUID="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" Feb 19 03:11:06.915358 master-0 kubenswrapper[7776]: I0219 03:11:06.915233 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/3.log" Feb 19 03:11:06.916086 master-0 kubenswrapper[7776]: I0219 03:11:06.916049 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/2.log" Feb 19 03:11:06.916162 master-0 kubenswrapper[7776]: I0219 03:11:06.916095 7776 generic.go:334] "Generic (PLEG): container finished" podID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerID="b74e1ef658deba9054cacd4e4b2f892ff9bc29e9e78ce49be09ab91b8d5e8936" exitCode=255 Feb 19 03:11:06.916211 master-0 kubenswrapper[7776]: I0219 03:11:06.916195 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerDied","Data":"b74e1ef658deba9054cacd4e4b2f892ff9bc29e9e78ce49be09ab91b8d5e8936"} Feb 19 03:11:06.916676 master-0 kubenswrapper[7776]: I0219 03:11:06.916594 7776 scope.go:117] "RemoveContainer" containerID="b74e1ef658deba9054cacd4e4b2f892ff9bc29e9e78ce49be09ab91b8d5e8936" Feb 19 03:11:06.917052 master-0 kubenswrapper[7776]: E0219 03:11:06.916826 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=etcd-operator pod=etcd-operator-545bf96f4d-r7r6p_openshift-etcd-operator(4c3267e5-390a-40a3-bff8-1d1d81fb9a17)\"" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" podUID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" Feb 19 03:11:06.919317 master-0 kubenswrapper[7776]: I0219 03:11:06.918820 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/2.log" Feb 19 03:11:06.919317 master-0 kubenswrapper[7776]: I0219 03:11:06.919619 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/1.log" Feb 19 03:11:06.919317 master-0 kubenswrapper[7776]: I0219 03:11:06.919750 7776 generic.go:334] "Generic (PLEG): container finished" podID="6c9ed390-3b62-4b81-8c03-0c579a4a686a" containerID="a38db84d334bb1ae612379c88129d14d14422aea1a4e6c8d5e3a4de4afd35891" exitCode=255 Feb 19 03:11:06.919317 master-0 kubenswrapper[7776]: I0219 03:11:06.919851 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" event={"ID":"6c9ed390-3b62-4b81-8c03-0c579a4a686a","Type":"ContainerDied","Data":"a38db84d334bb1ae612379c88129d14d14422aea1a4e6c8d5e3a4de4afd35891"} Feb 19 03:11:06.919317 master-0 kubenswrapper[7776]: I0219 03:11:06.920567 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:11:06.919317 master-0 kubenswrapper[7776]: I0219 03:11:06.920623 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:11:06.919317 master-0 kubenswrapper[7776]: I0219 03:11:06.920660 7776 scope.go:117] "RemoveContainer" containerID="a38db84d334bb1ae612379c88129d14d14422aea1a4e6c8d5e3a4de4afd35891" Feb 19 03:11:06.921801 master-0 kubenswrapper[7776]: E0219 03:11:06.920961 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-7bcfbc574b-k7xlc_openshift-kube-controller-manager-operator(6c9ed390-3b62-4b81-8c03-0c579a4a686a)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" podUID="6c9ed390-3b62-4b81-8c03-0c579a4a686a" Feb 19 03:11:06.922337 master-0 kubenswrapper[7776]: I0219 03:11:06.922281 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/3.log" Feb 19 03:11:06.922740 master-0 kubenswrapper[7776]: I0219 03:11:06.922713 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/2.log" Feb 19 03:11:06.922793 master-0 kubenswrapper[7776]: I0219 03:11:06.922759 7776 generic.go:334] "Generic (PLEG): container finished" podID="05c9cb4a-5249-4116-a2e5-caa7859e2075" containerID="20eff9a38f665e5f446346726f2e9ae69e64da44d267bdbea6151ec6a1ecbe55" exitCode=255 Feb 19 03:11:06.922793 master-0 kubenswrapper[7776]: I0219 03:11:06.922784 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" event={"ID":"05c9cb4a-5249-4116-a2e5-caa7859e2075","Type":"ContainerDied","Data":"20eff9a38f665e5f446346726f2e9ae69e64da44d267bdbea6151ec6a1ecbe55"} Feb 19 03:11:06.923225 master-0 kubenswrapper[7776]: I0219 03:11:06.923102 7776 scope.go:117] "RemoveContainer" containerID="20eff9a38f665e5f446346726f2e9ae69e64da44d267bdbea6151ec6a1ecbe55" Feb 19 03:11:06.923352 master-0 kubenswrapper[7776]: E0219 03:11:06.923319 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-584cc7bcb5-c7c8v_openshift-controller-manager-operator(05c9cb4a-5249-4116-a2e5-caa7859e2075)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" 
podUID="05c9cb4a-5249-4116-a2e5-caa7859e2075" Feb 19 03:11:06.928126 master-0 kubenswrapper[7776]: I0219 03:11:06.926712 7776 scope.go:117] "RemoveContainer" containerID="c2c37fa8442b4703e54aab94b6a44d53dfb0bc5765d90a9a7ef5662786b2cd74" Feb 19 03:11:06.966000 master-0 kubenswrapper[7776]: I0219 03:11:06.965896 7776 scope.go:117] "RemoveContainer" containerID="3526ed2fea950f5feea7370e198355ca1c87bb7826298c9748a04ae14fb0f72d" Feb 19 03:11:07.003136 master-0 kubenswrapper[7776]: I0219 03:11:07.002239 7776 scope.go:117] "RemoveContainer" containerID="defe4f7170cc44c3523dd8efff39d38897244ccd7ed44fbd45efb9c3c2bb106e" Feb 19 03:11:07.031351 master-0 kubenswrapper[7776]: I0219 03:11:07.031285 7776 scope.go:117] "RemoveContainer" containerID="7ee061fd852b062c9f3109c7ff8d6c80d204653976e539cbd00904008f50cdbc" Feb 19 03:11:07.053241 master-0 kubenswrapper[7776]: I0219 03:11:07.053207 7776 scope.go:117] "RemoveContainer" containerID="c26f9dd77de93381b32286d233ebe8a661621d7ab6999e089af78dc321bb05ed" Feb 19 03:11:07.083701 master-0 kubenswrapper[7776]: I0219 03:11:07.083662 7776 scope.go:117] "RemoveContainer" containerID="20d7a1f3e44571d9a483f373b1494135038a1cbd5b2640858e1087b2f468a77c" Feb 19 03:11:07.931791 master-0 kubenswrapper[7776]: I0219 03:11:07.931735 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/2.log" Feb 19 03:11:07.933766 master-0 kubenswrapper[7776]: I0219 03:11:07.933745 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/2.log" Feb 19 03:11:07.935925 master-0 kubenswrapper[7776]: I0219 03:11:07.935905 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/2.log" Feb 19 03:11:07.937475 master-0 kubenswrapper[7776]: I0219 03:11:07.937438 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/3.log" Feb 19 03:11:07.938983 master-0 kubenswrapper[7776]: I0219 03:11:07.938957 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/2.log" Feb 19 03:11:07.940466 master-0 kubenswrapper[7776]: I0219 03:11:07.940446 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/3.log" Feb 19 03:11:07.941905 master-0 kubenswrapper[7776]: I0219 03:11:07.941882 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/2.log" Feb 19 03:11:08.688826 master-0 kubenswrapper[7776]: E0219 03:11:08.688552 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 
19 03:11:09.893605 master-0 kubenswrapper[7776]: I0219 03:11:09.893508 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:11:09.894174 master-0 kubenswrapper[7776]: I0219 03:11:09.893605 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 03:11:09.920056 master-0 kubenswrapper[7776]: I0219 03:11:09.919990 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:11:09.920297 master-0 kubenswrapper[7776]: I0219 03:11:09.920148 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:11:09.954814 master-0 kubenswrapper[7776]: I0219 03:11:09.954743 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-576b4d78bd-92gqk_18b29e37-cda9-41a8-a910-3d8f74be3cf3/service-ca-controller/1.log" Feb 19 03:11:09.955446 master-0 kubenswrapper[7776]: I0219 03:11:09.955371 7776 generic.go:334] "Generic (PLEG): container finished" podID="18b29e37-cda9-41a8-a910-3d8f74be3cf3" containerID="9e9d3d42da46d1a6d18e0de03a09b726c32bb354f1e9ff23661a98024aebe2a1" exitCode=255 Feb 19 03:11:09.955508 master-0 kubenswrapper[7776]: I0219 03:11:09.955454 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" event={"ID":"18b29e37-cda9-41a8-a910-3d8f74be3cf3","Type":"ContainerDied","Data":"9e9d3d42da46d1a6d18e0de03a09b726c32bb354f1e9ff23661a98024aebe2a1"} Feb 19 03:11:09.955508 master-0 kubenswrapper[7776]: I0219 03:11:09.955497 7776 scope.go:117] "RemoveContainer" containerID="f411fdec6c82335e157399725224c73768983b7340cb840fe930f78c4eff8997" Feb 19 03:11:09.956242 master-0 kubenswrapper[7776]: I0219 03:11:09.956182 7776 scope.go:117] "RemoveContainer" containerID="9e9d3d42da46d1a6d18e0de03a09b726c32bb354f1e9ff23661a98024aebe2a1" Feb 19 03:11:09.956690 master-0 kubenswrapper[7776]: E0219 03:11:09.956634 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=service-ca-controller pod=service-ca-576b4d78bd-92gqk_openshift-service-ca(18b29e37-cda9-41a8-a910-3d8f74be3cf3)\"" pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" podUID="18b29e37-cda9-41a8-a910-3d8f74be3cf3" Feb 19 03:11:10.906082 master-0 kubenswrapper[7776]: I0219 03:11:10.905969 7776 
prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:11:10.962759 master-0 kubenswrapper[7776]: I0219 03:11:10.962630 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-576b4d78bd-92gqk_18b29e37-cda9-41a8-a910-3d8f74be3cf3/service-ca-controller/1.log" Feb 19 03:11:11.413709 master-0 kubenswrapper[7776]: I0219 03:11:11.413655 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:11:11.426555 master-0 kubenswrapper[7776]: I0219 03:11:11.426505 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:11:12.893708 master-0 kubenswrapper[7776]: I0219 03:11:12.893584 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:11:12.893708 master-0 kubenswrapper[7776]: I0219 03:11:12.893684 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:11:12.894531 master-0 kubenswrapper[7776]: I0219 03:11:12.893746 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:11:12.894584 master-0 kubenswrapper[7776]: I0219 03:11:12.894537 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"4dafdbf16e4e12628e1dc265ab0c8607f980c06cb5f19358b6fbca76bb67b579"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 19 03:11:12.894639 master-0 kubenswrapper[7776]: I0219 03:11:12.894585 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" containerID="cri-o://4dafdbf16e4e12628e1dc265ab0c8607f980c06cb5f19358b6fbca76bb67b579" gracePeriod=30 Feb 19 03:11:12.903223 master-0 kubenswrapper[7776]: I0219 03:11:12.903158 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:34234->10.128.0.19:8443: read: connection reset by peer" start-of-body= Feb 19 03:11:12.903223 master-0 kubenswrapper[7776]: I0219 03:11:12.903221 7776 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:34234->10.128.0.19:8443: read: connection reset by peer" Feb 19 03:11:12.904631 master-0 kubenswrapper[7776]: I0219 03:11:12.903729 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:11:12.904631 master-0 kubenswrapper[7776]: I0219 03:11:12.903787 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:11:13.331989 master-0 kubenswrapper[7776]: E0219 03:11:13.331914 7776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbc2f7d0_4bae_4d4a_b041_a624ec2b9333.slice/crio-a5ecaa40749c938a80fde33cdf7954d6eceb84a6560fb8894afe0cf368d43640.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:11:13.985971 master-0 kubenswrapper[7776]: I0219 03:11:13.985894 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/2.log" Feb 19 03:11:13.987006 master-0 kubenswrapper[7776]: I0219 03:11:13.986927 7776 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerID="4dafdbf16e4e12628e1dc265ab0c8607f980c06cb5f19358b6fbca76bb67b579" exitCode=255 Feb 19 03:11:13.987065 master-0 kubenswrapper[7776]: I0219 03:11:13.986995 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerDied","Data":"4dafdbf16e4e12628e1dc265ab0c8607f980c06cb5f19358b6fbca76bb67b579"} Feb 19 03:11:13.987161 master-0 kubenswrapper[7776]: I0219 03:11:13.987118 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerStarted","Data":"2a9ccd68c71b55517a0af025c793f340d5a13b8dd01aa9526d809fbaf1a82b89"} Feb 19 03:11:13.987199 master-0 kubenswrapper[7776]: I0219 03:11:13.987167 7776 scope.go:117] "RemoveContainer" containerID="b94ac180c85fc64700e5f51d1991f701623c14fa47c5cdb818d4e8a2ca91669a" Feb 19 03:11:13.987446 master-0 kubenswrapper[7776]: I0219 03:11:13.987412 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:11:13.990756 master-0 kubenswrapper[7776]: I0219 03:11:13.989987 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/0.log" Feb 19 
03:11:13.991863 master-0 kubenswrapper[7776]: I0219 03:11:13.991818 7776 generic.go:334] "Generic (PLEG): container finished" podID="98ac5423-b231-44e5-9545-424d635ed6ee" containerID="fe4faf0d4ffb2ebe11ee7bb3c950e62a3098091a94099dff9022e530a80d494a" exitCode=1 Feb 19 03:11:13.991928 master-0 kubenswrapper[7776]: I0219 03:11:13.991901 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" event={"ID":"98ac5423-b231-44e5-9545-424d635ed6ee","Type":"ContainerDied","Data":"fe4faf0d4ffb2ebe11ee7bb3c950e62a3098091a94099dff9022e530a80d494a"} Feb 19 03:11:13.992338 master-0 kubenswrapper[7776]: I0219 03:11:13.992300 7776 scope.go:117] "RemoveContainer" containerID="fe4faf0d4ffb2ebe11ee7bb3c950e62a3098091a94099dff9022e530a80d494a" Feb 19 03:11:14.002180 master-0 kubenswrapper[7776]: I0219 03:11:14.002128 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-mcz8l_fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/openshift-apiserver-operator/1.log" Feb 19 03:11:14.002894 master-0 kubenswrapper[7776]: I0219 03:11:14.002849 7776 generic.go:334] "Generic (PLEG): container finished" podID="fbc2f7d0-4bae-4d4a-b041-a624ec2b9333" containerID="a5ecaa40749c938a80fde33cdf7954d6eceb84a6560fb8894afe0cf368d43640" exitCode=255 Feb 19 03:11:14.002947 master-0 kubenswrapper[7776]: I0219 03:11:14.002903 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" event={"ID":"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333","Type":"ContainerDied","Data":"a5ecaa40749c938a80fde33cdf7954d6eceb84a6560fb8894afe0cf368d43640"} Feb 19 03:11:14.003783 master-0 kubenswrapper[7776]: I0219 03:11:14.003747 7776 scope.go:117] "RemoveContainer" containerID="a5ecaa40749c938a80fde33cdf7954d6eceb84a6560fb8894afe0cf368d43640" Feb 19 03:11:14.004246 master-0 kubenswrapper[7776]: E0219 03:11:14.004159 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-8586dccc9b-mcz8l_openshift-apiserver-operator(fbc2f7d0-4bae-4d4a-b041-a624ec2b9333)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" podUID="fbc2f7d0-4bae-4d4a-b041-a624ec2b9333" Feb 19 03:11:14.021190 master-0 kubenswrapper[7776]: I0219 03:11:14.021093 7776 scope.go:117] "RemoveContainer" containerID="e01bf7d4c559915b2a5ff79bf9dc359fe2aeec2863993dd1c97dd95da4862d3c" Feb 19 03:11:15.010742 master-0 kubenswrapper[7776]: I0219 03:11:15.010638 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/0.log" Feb 19 03:11:15.011606 master-0 kubenswrapper[7776]: I0219 03:11:15.011090 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" event={"ID":"98ac5423-b231-44e5-9545-424d635ed6ee","Type":"ContainerStarted","Data":"4eaad01f93ee8b4305631434a093be13923a43fc42e41b75e5ee71770a4807d1"} Feb 19 03:11:15.012380 master-0 kubenswrapper[7776]: I0219 03:11:15.012332 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:11:15.013421 
master-0 kubenswrapper[7776]: I0219 03:11:15.013378 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-mcz8l_fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/openshift-apiserver-operator/1.log" Feb 19 03:11:15.016630 master-0 kubenswrapper[7776]: I0219 03:11:15.016592 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/2.log" Feb 19 03:11:16.027606 master-0 kubenswrapper[7776]: I0219 03:11:16.027493 7776 generic.go:334] "Generic (PLEG): container finished" podID="ac7a5635-30b4-4076-babb-db1abd26da88" containerID="28e9a6d187a12869ec261835ca18a693541d1e5178c38a94171dac51f3ea3706" exitCode=0 Feb 19 03:11:16.028518 master-0 kubenswrapper[7776]: I0219 03:11:16.027646 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" event={"ID":"ac7a5635-30b4-4076-babb-db1abd26da88","Type":"ContainerDied","Data":"28e9a6d187a12869ec261835ca18a693541d1e5178c38a94171dac51f3ea3706"} Feb 19 03:11:16.029280 master-0 kubenswrapper[7776]: I0219 03:11:16.029194 7776 scope.go:117] "RemoveContainer" containerID="28e9a6d187a12869ec261835ca18a693541d1e5178c38a94171dac51f3ea3706" Feb 19 03:11:16.031397 master-0 kubenswrapper[7776]: I0219 03:11:16.031331 7776 generic.go:334] "Generic (PLEG): container finished" podID="61abb34a-08f0-4438-9a89-c712b2048878" containerID="0433548866cd3801c8b397fe3536ec33408d7af2a4a96c584b21e1d45a8f492e" exitCode=0 Feb 19 03:11:16.031531 master-0 kubenswrapper[7776]: I0219 03:11:16.031445 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" event={"ID":"61abb34a-08f0-4438-9a89-c712b2048878","Type":"ContainerDied","Data":"0433548866cd3801c8b397fe3536ec33408d7af2a4a96c584b21e1d45a8f492e"} Feb 19 03:11:16.032135 master-0 kubenswrapper[7776]: I0219 03:11:16.032078 7776 scope.go:117] "RemoveContainer" containerID="0433548866cd3801c8b397fe3536ec33408d7af2a4a96c584b21e1d45a8f492e" Feb 19 03:11:17.046032 master-0 kubenswrapper[7776]: I0219 03:11:17.045906 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" event={"ID":"61abb34a-08f0-4438-9a89-c712b2048878","Type":"ContainerStarted","Data":"e967e4bdcd17904293fe64ffaea6f290221329babeb23091aec673f02b8e7ca3"} Feb 19 03:11:17.049062 master-0 kubenswrapper[7776]: I0219 03:11:17.048995 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" event={"ID":"ac7a5635-30b4-4076-babb-db1abd26da88","Type":"ContainerStarted","Data":"30c30ae58bac1ba564b708437a7988f71fa6bcce49d387d7985db2d5834df1d5"} Feb 19 03:11:17.049533 master-0 kubenswrapper[7776]: I0219 03:11:17.049473 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:11:17.057362 master-0 kubenswrapper[7776]: I0219 03:11:17.057297 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:11:17.914297 master-0 kubenswrapper[7776]: I0219 03:11:17.914174 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:11:17.919438 master-0 kubenswrapper[7776]: I0219 03:11:17.919348 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:11:17.928969 master-0 kubenswrapper[7776]: I0219 03:11:17.928899 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:11:18.843411 master-0 kubenswrapper[7776]: I0219 03:11:18.843352 7776 scope.go:117] "RemoveContainer" containerID="86f20f93c3f50a3529fa79e0b6468f791d85c5c63dd623a77eb62ec52b0785bc" Feb 19 03:11:18.844152 master-0 kubenswrapper[7776]: I0219 03:11:18.843511 7776 scope.go:117] "RemoveContainer" containerID="20eff9a38f665e5f446346726f2e9ae69e64da44d267bdbea6151ec6a1ecbe55" Feb 19 03:11:18.844152 master-0 kubenswrapper[7776]: I0219 03:11:18.843591 7776 scope.go:117] "RemoveContainer" containerID="40f21b66295146208ac6883b550126dd464dc59801ea5eec8001be9ddf550599" Feb 19 03:11:18.844152 master-0 kubenswrapper[7776]: E0219 03:11:18.843638 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=network-operator pod=network-operator-7d7db75979-jbztp_openshift-network-operator(c791d8d0-6d78-4cdc-bac2-aa39bd3aae21)\"" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" podUID="c791d8d0-6d78-4cdc-bac2-aa39bd3aae21" Feb 19 03:11:18.844152 master-0 kubenswrapper[7776]: E0219 03:11:18.843782 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-584cc7bcb5-c7c8v_openshift-controller-manager-operator(05c9cb4a-5249-4116-a2e5-caa7859e2075)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" podUID="05c9cb4a-5249-4116-a2e5-caa7859e2075" Feb 19 03:11:18.844152 master-0 kubenswrapper[7776]: E0219 03:11:18.843850 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-c48c8bf7c-f7fvc_openshift-service-ca-operator(3edc7410-417a-4e55-9276-ac271fd52297)\"" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" podUID="3edc7410-417a-4e55-9276-ac271fd52297" Feb 19 03:11:19.843623 master-0 kubenswrapper[7776]: I0219 03:11:19.843543 7776 scope.go:117] "RemoveContainer" containerID="a38db84d334bb1ae612379c88129d14d14422aea1a4e6c8d5e3a4de4afd35891" Feb 19 03:11:19.844585 master-0 kubenswrapper[7776]: E0219 03:11:19.843935 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-7bcfbc574b-k7xlc_openshift-kube-controller-manager-operator(6c9ed390-3b62-4b81-8c03-0c579a4a686a)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" podUID="6c9ed390-3b62-4b81-8c03-0c579a4a686a" Feb 19 03:11:19.844585 master-0 kubenswrapper[7776]: I0219 03:11:19.844319 7776 scope.go:117] 
"RemoveContainer" containerID="d0d44f45186dc14ce0bc7dc97e190ce8663cf19d313b3812b2eeb67bbc3b7464" Feb 19 03:11:19.844585 master-0 kubenswrapper[7776]: E0219 03:11:19.844538 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-77cd4d9559-w5pp8_openshift-kube-scheduler-operator(5301cbc9-b3f3-4b2d-a114-1ba0752462f1)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" podUID="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" Feb 19 03:11:20.842754 master-0 kubenswrapper[7776]: I0219 03:11:20.842671 7776 scope.go:117] "RemoveContainer" containerID="b74e1ef658deba9054cacd4e4b2f892ff9bc29e9e78ce49be09ab91b8d5e8936" Feb 19 03:11:20.843005 master-0 kubenswrapper[7776]: I0219 03:11:20.842818 7776 scope.go:117] "RemoveContainer" containerID="61d4c7db9949cabb346e6b5c6f267c3cd30095b418d6916ce487053c09f5bbd9" Feb 19 03:11:20.843056 master-0 kubenswrapper[7776]: E0219 03:11:20.843027 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=etcd-operator pod=etcd-operator-545bf96f4d-r7r6p_openshift-etcd-operator(4c3267e5-390a-40a3-bff8-1d1d81fb9a17)\"" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" podUID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" Feb 19 03:11:20.843126 master-0 kubenswrapper[7776]: E0219 03:11:20.843032 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-5d87bf58c-lbfvq_openshift-kube-apiserver-operator(4714ef51-2d24-4938-8c58-80c1485a368b)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" podUID="4714ef51-2d24-4938-8c58-80c1485a368b" Feb 19 03:11:23.844959 master-0 kubenswrapper[7776]: I0219 03:11:23.844879 7776 scope.go:117] "RemoveContainer" containerID="9e9d3d42da46d1a6d18e0de03a09b726c32bb354f1e9ff23661a98024aebe2a1" Feb 19 03:11:24.096871 master-0 kubenswrapper[7776]: I0219 03:11:24.096691 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-576b4d78bd-92gqk_18b29e37-cda9-41a8-a910-3d8f74be3cf3/service-ca-controller/1.log" Feb 19 03:11:24.096871 master-0 kubenswrapper[7776]: I0219 03:11:24.096813 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" event={"ID":"18b29e37-cda9-41a8-a910-3d8f74be3cf3","Type":"ContainerStarted","Data":"b38f9732c5ad0fdaf85a5b0eeace05d1423b42a0f4f33da5f05d90978e47d8c3"} Feb 19 03:11:25.843157 master-0 kubenswrapper[7776]: I0219 03:11:25.843106 7776 scope.go:117] "RemoveContainer" containerID="a5ecaa40749c938a80fde33cdf7954d6eceb84a6560fb8894afe0cf368d43640" Feb 19 03:11:26.111590 master-0 kubenswrapper[7776]: I0219 03:11:26.111462 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-mcz8l_fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/openshift-apiserver-operator/1.log" Feb 19 03:11:26.111590 master-0 kubenswrapper[7776]: I0219 03:11:26.111527 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" 
event={"ID":"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333","Type":"ContainerStarted","Data":"2d9de381b3f8985327172868d95c736182326f4019f8e4ffb8e49b11a18482bb"} Feb 19 03:11:29.843349 master-0 kubenswrapper[7776]: I0219 03:11:29.843158 7776 scope.go:117] "RemoveContainer" containerID="86f20f93c3f50a3529fa79e0b6468f791d85c5c63dd623a77eb62ec52b0785bc" Feb 19 03:11:30.144531 master-0 kubenswrapper[7776]: I0219 03:11:30.144459 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/2.log" Feb 19 03:11:30.144857 master-0 kubenswrapper[7776]: I0219 03:11:30.144545 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-jbztp" event={"ID":"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21","Type":"ContainerStarted","Data":"80fae86b6444b832936906f4af236b682b20d4c74b011ed1a2ad10c745c8d2a1"} Feb 19 03:11:30.843932 master-0 kubenswrapper[7776]: I0219 03:11:30.843311 7776 scope.go:117] "RemoveContainer" containerID="d0d44f45186dc14ce0bc7dc97e190ce8663cf19d313b3812b2eeb67bbc3b7464" Feb 19 03:11:30.844576 master-0 kubenswrapper[7776]: I0219 03:11:30.843976 7776 scope.go:117] "RemoveContainer" containerID="a38db84d334bb1ae612379c88129d14d14422aea1a4e6c8d5e3a4de4afd35891" Feb 19 03:11:30.844576 master-0 kubenswrapper[7776]: I0219 03:11:30.844116 7776 scope.go:117] "RemoveContainer" containerID="40f21b66295146208ac6883b550126dd464dc59801ea5eec8001be9ddf550599" Feb 19 03:11:31.159992 master-0 kubenswrapper[7776]: I0219 03:11:31.159931 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/2.log" Feb 19 03:11:31.160350 master-0 kubenswrapper[7776]: I0219 03:11:31.160044 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" event={"ID":"6c9ed390-3b62-4b81-8c03-0c579a4a686a","Type":"ContainerStarted","Data":"1ea1584a50eaf2073fc08bf226a125955243b785667fd095cb8965bae60ab1da"} Feb 19 03:11:31.162831 master-0 kubenswrapper[7776]: I0219 03:11:31.162809 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/2.log" Feb 19 03:11:31.162946 master-0 kubenswrapper[7776]: I0219 03:11:31.162881 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerStarted","Data":"6a5db57d3cdfa9709ab008271a7de8b76cb4f5beeb18f426e1c635fff0d68431"} Feb 19 03:11:31.165080 master-0 kubenswrapper[7776]: I0219 03:11:31.165021 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/2.log" Feb 19 03:11:31.165174 master-0 kubenswrapper[7776]: I0219 03:11:31.165116 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" event={"ID":"5301cbc9-b3f3-4b2d-a114-1ba0752462f1","Type":"ContainerStarted","Data":"884bbb0c6a080e3a39941119b143247d39406ba0108a748fdc28ca8ad12d533d"} Feb 19 03:11:33.845594 master-0 
kubenswrapper[7776]: I0219 03:11:33.845494 7776 scope.go:117] "RemoveContainer" containerID="20eff9a38f665e5f446346726f2e9ae69e64da44d267bdbea6151ec6a1ecbe55" Feb 19 03:11:33.845594 master-0 kubenswrapper[7776]: I0219 03:11:33.845583 7776 scope.go:117] "RemoveContainer" containerID="61d4c7db9949cabb346e6b5c6f267c3cd30095b418d6916ce487053c09f5bbd9" Feb 19 03:11:33.846373 master-0 kubenswrapper[7776]: E0219 03:11:33.845816 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-584cc7bcb5-c7c8v_openshift-controller-manager-operator(05c9cb4a-5249-4116-a2e5-caa7859e2075)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" podUID="05c9cb4a-5249-4116-a2e5-caa7859e2075" Feb 19 03:11:34.187293 master-0 kubenswrapper[7776]: I0219 03:11:34.187228 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/2.log" Feb 19 03:11:34.187714 master-0 kubenswrapper[7776]: I0219 03:11:34.187494 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerStarted","Data":"49ac40cd49fe9f544ea18cf9db242f3b1d372ceb484dc7cc80e9da742f93d130"} Feb 19 03:11:35.295301 master-0 kubenswrapper[7776]: I0219 03:11:35.295136 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:11:35.296396 master-0 kubenswrapper[7776]: I0219 03:11:35.296050 7776 scope.go:117] "RemoveContainer" containerID="b74e1ef658deba9054cacd4e4b2f892ff9bc29e9e78ce49be09ab91b8d5e8936" Feb 19 03:11:35.296533 master-0 kubenswrapper[7776]: E0219 03:11:35.296389 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=etcd-operator pod=etcd-operator-545bf96f4d-r7r6p_openshift-etcd-operator(4c3267e5-390a-40a3-bff8-1d1d81fb9a17)\"" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" podUID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" Feb 19 03:11:37.006287 master-0 kubenswrapper[7776]: I0219 03:11:37.006205 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj"] Feb 19 03:11:37.009038 master-0 kubenswrapper[7776]: E0219 03:11:37.006573 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66ac991-af58-490b-8909-e518d301e1b8" containerName="installer" Feb 19 03:11:37.009038 master-0 kubenswrapper[7776]: I0219 03:11:37.006593 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66ac991-af58-490b-8909-e518d301e1b8" containerName="installer" Feb 19 03:11:37.009038 master-0 kubenswrapper[7776]: E0219 03:11:37.006618 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba1213d-8a7d-4b99-857f-b66578cc2bec" containerName="installer" Feb 19 03:11:37.009038 master-0 kubenswrapper[7776]: I0219 03:11:37.006631 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba1213d-8a7d-4b99-857f-b66578cc2bec" containerName="installer" Feb 19 03:11:37.009038 master-0 kubenswrapper[7776]: E0219 
03:11:37.008478 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2561caa0-5f79-496e-8fa7-a9692dca20be" containerName="installer" Feb 19 03:11:37.009038 master-0 kubenswrapper[7776]: I0219 03:11:37.008505 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2561caa0-5f79-496e-8fa7-a9692dca20be" containerName="installer" Feb 19 03:11:37.009038 master-0 kubenswrapper[7776]: I0219 03:11:37.008718 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e66ac991-af58-490b-8909-e518d301e1b8" containerName="installer" Feb 19 03:11:37.009038 master-0 kubenswrapper[7776]: I0219 03:11:37.008739 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2561caa0-5f79-496e-8fa7-a9692dca20be" containerName="installer" Feb 19 03:11:37.009038 master-0 kubenswrapper[7776]: I0219 03:11:37.008760 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="aba1213d-8a7d-4b99-857f-b66578cc2bec" containerName="installer" Feb 19 03:11:37.009794 master-0 kubenswrapper[7776]: I0219 03:11:37.009649 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:37.011971 master-0 kubenswrapper[7776]: I0219 03:11:37.011386 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 19 03:11:37.012084 master-0 kubenswrapper[7776]: I0219 03:11:37.012065 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 19 03:11:37.017504 master-0 kubenswrapper[7776]: I0219 03:11:37.017462 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7"] Feb 19 03:11:37.026496 master-0 kubenswrapper[7776]: I0219 03:11:37.026011 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.052280 master-0 kubenswrapper[7776]: I0219 03:11:37.033771 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 19 03:11:37.052280 master-0 kubenswrapper[7776]: I0219 03:11:37.033818 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 19 03:11:37.052280 master-0 kubenswrapper[7776]: I0219 03:11:37.034065 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 19 03:11:37.052280 master-0 kubenswrapper[7776]: I0219 03:11:37.034229 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 19 03:11:37.059277 master-0 kubenswrapper[7776]: I0219 03:11:37.054220 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj"] Feb 19 03:11:37.077280 master-0 kubenswrapper[7776]: I0219 03:11:37.075843 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7"] Feb 19 03:11:37.113930 master-0 kubenswrapper[7776]: I0219 03:11:37.113871 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj"] Feb 19 03:11:37.123295 master-0 kubenswrapper[7776]: I0219 03:11:37.114727 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.123295 master-0 kubenswrapper[7776]: I0219 03:11:37.122859 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 03:11:37.123295 master-0 kubenswrapper[7776]: I0219 03:11:37.123118 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 03:11:37.123295 master-0 kubenswrapper[7776]: I0219 03:11:37.123175 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 03:11:37.123571 master-0 kubenswrapper[7776]: I0219 03:11:37.123312 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 03:11:37.123571 master-0 kubenswrapper[7776]: I0219 03:11:37.123427 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 03:11:37.127815 master-0 kubenswrapper[7776]: I0219 03:11:37.126314 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-2dvkr"] Feb 19 03:11:37.127815 master-0 kubenswrapper[7776]: I0219 03:11:37.127188 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.131637 master-0 kubenswrapper[7776]: I0219 03:11:37.130423 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874"] Feb 19 03:11:37.131637 master-0 kubenswrapper[7776]: I0219 03:11:37.131532 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:11:37.132275 master-0 kubenswrapper[7776]: I0219 03:11:37.132232 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 19 03:11:37.132697 master-0 kubenswrapper[7776]: I0219 03:11:37.132658 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 19 03:11:37.132782 master-0 kubenswrapper[7776]: I0219 03:11:37.132759 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 19 03:11:37.134001 master-0 kubenswrapper[7776]: I0219 03:11:37.132918 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 19 03:11:37.134001 master-0 kubenswrapper[7776]: I0219 03:11:37.133092 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 19 03:11:37.135002 master-0 kubenswrapper[7776]: I0219 03:11:37.134966 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 03:11:37.135311 master-0 kubenswrapper[7776]: I0219 03:11:37.135285 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 03:11:37.135666 master-0 kubenswrapper[7776]: I0219 03:11:37.135612 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 03:11:37.136269 master-0 kubenswrapper[7776]: I0219 03:11:37.136235 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn"] Feb 19 03:11:37.137159 master-0 kubenswrapper[7776]: I0219 03:11:37.137138 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:37.142585 master-0 kubenswrapper[7776]: I0219 03:11:37.139041 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 19 03:11:37.142585 master-0 kubenswrapper[7776]: I0219 03:11:37.139170 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 19 03:11:37.142585 master-0 kubenswrapper[7776]: I0219 03:11:37.139276 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 19 03:11:37.142585 master-0 kubenswrapper[7776]: I0219 03:11:37.139386 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 19 03:11:37.152784 master-0 kubenswrapper[7776]: I0219 03:11:37.152733 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874"] Feb 19 03:11:37.153362 master-0 kubenswrapper[7776]: I0219 03:11:37.153334 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:37.153609 master-0 kubenswrapper[7776]: I0219 03:11:37.153421 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-images\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.154195 master-0 kubenswrapper[7776]: I0219 03:11:37.154165 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb2v2\" (UniqueName: \"kubernetes.io/projected/af5828ea-090f-4c8f-90e6-c4e405e69ec5-kube-api-access-tb2v2\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.154293 master-0 kubenswrapper[7776]: I0219 03:11:37.154269 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.154368 master-0 kubenswrapper[7776]: I0219 03:11:37.154344 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33bb562f-84e7-4fcb-b008-416c09a5ecf0-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:37.155487 master-0 kubenswrapper[7776]: I0219 03:11:37.154427 7776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kwbk\" (UniqueName: \"kubernetes.io/projected/33bb562f-84e7-4fcb-b008-416c09a5ecf0-kube-api-access-5kwbk\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:37.155487 master-0 kubenswrapper[7776]: I0219 03:11:37.154528 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.156103 master-0 kubenswrapper[7776]: I0219 03:11:37.156030 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-config\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.160956 master-0 kubenswrapper[7776]: I0219 03:11:37.159318 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-2dvkr"] Feb 19 03:11:37.162757 master-0 kubenswrapper[7776]: I0219 03:11:37.162709 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn"] Feb 19 03:11:37.257660 master-0 kubenswrapper[7776]: I0219 03:11:37.257499 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.257660 master-0 kubenswrapper[7776]: I0219 03:11:37.257594 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-service-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.257660 master-0 kubenswrapper[7776]: I0219 03:11:37.257626 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:11:37.257660 master-0 kubenswrapper[7776]: I0219 03:11:37.257650 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc87d\" (UniqueName: \"kubernetes.io/projected/59cea4cb-6374-49b6-97b3-d8a19cc1860f-kube-api-access-tc87d\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:11:37.257965 master-0 kubenswrapper[7776]: I0219 03:11:37.257742 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-config\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.257965 master-0 kubenswrapper[7776]: I0219 03:11:37.257767 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/858a717b-a44e-4b8d-9974-7451a89cf104-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:37.257965 master-0 kubenswrapper[7776]: I0219 03:11:37.257915 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:37.257965 master-0 kubenswrapper[7776]: I0219 03:11:37.257943 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-images\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.258407 master-0 kubenswrapper[7776]: I0219 03:11:37.258350 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq48l\" (UniqueName: \"kubernetes.io/projected/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-kube-api-access-bq48l\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.258478 master-0 kubenswrapper[7776]: I0219 03:11:37.258456 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:37.258651 master-0 kubenswrapper[7776]: I0219 03:11:37.258634 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb2v2\" (UniqueName: \"kubernetes.io/projected/af5828ea-090f-4c8f-90e6-c4e405e69ec5-kube-api-access-tb2v2\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.258711 master-0 kubenswrapper[7776]: E0219 03:11:37.258658 7776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:37.258757 master-0 kubenswrapper[7776]: I0219 03:11:37.258674 
7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-auth-proxy-config\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.258794 master-0 kubenswrapper[7776]: E0219 03:11:37.258764 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert podName:33bb562f-84e7-4fcb-b008-416c09a5ecf0 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:37.758738421 +0000 UTC m=+404.098422989 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert") pod "cluster-autoscaler-operator-86b8dc6d6-pd8lj" (UID: "33bb562f-84e7-4fcb-b008-416c09a5ecf0") : secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:37.258858 master-0 kubenswrapper[7776]: I0219 03:11:37.258834 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-config\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.258892 master-0 kubenswrapper[7776]: I0219 03:11:37.258870 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-snapshots\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.258939 master-0 kubenswrapper[7776]: I0219 03:11:37.258913 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-images\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.258995 master-0 kubenswrapper[7776]: I0219 03:11:37.258965 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.260785 master-0 kubenswrapper[7776]: I0219 03:11:37.260739 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33bb562f-84e7-4fcb-b008-416c09a5ecf0-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:37.260847 master-0 kubenswrapper[7776]: I0219 03:11:37.260813 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-serving-cert\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " 
pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.260904 master-0 kubenswrapper[7776]: I0219 03:11:37.260885 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kwbk\" (UniqueName: \"kubernetes.io/projected/33bb562f-84e7-4fcb-b008-416c09a5ecf0-kube-api-access-5kwbk\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:37.260953 master-0 kubenswrapper[7776]: I0219 03:11:37.260930 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-config\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.261004 master-0 kubenswrapper[7776]: I0219 03:11:37.260987 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.261036 master-0 kubenswrapper[7776]: I0219 03:11:37.261022 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qghmn\" (UniqueName: \"kubernetes.io/projected/858a717b-a44e-4b8d-9974-7451a89cf104-kube-api-access-qghmn\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:37.261066 master-0 kubenswrapper[7776]: I0219 03:11:37.261058 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.261184 master-0 kubenswrapper[7776]: I0219 03:11:37.261162 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jldf2\" (UniqueName: \"kubernetes.io/projected/afee48d5-7b45-42ef-acc8-e591ec479974-kube-api-access-jldf2\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.261950 master-0 kubenswrapper[7776]: I0219 03:11:37.261894 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33bb562f-84e7-4fcb-b008-416c09a5ecf0-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:37.264625 master-0 kubenswrapper[7776]: I0219 03:11:37.264571 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cert\") pod 
\"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.273278 master-0 kubenswrapper[7776]: I0219 03:11:37.273217 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.275942 master-0 kubenswrapper[7776]: I0219 03:11:37.275893 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb2v2\" (UniqueName: \"kubernetes.io/projected/af5828ea-090f-4c8f-90e6-c4e405e69ec5-kube-api-access-tb2v2\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.276533 master-0 kubenswrapper[7776]: I0219 03:11:37.276505 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kwbk\" (UniqueName: \"kubernetes.io/projected/33bb562f-84e7-4fcb-b008-416c09a5ecf0-kube-api-access-5kwbk\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:37.362381 master-0 kubenswrapper[7776]: I0219 03:11:37.362324 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-serving-cert\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.362381 master-0 kubenswrapper[7776]: I0219 03:11:37.362388 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-config\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.362643 master-0 kubenswrapper[7776]: I0219 03:11:37.362416 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qghmn\" (UniqueName: \"kubernetes.io/projected/858a717b-a44e-4b8d-9974-7451a89cf104-kube-api-access-qghmn\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:37.362643 master-0 kubenswrapper[7776]: I0219 03:11:37.362607 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: I0219 03:11:37.362901 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jldf2\" (UniqueName: \"kubernetes.io/projected/afee48d5-7b45-42ef-acc8-e591ec479974-kube-api-access-jldf2\") pod 
\"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: I0219 03:11:37.363022 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: E0219 03:11:37.363142 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: I0219 03:11:37.363145 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-service-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: E0219 03:11:37.363214 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls podName:afee48d5-7b45-42ef-acc8-e591ec479974 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:37.863195315 +0000 UTC m=+404.202879933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls") pod "machine-approver-798b897698-hmpmj" (UID: "afee48d5-7b45-42ef-acc8-e591ec479974") : secret "machine-approver-tls" not found Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: I0219 03:11:37.363279 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: I0219 03:11:37.363338 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc87d\" (UniqueName: \"kubernetes.io/projected/59cea4cb-6374-49b6-97b3-d8a19cc1860f-kube-api-access-tc87d\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: I0219 03:11:37.363290 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-config\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: E0219 03:11:37.363368 7776 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: I0219 
03:11:37.363449 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/858a717b-a44e-4b8d-9974-7451a89cf104-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: E0219 03:11:37.363472 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls podName:59cea4cb-6374-49b6-97b3-d8a19cc1860f nodeName:}" failed. No retries permitted until 2026-02-19 03:11:37.863463293 +0000 UTC m=+404.203147811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-hl874" (UID: "59cea4cb-6374-49b6-97b3-d8a19cc1860f") : secret "samples-operator-tls" not found Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: I0219 03:11:37.363579 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: I0219 03:11:37.363764 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-service-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: I0219 03:11:37.364101 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/858a717b-a44e-4b8d-9974-7451a89cf104-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:37.365745 master-0 kubenswrapper[7776]: I0219 03:11:37.365726 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq48l\" (UniqueName: \"kubernetes.io/projected/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-kube-api-access-bq48l\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.366226 master-0 kubenswrapper[7776]: I0219 03:11:37.365778 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:37.366226 master-0 kubenswrapper[7776]: I0219 03:11:37.365842 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-auth-proxy-config\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.366226 master-0 kubenswrapper[7776]: I0219 03:11:37.365861 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-snapshots\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.366522 master-0 kubenswrapper[7776]: E0219 03:11:37.366497 7776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:37.366576 master-0 kubenswrapper[7776]: I0219 03:11:37.366517 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-snapshots\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.366609 master-0 kubenswrapper[7776]: E0219 03:11:37.366589 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:37.866546353 +0000 UTC m=+404.206230881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:37.367288 master-0 kubenswrapper[7776]: I0219 03:11:37.367244 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-auth-proxy-config\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.367624 master-0 kubenswrapper[7776]: I0219 03:11:37.367587 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-serving-cert\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.369496 master-0 kubenswrapper[7776]: I0219 03:11:37.369450 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9"] Feb 19 03:11:37.370280 master-0 kubenswrapper[7776]: I0219 03:11:37.370238 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:11:37.374066 master-0 kubenswrapper[7776]: I0219 03:11:37.374045 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 19 03:11:37.383060 master-0 kubenswrapper[7776]: I0219 03:11:37.383019 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jldf2\" (UniqueName: \"kubernetes.io/projected/afee48d5-7b45-42ef-acc8-e591ec479974-kube-api-access-jldf2\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.383429 master-0 kubenswrapper[7776]: I0219 03:11:37.383404 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qghmn\" (UniqueName: \"kubernetes.io/projected/858a717b-a44e-4b8d-9974-7451a89cf104-kube-api-access-qghmn\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:37.392043 master-0 kubenswrapper[7776]: I0219 03:11:37.391984 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9"] Feb 19 03:11:37.392624 master-0 kubenswrapper[7776]: I0219 03:11:37.392544 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc87d\" (UniqueName: \"kubernetes.io/projected/59cea4cb-6374-49b6-97b3-d8a19cc1860f-kube-api-access-tc87d\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:11:37.403078 master-0 kubenswrapper[7776]: I0219 03:11:37.403021 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq48l\" (UniqueName: \"kubernetes.io/projected/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-kube-api-access-bq48l\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.408382 master-0 kubenswrapper[7776]: I0219 03:11:37.408302 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:11:37.451620 master-0 kubenswrapper[7776]: I0219 03:11:37.451572 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:11:37.468171 master-0 kubenswrapper[7776]: I0219 03:11:37.465996 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7"] Feb 19 03:11:37.468171 master-0 kubenswrapper[7776]: I0219 03:11:37.466589 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4hzx\" (UniqueName: \"kubernetes.io/projected/494087b2-b532-4c62-89d5-b88a152fa5db-kube-api-access-z4hzx\") pod \"cluster-storage-operator-f94476f49-dnfs9\" (UID: \"494087b2-b532-4c62-89d5-b88a152fa5db\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:11:37.468171 master-0 kubenswrapper[7776]: I0219 03:11:37.466665 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/494087b2-b532-4c62-89d5-b88a152fa5db-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-dnfs9\" (UID: \"494087b2-b532-4c62-89d5-b88a152fa5db\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:11:37.468171 master-0 kubenswrapper[7776]: I0219 03:11:37.466818 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.470866 master-0 kubenswrapper[7776]: I0219 03:11:37.469812 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 03:11:37.470866 master-0 kubenswrapper[7776]: I0219 03:11:37.470062 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 03:11:37.470866 master-0 kubenswrapper[7776]: I0219 03:11:37.470082 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 03:11:37.471372 master-0 kubenswrapper[7776]: I0219 03:11:37.471342 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 03:11:37.472695 master-0 kubenswrapper[7776]: I0219 03:11:37.472640 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 03:11:37.485847 master-0 kubenswrapper[7776]: I0219 03:11:37.478185 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7"] Feb 19 03:11:37.568431 master-0 kubenswrapper[7776]: I0219 03:11:37.568375 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-proxy-tls\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.568592 master-0 kubenswrapper[7776]: I0219 03:11:37.568445 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: 
\"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.568592 master-0 kubenswrapper[7776]: I0219 03:11:37.568478 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4hzx\" (UniqueName: \"kubernetes.io/projected/494087b2-b532-4c62-89d5-b88a152fa5db-kube-api-access-z4hzx\") pod \"cluster-storage-operator-f94476f49-dnfs9\" (UID: \"494087b2-b532-4c62-89d5-b88a152fa5db\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:11:37.568765 master-0 kubenswrapper[7776]: I0219 03:11:37.568689 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-images\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.568822 master-0 kubenswrapper[7776]: I0219 03:11:37.568789 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/494087b2-b532-4c62-89d5-b88a152fa5db-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-dnfs9\" (UID: \"494087b2-b532-4c62-89d5-b88a152fa5db\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:11:37.568874 master-0 kubenswrapper[7776]: I0219 03:11:37.568858 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6zxf\" (UniqueName: \"kubernetes.io/projected/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-kube-api-access-h6zxf\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.572081 master-0 kubenswrapper[7776]: I0219 03:11:37.572052 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/494087b2-b532-4c62-89d5-b88a152fa5db-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-dnfs9\" (UID: \"494087b2-b532-4c62-89d5-b88a152fa5db\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:11:37.586519 master-0 kubenswrapper[7776]: I0219 03:11:37.586471 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4hzx\" (UniqueName: \"kubernetes.io/projected/494087b2-b532-4c62-89d5-b88a152fa5db-kube-api-access-z4hzx\") pod \"cluster-storage-operator-f94476f49-dnfs9\" (UID: \"494087b2-b532-4c62-89d5-b88a152fa5db\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:11:37.670679 master-0 kubenswrapper[7776]: I0219 03:11:37.670622 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6zxf\" (UniqueName: \"kubernetes.io/projected/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-kube-api-access-h6zxf\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.670906 master-0 kubenswrapper[7776]: I0219 03:11:37.670824 7776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-proxy-tls\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.670906 master-0 kubenswrapper[7776]: I0219 03:11:37.670894 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.671096 master-0 kubenswrapper[7776]: I0219 03:11:37.670965 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-images\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.672044 master-0 kubenswrapper[7776]: I0219 03:11:37.672008 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.672602 master-0 kubenswrapper[7776]: I0219 03:11:37.672375 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-images\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.675509 master-0 kubenswrapper[7776]: I0219 03:11:37.675473 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-proxy-tls\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.688144 master-0 kubenswrapper[7776]: I0219 03:11:37.688089 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6zxf\" (UniqueName: \"kubernetes.io/projected/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-kube-api-access-h6zxf\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.742490 master-0 kubenswrapper[7776]: I0219 03:11:37.742390 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:11:37.772617 master-0 kubenswrapper[7776]: I0219 03:11:37.772570 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:37.772812 master-0 kubenswrapper[7776]: E0219 03:11:37.772758 7776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:37.772855 master-0 kubenswrapper[7776]: E0219 03:11:37.772819 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert podName:33bb562f-84e7-4fcb-b008-416c09a5ecf0 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:38.772804124 +0000 UTC m=+405.112488642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert") pod "cluster-autoscaler-operator-86b8dc6d6-pd8lj" (UID: "33bb562f-84e7-4fcb-b008-416c09a5ecf0") : secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:37.804425 master-0 kubenswrapper[7776]: I0219 03:11:37.803880 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:11:37.842103 master-0 kubenswrapper[7776]: I0219 03:11:37.842024 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7"] Feb 19 03:11:37.852020 master-0 kubenswrapper[7776]: W0219 03:11:37.851435 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf5828ea_090f_4c8f_90e6_c4e405e69ec5.slice/crio-7a7a2b85bd49039ea82202ec9093218400fe6ba37620dacb89cb656ef0f6f1e1 WatchSource:0}: Error finding container 7a7a2b85bd49039ea82202ec9093218400fe6ba37620dacb89cb656ef0f6f1e1: Status 404 returned error can't find the container with id 7a7a2b85bd49039ea82202ec9093218400fe6ba37620dacb89cb656ef0f6f1e1 Feb 19 03:11:37.876026 master-0 kubenswrapper[7776]: I0219 03:11:37.875927 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:37.876026 master-0 kubenswrapper[7776]: I0219 03:11:37.875993 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:11:37.876458 master-0 kubenswrapper[7776]: I0219 03:11:37.876104 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:37.876458 master-0 kubenswrapper[7776]: E0219 03:11:37.876324 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:11:37.876458 master-0 kubenswrapper[7776]: E0219 03:11:37.876382 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls podName:afee48d5-7b45-42ef-acc8-e591ec479974 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:38.876362522 +0000 UTC m=+405.216047040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls") pod "machine-approver-798b897698-hmpmj" (UID: "afee48d5-7b45-42ef-acc8-e591ec479974") : secret "machine-approver-tls" not found Feb 19 03:11:37.876843 master-0 kubenswrapper[7776]: E0219 03:11:37.876805 7776 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 19 03:11:37.876924 master-0 kubenswrapper[7776]: E0219 03:11:37.876879 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls podName:59cea4cb-6374-49b6-97b3-d8a19cc1860f nodeName:}" failed. No retries permitted until 2026-02-19 03:11:38.876861197 +0000 UTC m=+405.216545715 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-hl874" (UID: "59cea4cb-6374-49b6-97b3-d8a19cc1860f") : secret "samples-operator-tls" not found Feb 19 03:11:37.876924 master-0 kubenswrapper[7776]: E0219 03:11:37.876805 7776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:37.877143 master-0 kubenswrapper[7776]: E0219 03:11:37.877106 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:38.876915858 +0000 UTC m=+405.216600456 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:37.903320 master-0 kubenswrapper[7776]: I0219 03:11:37.902950 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-2dvkr"] Feb 19 03:11:38.130514 master-0 kubenswrapper[7776]: I0219 03:11:38.130177 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9"] Feb 19 03:11:38.136526 master-0 kubenswrapper[7776]: W0219 03:11:38.136490 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod494087b2_b532_4c62_89d5_b88a152fa5db.slice/crio-48d1ac933722c354749db6ab6a42199918879d26d241d24eef57eac8e0adbd70 WatchSource:0}: Error finding container 48d1ac933722c354749db6ab6a42199918879d26d241d24eef57eac8e0adbd70: Status 404 returned error can't find the container with id 48d1ac933722c354749db6ab6a42199918879d26d241d24eef57eac8e0adbd70 Feb 19 03:11:38.208916 master-0 kubenswrapper[7776]: I0219 03:11:38.208854 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p"] Feb 19 03:11:38.210598 master-0 kubenswrapper[7776]: I0219 03:11:38.209824 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.214661 master-0 kubenswrapper[7776]: I0219 03:11:38.211787 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:11:38.214661 master-0 kubenswrapper[7776]: I0219 03:11:38.212370 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 19 03:11:38.214661 master-0 kubenswrapper[7776]: I0219 03:11:38.212592 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 03:11:38.214661 master-0 kubenswrapper[7776]: I0219 03:11:38.212685 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 19 03:11:38.215419 master-0 kubenswrapper[7776]: I0219 03:11:38.215376 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 19 03:11:38.224272 master-0 kubenswrapper[7776]: I0219 03:11:38.222228 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" event={"ID":"af5828ea-090f-4c8f-90e6-c4e405e69ec5","Type":"ContainerStarted","Data":"7a7a2b85bd49039ea82202ec9093218400fe6ba37620dacb89cb656ef0f6f1e1"} Feb 19 03:11:38.231341 master-0 kubenswrapper[7776]: I0219 03:11:38.226510 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" 
event={"ID":"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4","Type":"ContainerStarted","Data":"3b52f4ccabc096d80ff39ba947c7023e50c18db78664ec7aa1e9ea4675a4b974"} Feb 19 03:11:38.231341 master-0 kubenswrapper[7776]: I0219 03:11:38.228138 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" event={"ID":"494087b2-b532-4c62-89d5-b88a152fa5db","Type":"ContainerStarted","Data":"48d1ac933722c354749db6ab6a42199918879d26d241d24eef57eac8e0adbd70"} Feb 19 03:11:38.243964 master-0 kubenswrapper[7776]: I0219 03:11:38.243755 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7"] Feb 19 03:11:38.249038 master-0 kubenswrapper[7776]: W0219 03:11:38.248980 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cf1a1c6_f858_4f89_ac8c_97d13ed8a962.slice/crio-215b1ea5727b014cfc6dc502ee238518328ed6ffbcea54f35ba8164d0dcfcada WatchSource:0}: Error finding container 215b1ea5727b014cfc6dc502ee238518328ed6ffbcea54f35ba8164d0dcfcada: Status 404 returned error can't find the container with id 215b1ea5727b014cfc6dc502ee238518328ed6ffbcea54f35ba8164d0dcfcada Feb 19 03:11:38.382871 master-0 kubenswrapper[7776]: I0219 03:11:38.382806 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.382954 master-0 kubenswrapper[7776]: I0219 03:11:38.382879 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/72a6892f-5a69-434b-9dea-11ad5de62a40-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.383046 master-0 kubenswrapper[7776]: I0219 03:11:38.383025 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.383097 master-0 kubenswrapper[7776]: I0219 03:11:38.383060 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7knmd\" (UniqueName: \"kubernetes.io/projected/72a6892f-5a69-434b-9dea-11ad5de62a40-kube-api-access-7knmd\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.383202 master-0 kubenswrapper[7776]: I0219 03:11:38.383171 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/72a6892f-5a69-434b-9dea-11ad5de62a40-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.485223 master-0 kubenswrapper[7776]: I0219 03:11:38.484538 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.485223 master-0 kubenswrapper[7776]: I0219 03:11:38.484590 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7knmd\" (UniqueName: \"kubernetes.io/projected/72a6892f-5a69-434b-9dea-11ad5de62a40-kube-api-access-7knmd\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.485223 master-0 kubenswrapper[7776]: I0219 03:11:38.484642 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/72a6892f-5a69-434b-9dea-11ad5de62a40-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.485223 master-0 kubenswrapper[7776]: I0219 03:11:38.484686 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.485223 master-0 kubenswrapper[7776]: I0219 03:11:38.484708 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/72a6892f-5a69-434b-9dea-11ad5de62a40-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.485223 master-0 kubenswrapper[7776]: I0219 03:11:38.484819 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/72a6892f-5a69-434b-9dea-11ad5de62a40-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.485641 master-0 kubenswrapper[7776]: I0219 03:11:38.485447 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.488670 master-0 kubenswrapper[7776]: I0219 03:11:38.486481 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.488670 master-0 kubenswrapper[7776]: I0219 03:11:38.488473 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/72a6892f-5a69-434b-9dea-11ad5de62a40-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.503549 master-0 kubenswrapper[7776]: I0219 03:11:38.502206 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7knmd\" (UniqueName: \"kubernetes.io/projected/72a6892f-5a69-434b-9dea-11ad5de62a40-kube-api-access-7knmd\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.528284 master-0 kubenswrapper[7776]: I0219 03:11:38.528216 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:11:38.547507 master-0 kubenswrapper[7776]: W0219 03:11:38.547451 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72a6892f_5a69_434b_9dea_11ad5de62a40.slice/crio-bb034bf4a9cdadabbefc696317954b87b73697b914e5e75bb4ca97aab23c5ac6 WatchSource:0}: Error finding container bb034bf4a9cdadabbefc696317954b87b73697b914e5e75bb4ca97aab23c5ac6: Status 404 returned error can't find the container with id bb034bf4a9cdadabbefc696317954b87b73697b914e5e75bb4ca97aab23c5ac6 Feb 19 03:11:38.741570 master-0 kubenswrapper[7776]: I0219 03:11:38.741267 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7"] Feb 19 03:11:38.742277 master-0 kubenswrapper[7776]: I0219 03:11:38.742234 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:38.744592 master-0 kubenswrapper[7776]: I0219 03:11:38.744558 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 03:11:38.744665 master-0 kubenswrapper[7776]: I0219 03:11:38.744599 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 03:11:38.744665 master-0 kubenswrapper[7776]: I0219 03:11:38.744649 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 03:11:38.750599 master-0 kubenswrapper[7776]: I0219 03:11:38.750376 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7"] Feb 19 03:11:38.790212 master-0 kubenswrapper[7776]: I0219 03:11:38.790074 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:38.790436 master-0 kubenswrapper[7776]: E0219 03:11:38.790301 7776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:38.790436 master-0 kubenswrapper[7776]: E0219 03:11:38.790371 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert podName:33bb562f-84e7-4fcb-b008-416c09a5ecf0 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:40.790352962 +0000 UTC m=+407.130037480 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert") pod "cluster-autoscaler-operator-86b8dc6d6-pd8lj" (UID: "33bb562f-84e7-4fcb-b008-416c09a5ecf0") : secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:38.891731 master-0 kubenswrapper[7776]: I0219 03:11:38.891685 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-config\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:38.891731 master-0 kubenswrapper[7776]: I0219 03:11:38.891747 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:38.892102 master-0 kubenswrapper[7776]: I0219 03:11:38.891780 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:11:38.892102 master-0 kubenswrapper[7776]: I0219 03:11:38.891807 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-images\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:38.892102 master-0 kubenswrapper[7776]: I0219 03:11:38.891850 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxsxw\" (UniqueName: \"kubernetes.io/projected/255784ad-b52a-4c5c-ad15-278865ee2ccb-kube-api-access-hxsxw\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:38.892102 master-0 kubenswrapper[7776]: I0219 03:11:38.891898 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:38.892102 master-0 kubenswrapper[7776]: I0219 03:11:38.891927 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:38.892102 master-0 kubenswrapper[7776]: E0219 
03:11:38.892057 7776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:38.892102 master-0 kubenswrapper[7776]: E0219 03:11:38.892103 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:40.892089397 +0000 UTC m=+407.231773915 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:38.893029 master-0 kubenswrapper[7776]: E0219 03:11:38.892420 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:11:38.893029 master-0 kubenswrapper[7776]: E0219 03:11:38.892470 7776 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 19 03:11:38.893029 master-0 kubenswrapper[7776]: E0219 03:11:38.892503 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls podName:afee48d5-7b45-42ef-acc8-e591ec479974 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:40.892482799 +0000 UTC m=+407.232167367 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls") pod "machine-approver-798b897698-hmpmj" (UID: "afee48d5-7b45-42ef-acc8-e591ec479974") : secret "machine-approver-tls" not found Feb 19 03:11:38.893029 master-0 kubenswrapper[7776]: E0219 03:11:38.892553 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls podName:59cea4cb-6374-49b6-97b3-d8a19cc1860f nodeName:}" failed. No retries permitted until 2026-02-19 03:11:40.89254391 +0000 UTC m=+407.232228538 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-hl874" (UID: "59cea4cb-6374-49b6-97b3-d8a19cc1860f") : secret "samples-operator-tls" not found Feb 19 03:11:38.993582 master-0 kubenswrapper[7776]: I0219 03:11:38.993481 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-config\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:38.993582 master-0 kubenswrapper[7776]: I0219 03:11:38.993555 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-images\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:38.993829 master-0 kubenswrapper[7776]: I0219 03:11:38.993729 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxsxw\" (UniqueName: \"kubernetes.io/projected/255784ad-b52a-4c5c-ad15-278865ee2ccb-kube-api-access-hxsxw\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:38.994009 master-0 kubenswrapper[7776]: I0219 03:11:38.993961 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:38.994150 master-0 kubenswrapper[7776]: E0219 03:11:38.994110 7776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 19 03:11:38.994214 master-0 kubenswrapper[7776]: E0219 03:11:38.994179 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:11:39.494162362 +0000 UTC m=+405.833846950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : secret "machine-api-operator-tls" not found Feb 19 03:11:38.994821 master-0 kubenswrapper[7776]: I0219 03:11:38.994785 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-config\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:38.995766 master-0 kubenswrapper[7776]: I0219 03:11:38.995726 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-images\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:39.011713 master-0 kubenswrapper[7776]: I0219 03:11:39.011646 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxsxw\" (UniqueName: \"kubernetes.io/projected/255784ad-b52a-4c5c-ad15-278865ee2ccb-kube-api-access-hxsxw\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:39.234039 master-0 kubenswrapper[7776]: I0219 03:11:39.233982 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" event={"ID":"72a6892f-5a69-434b-9dea-11ad5de62a40","Type":"ContainerStarted","Data":"bb034bf4a9cdadabbefc696317954b87b73697b914e5e75bb4ca97aab23c5ac6"} Feb 19 03:11:39.236030 master-0 kubenswrapper[7776]: I0219 03:11:39.235992 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" event={"ID":"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962","Type":"ContainerStarted","Data":"ff9d0ff3f2a4c9fc925db3ddd32a6ef0bff9e55ac029a05abbe7745468d35641"} Feb 19 03:11:39.236030 master-0 kubenswrapper[7776]: I0219 03:11:39.236023 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" event={"ID":"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962","Type":"ContainerStarted","Data":"011cfde9008766c177372c5031ca9481b3cdda6d27924850f8b618812cd3fbcc"} Feb 19 03:11:39.236146 master-0 kubenswrapper[7776]: I0219 03:11:39.236035 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" event={"ID":"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962","Type":"ContainerStarted","Data":"215b1ea5727b014cfc6dc502ee238518328ed6ffbcea54f35ba8164d0dcfcada"} Feb 19 03:11:39.257917 master-0 kubenswrapper[7776]: I0219 03:11:39.257771 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" podStartSLOduration=2.2577547510000002 podStartE2EDuration="2.257754751s" podCreationTimestamp="2026-02-19 03:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:11:39.254587478 +0000 
UTC m=+405.594272006" watchObservedRunningTime="2026-02-19 03:11:39.257754751 +0000 UTC m=+405.597439269" Feb 19 03:11:39.503023 master-0 kubenswrapper[7776]: I0219 03:11:39.502969 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:39.503335 master-0 kubenswrapper[7776]: E0219 03:11:39.503158 7776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 19 03:11:39.503480 master-0 kubenswrapper[7776]: E0219 03:11:39.503466 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:11:40.503444386 +0000 UTC m=+406.843128904 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : secret "machine-api-operator-tls" not found Feb 19 03:11:40.514648 master-0 kubenswrapper[7776]: I0219 03:11:40.514590 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:40.515400 master-0 kubenswrapper[7776]: E0219 03:11:40.514733 7776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 19 03:11:40.515400 master-0 kubenswrapper[7776]: E0219 03:11:40.514784 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:11:42.514771139 +0000 UTC m=+408.854455657 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : secret "machine-api-operator-tls" not found Feb 19 03:11:40.817669 master-0 kubenswrapper[7776]: I0219 03:11:40.817548 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:40.817876 master-0 kubenswrapper[7776]: E0219 03:11:40.817790 7776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:40.817922 master-0 kubenswrapper[7776]: E0219 03:11:40.817905 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert podName:33bb562f-84e7-4fcb-b008-416c09a5ecf0 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:44.817882304 +0000 UTC m=+411.157566862 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert") pod "cluster-autoscaler-operator-86b8dc6d6-pd8lj" (UID: "33bb562f-84e7-4fcb-b008-416c09a5ecf0") : secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:40.919148 master-0 kubenswrapper[7776]: I0219 03:11:40.919073 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:40.919567 master-0 kubenswrapper[7776]: I0219 03:11:40.919218 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:40.919567 master-0 kubenswrapper[7776]: E0219 03:11:40.919283 7776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:40.919567 master-0 kubenswrapper[7776]: E0219 03:11:40.919391 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:44.919367594 +0000 UTC m=+411.259052182 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:40.919567 master-0 kubenswrapper[7776]: E0219 03:11:40.919394 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:11:40.919567 master-0 kubenswrapper[7776]: I0219 03:11:40.919426 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:11:40.919567 master-0 kubenswrapper[7776]: E0219 03:11:40.919480 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls podName:afee48d5-7b45-42ef-acc8-e591ec479974 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:44.919455477 +0000 UTC m=+411.259140035 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls") pod "machine-approver-798b897698-hmpmj" (UID: "afee48d5-7b45-42ef-acc8-e591ec479974") : secret "machine-approver-tls" not found Feb 19 03:11:40.919784 master-0 kubenswrapper[7776]: E0219 03:11:40.919588 7776 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 19 03:11:40.919784 master-0 kubenswrapper[7776]: E0219 03:11:40.919638 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls podName:59cea4cb-6374-49b6-97b3-d8a19cc1860f nodeName:}" failed. No retries permitted until 2026-02-19 03:11:44.919626452 +0000 UTC m=+411.259311020 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-hl874" (UID: "59cea4cb-6374-49b6-97b3-d8a19cc1860f") : secret "samples-operator-tls" not found Feb 19 03:11:42.252548 master-0 kubenswrapper[7776]: I0219 03:11:42.252437 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/0.log" Feb 19 03:11:42.253639 master-0 kubenswrapper[7776]: I0219 03:11:42.253614 7776 generic.go:334] "Generic (PLEG): container finished" podID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerID="40490dd0e563d6d3bacaae7a09fc0ef24b1c225b4cabd1f9ac5bc18d2fd4cabf" exitCode=1 Feb 19 03:11:42.253715 master-0 kubenswrapper[7776]: I0219 03:11:42.253666 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" event={"ID":"72a6892f-5a69-434b-9dea-11ad5de62a40","Type":"ContainerDied","Data":"40490dd0e563d6d3bacaae7a09fc0ef24b1c225b4cabd1f9ac5bc18d2fd4cabf"} Feb 19 03:11:42.253715 master-0 kubenswrapper[7776]: I0219 03:11:42.253693 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" event={"ID":"72a6892f-5a69-434b-9dea-11ad5de62a40","Type":"ContainerStarted","Data":"5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037"} Feb 19 03:11:42.253715 master-0 kubenswrapper[7776]: I0219 03:11:42.253703 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" event={"ID":"72a6892f-5a69-434b-9dea-11ad5de62a40","Type":"ContainerStarted","Data":"12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b"} Feb 19 03:11:42.254281 master-0 kubenswrapper[7776]: I0219 03:11:42.254247 7776 scope.go:117] "RemoveContainer" containerID="40490dd0e563d6d3bacaae7a09fc0ef24b1c225b4cabd1f9ac5bc18d2fd4cabf" Feb 19 03:11:42.257404 master-0 kubenswrapper[7776]: I0219 03:11:42.257356 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" event={"ID":"494087b2-b532-4c62-89d5-b88a152fa5db","Type":"ContainerStarted","Data":"ba413ae01172b66bb47e88e18297dfdea25b5d5a7bb2302e12fd43c755cf2113"} Feb 19 03:11:42.262992 master-0 kubenswrapper[7776]: I0219 03:11:42.262961 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" event={"ID":"af5828ea-090f-4c8f-90e6-c4e405e69ec5","Type":"ContainerStarted","Data":"02782479f74a8d6abd591485a51a2bd6e181e17733c4b8c4aea641ac36b465a6"} Feb 19 03:11:42.262992 master-0 kubenswrapper[7776]: I0219 03:11:42.262986 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" event={"ID":"af5828ea-090f-4c8f-90e6-c4e405e69ec5","Type":"ContainerStarted","Data":"c7efec73ecd5959e325f34dc1abcbd0a0ee696d09e18dbddaa6606e552d9257d"} Feb 19 03:11:42.264495 master-0 kubenswrapper[7776]: I0219 03:11:42.264460 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" 
event={"ID":"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4","Type":"ContainerStarted","Data":"1d99ca0c8f2a8b57be62e387dd79396f9f9921074e539cfaf44cf000be2aa849"} Feb 19 03:11:42.369357 master-0 kubenswrapper[7776]: I0219 03:11:42.363349 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" podStartSLOduration=2.800671978 podStartE2EDuration="6.363321506s" podCreationTimestamp="2026-02-19 03:11:36 +0000 UTC" firstStartedPulling="2026-02-19 03:11:37.853559145 +0000 UTC m=+404.193243663" lastFinishedPulling="2026-02-19 03:11:41.416208643 +0000 UTC m=+407.755893191" observedRunningTime="2026-02-19 03:11:42.359868675 +0000 UTC m=+408.699553203" watchObservedRunningTime="2026-02-19 03:11:42.363321506 +0000 UTC m=+408.703006024" Feb 19 03:11:42.430345 master-0 kubenswrapper[7776]: I0219 03:11:42.427497 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" podStartSLOduration=1.9380445339999999 podStartE2EDuration="5.427480411s" podCreationTimestamp="2026-02-19 03:11:37 +0000 UTC" firstStartedPulling="2026-02-19 03:11:37.918716451 +0000 UTC m=+404.258400969" lastFinishedPulling="2026-02-19 03:11:41.408152308 +0000 UTC m=+407.747836846" observedRunningTime="2026-02-19 03:11:42.420424026 +0000 UTC m=+408.760108564" watchObservedRunningTime="2026-02-19 03:11:42.427480411 +0000 UTC m=+408.767164929" Feb 19 03:11:42.430345 master-0 kubenswrapper[7776]: I0219 03:11:42.427862 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" podStartSLOduration=2.121837815 podStartE2EDuration="5.427858362s" podCreationTimestamp="2026-02-19 03:11:37 +0000 UTC" firstStartedPulling="2026-02-19 03:11:38.140093075 +0000 UTC m=+404.479777593" lastFinishedPulling="2026-02-19 03:11:41.446113622 +0000 UTC m=+407.785798140" observedRunningTime="2026-02-19 03:11:42.402289109 +0000 UTC m=+408.741973637" watchObservedRunningTime="2026-02-19 03:11:42.427858362 +0000 UTC m=+408.767542880" Feb 19 03:11:42.552205 master-0 kubenswrapper[7776]: I0219 03:11:42.552149 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:42.552459 master-0 kubenswrapper[7776]: E0219 03:11:42.552361 7776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 19 03:11:42.552505 master-0 kubenswrapper[7776]: E0219 03:11:42.552482 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:11:46.552463996 +0000 UTC m=+412.892148514 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : secret "machine-api-operator-tls" not found Feb 19 03:11:43.273646 master-0 kubenswrapper[7776]: I0219 03:11:43.273574 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/1.log" Feb 19 03:11:43.274450 master-0 kubenswrapper[7776]: I0219 03:11:43.274407 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/0.log" Feb 19 03:11:43.275344 master-0 kubenswrapper[7776]: I0219 03:11:43.275299 7776 generic.go:334] "Generic (PLEG): container finished" podID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerID="5d3c1827e01ac74d7081c22775963059041307393361893f4c0261f59d3dedf5" exitCode=1 Feb 19 03:11:43.275437 master-0 kubenswrapper[7776]: I0219 03:11:43.275355 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" event={"ID":"72a6892f-5a69-434b-9dea-11ad5de62a40","Type":"ContainerDied","Data":"5d3c1827e01ac74d7081c22775963059041307393361893f4c0261f59d3dedf5"} Feb 19 03:11:43.275437 master-0 kubenswrapper[7776]: I0219 03:11:43.275406 7776 scope.go:117] "RemoveContainer" containerID="40490dd0e563d6d3bacaae7a09fc0ef24b1c225b4cabd1f9ac5bc18d2fd4cabf" Feb 19 03:11:43.276018 master-0 kubenswrapper[7776]: I0219 03:11:43.275958 7776 scope.go:117] "RemoveContainer" containerID="5d3c1827e01ac74d7081c22775963059041307393361893f4c0261f59d3dedf5" Feb 19 03:11:43.276279 master-0 kubenswrapper[7776]: E0219 03:11:43.276190 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_openshift-cloud-controller-manager-operator(72a6892f-5a69-434b-9dea-11ad5de62a40)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" Feb 19 03:11:44.282863 master-0 kubenswrapper[7776]: I0219 03:11:44.282783 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/1.log" Feb 19 03:11:44.284402 master-0 kubenswrapper[7776]: I0219 03:11:44.284364 7776 scope.go:117] "RemoveContainer" containerID="5d3c1827e01ac74d7081c22775963059041307393361893f4c0261f59d3dedf5" Feb 19 03:11:44.284600 master-0 kubenswrapper[7776]: E0219 03:11:44.284560 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_openshift-cloud-controller-manager-operator(72a6892f-5a69-434b-9dea-11ad5de62a40)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" 
podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" Feb 19 03:11:44.843592 master-0 kubenswrapper[7776]: I0219 03:11:44.843490 7776 scope.go:117] "RemoveContainer" containerID="20eff9a38f665e5f446346726f2e9ae69e64da44d267bdbea6151ec6a1ecbe55" Feb 19 03:11:44.843899 master-0 kubenswrapper[7776]: E0219 03:11:44.843840 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-584cc7bcb5-c7c8v_openshift-controller-manager-operator(05c9cb4a-5249-4116-a2e5-caa7859e2075)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" podUID="05c9cb4a-5249-4116-a2e5-caa7859e2075" Feb 19 03:11:44.907551 master-0 kubenswrapper[7776]: I0219 03:11:44.907432 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:44.907840 master-0 kubenswrapper[7776]: E0219 03:11:44.907639 7776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:44.907840 master-0 kubenswrapper[7776]: E0219 03:11:44.907736 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert podName:33bb562f-84e7-4fcb-b008-416c09a5ecf0 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:52.907701457 +0000 UTC m=+419.247386015 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert") pod "cluster-autoscaler-operator-86b8dc6d6-pd8lj" (UID: "33bb562f-84e7-4fcb-b008-416c09a5ecf0") : secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:45.009060 master-0 kubenswrapper[7776]: I0219 03:11:45.008951 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:45.009390 master-0 kubenswrapper[7776]: I0219 03:11:45.009184 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:45.009390 master-0 kubenswrapper[7776]: E0219 03:11:45.009235 7776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:45.009556 master-0 kubenswrapper[7776]: I0219 03:11:45.009242 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:11:45.009628 master-0 kubenswrapper[7776]: E0219 03:11:45.009415 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:53.009383854 +0000 UTC m=+419.349068402 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:45.009700 master-0 kubenswrapper[7776]: E0219 03:11:45.009603 7776 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 19 03:11:45.009880 master-0 kubenswrapper[7776]: E0219 03:11:45.009477 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:11:45.009959 master-0 kubenswrapper[7776]: E0219 03:11:45.009841 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls podName:59cea4cb-6374-49b6-97b3-d8a19cc1860f nodeName:}" failed. No retries permitted until 2026-02-19 03:11:53.009773665 +0000 UTC m=+419.349458263 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-hl874" (UID: "59cea4cb-6374-49b6-97b3-d8a19cc1860f") : secret "samples-operator-tls" not found Feb 19 03:11:45.010033 master-0 kubenswrapper[7776]: E0219 03:11:45.009974 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls podName:afee48d5-7b45-42ef-acc8-e591ec479974 nodeName:}" failed. No retries permitted until 2026-02-19 03:11:53.00993533 +0000 UTC m=+419.349619888 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls") pod "machine-approver-798b897698-hmpmj" (UID: "afee48d5-7b45-42ef-acc8-e591ec479974") : secret "machine-approver-tls" not found Feb 19 03:11:46.630735 master-0 kubenswrapper[7776]: I0219 03:11:46.630650 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:46.631229 master-0 kubenswrapper[7776]: E0219 03:11:46.630845 7776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 19 03:11:46.631229 master-0 kubenswrapper[7776]: E0219 03:11:46.630935 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:11:54.630917009 +0000 UTC m=+420.970601527 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : secret "machine-api-operator-tls" not found Feb 19 03:11:46.915751 master-0 kubenswrapper[7776]: I0219 03:11:46.915638 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:11:47.843381 master-0 kubenswrapper[7776]: I0219 03:11:47.843331 7776 scope.go:117] "RemoveContainer" containerID="b74e1ef658deba9054cacd4e4b2f892ff9bc29e9e78ce49be09ab91b8d5e8936" Feb 19 03:11:48.312140 master-0 kubenswrapper[7776]: I0219 03:11:48.312070 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/3.log" Feb 19 03:11:48.312140 master-0 kubenswrapper[7776]: I0219 03:11:48.312142 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerStarted","Data":"c545cf58bc696341c026f65428a1c9e4ca4d12c0673d4c492e30d1f60df08f53"} Feb 19 03:11:52.911838 master-0 kubenswrapper[7776]: I0219 03:11:52.911738 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:11:52.912541 master-0 kubenswrapper[7776]: E0219 03:11:52.912036 7776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:52.912541 master-0 kubenswrapper[7776]: E0219 03:11:52.912167 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert podName:33bb562f-84e7-4fcb-b008-416c09a5ecf0 nodeName:}" failed. No retries permitted until 2026-02-19 03:12:08.912136649 +0000 UTC m=+435.251821197 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert") pod "cluster-autoscaler-operator-86b8dc6d6-pd8lj" (UID: "33bb562f-84e7-4fcb-b008-416c09a5ecf0") : secret "cluster-autoscaler-operator-cert" not found Feb 19 03:11:53.014222 master-0 kubenswrapper[7776]: I0219 03:11:53.013746 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:11:53.014222 master-0 kubenswrapper[7776]: I0219 03:11:53.013891 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:11:53.014222 master-0 kubenswrapper[7776]: E0219 03:11:53.013993 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:11:53.014222 master-0 kubenswrapper[7776]: E0219 03:11:53.014089 7776 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 19 03:11:53.014222 master-0 kubenswrapper[7776]: E0219 03:11:53.014102 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls podName:afee48d5-7b45-42ef-acc8-e591ec479974 nodeName:}" failed. No retries permitted until 2026-02-19 03:12:09.014071623 +0000 UTC m=+435.353756181 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls") pod "machine-approver-798b897698-hmpmj" (UID: "afee48d5-7b45-42ef-acc8-e591ec479974") : secret "machine-approver-tls" not found Feb 19 03:11:53.014222 master-0 kubenswrapper[7776]: E0219 03:11:53.014194 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls podName:59cea4cb-6374-49b6-97b3-d8a19cc1860f nodeName:}" failed. No retries permitted until 2026-02-19 03:12:09.014170826 +0000 UTC m=+435.353855384 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-hl874" (UID: "59cea4cb-6374-49b6-97b3-d8a19cc1860f") : secret "samples-operator-tls" not found Feb 19 03:11:53.014806 master-0 kubenswrapper[7776]: I0219 03:11:53.014344 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:11:53.014806 master-0 kubenswrapper[7776]: E0219 03:11:53.014639 7776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:53.014806 master-0 kubenswrapper[7776]: E0219 03:11:53.014764 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:12:09.014731552 +0000 UTC m=+435.354416110 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : secret "cloud-credential-operator-serving-cert" not found Feb 19 03:11:54.635048 master-0 kubenswrapper[7776]: I0219 03:11:54.634972 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:11:54.636066 master-0 kubenswrapper[7776]: E0219 03:11:54.635107 7776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 19 03:11:54.636066 master-0 kubenswrapper[7776]: E0219 03:11:54.635181 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:12:10.635162415 +0000 UTC m=+436.974846933 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : secret "machine-api-operator-tls" not found Feb 19 03:11:54.842636 master-0 kubenswrapper[7776]: I0219 03:11:54.842565 7776 scope.go:117] "RemoveContainer" containerID="5d3c1827e01ac74d7081c22775963059041307393361893f4c0261f59d3dedf5" Feb 19 03:11:55.354436 master-0 kubenswrapper[7776]: I0219 03:11:55.354243 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/2.log" Feb 19 03:11:55.354909 master-0 kubenswrapper[7776]: I0219 03:11:55.354859 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/1.log" Feb 19 03:11:55.355590 master-0 kubenswrapper[7776]: I0219 03:11:55.355542 7776 generic.go:334] "Generic (PLEG): container finished" podID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerID="e61212b37209e14fce7d3564e726376978dfd67ae4437e0380548c4d12109772" exitCode=1 Feb 19 03:11:55.355691 master-0 kubenswrapper[7776]: I0219 03:11:55.355583 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" event={"ID":"72a6892f-5a69-434b-9dea-11ad5de62a40","Type":"ContainerDied","Data":"e61212b37209e14fce7d3564e726376978dfd67ae4437e0380548c4d12109772"} Feb 19 03:11:55.355691 master-0 kubenswrapper[7776]: I0219 03:11:55.355650 7776 scope.go:117] "RemoveContainer" containerID="5d3c1827e01ac74d7081c22775963059041307393361893f4c0261f59d3dedf5" Feb 19 03:11:55.356368 master-0 kubenswrapper[7776]: I0219 03:11:55.356253 7776 scope.go:117] "RemoveContainer" containerID="e61212b37209e14fce7d3564e726376978dfd67ae4437e0380548c4d12109772" Feb 19 03:11:55.356565 master-0 kubenswrapper[7776]: E0219 03:11:55.356494 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_openshift-cloud-controller-manager-operator(72a6892f-5a69-434b-9dea-11ad5de62a40)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" Feb 19 03:11:56.372515 master-0 kubenswrapper[7776]: I0219 03:11:56.372370 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/2.log" Feb 19 03:11:57.382858 master-0 kubenswrapper[7776]: I0219 03:11:57.382765 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4/installer/0.log" Feb 19 03:11:57.382858 master-0 kubenswrapper[7776]: I0219 03:11:57.382839 7776 generic.go:334] "Generic (PLEG): container finished" podID="d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" containerID="ac0c6f1221931d6368270f9300d1e7df26e99f211f84672a8bd222a9935f47ac" exitCode=1 Feb 19 03:11:57.384612 
master-0 kubenswrapper[7776]: I0219 03:11:57.382931 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4","Type":"ContainerDied","Data":"ac0c6f1221931d6368270f9300d1e7df26e99f211f84672a8bd222a9935f47ac"} Feb 19 03:11:57.389303 master-0 kubenswrapper[7776]: I0219 03:11:57.389230 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_66b05aeb-22a8-4008-a582-072f63cc46bf/installer/0.log" Feb 19 03:11:57.389303 master-0 kubenswrapper[7776]: I0219 03:11:57.389304 7776 generic.go:334] "Generic (PLEG): container finished" podID="66b05aeb-22a8-4008-a582-072f63cc46bf" containerID="11a1463d7472cc347eeb1e18662a7476d3fc447a3850f542c02f496029d3a5bf" exitCode=1 Feb 19 03:11:57.389718 master-0 kubenswrapper[7776]: I0219 03:11:57.389374 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"66b05aeb-22a8-4008-a582-072f63cc46bf","Type":"ContainerDied","Data":"11a1463d7472cc347eeb1e18662a7476d3fc447a3850f542c02f496029d3a5bf"} Feb 19 03:11:57.392543 master-0 kubenswrapper[7776]: I0219 03:11:57.391645 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1bddb3a1-41bd-4314-bfb0-3c72ca14200f/installer/0.log" Feb 19 03:11:57.392543 master-0 kubenswrapper[7776]: I0219 03:11:57.391758 7776 generic.go:334] "Generic (PLEG): container finished" podID="1bddb3a1-41bd-4314-bfb0-3c72ca14200f" containerID="a7cd657859866d0c60a8c29ef7e8c20807d578f39873e49c5149373c208aeee5" exitCode=1 Feb 19 03:11:57.392543 master-0 kubenswrapper[7776]: I0219 03:11:57.391816 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"1bddb3a1-41bd-4314-bfb0-3c72ca14200f","Type":"ContainerDied","Data":"a7cd657859866d0c60a8c29ef7e8c20807d578f39873e49c5149373c208aeee5"} Feb 19 03:11:58.760330 master-0 kubenswrapper[7776]: I0219 03:11:58.760250 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1bddb3a1-41bd-4314-bfb0-3c72ca14200f/installer/0.log" Feb 19 03:11:58.760935 master-0 kubenswrapper[7776]: I0219 03:11:58.760356 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:11:58.793961 master-0 kubenswrapper[7776]: I0219 03:11:58.793635 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kubelet-dir\") pod \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\" (UID: \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " Feb 19 03:11:58.793961 master-0 kubenswrapper[7776]: I0219 03:11:58.793812 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1bddb3a1-41bd-4314-bfb0-3c72ca14200f" (UID: "1bddb3a1-41bd-4314-bfb0-3c72ca14200f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:11:58.793961 master-0 kubenswrapper[7776]: I0219 03:11:58.793840 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kube-api-access\") pod \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\" (UID: \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " Feb 19 03:11:58.793961 master-0 kubenswrapper[7776]: I0219 03:11:58.793966 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-var-lock\") pod \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\" (UID: \"1bddb3a1-41bd-4314-bfb0-3c72ca14200f\") " Feb 19 03:11:58.794358 master-0 kubenswrapper[7776]: I0219 03:11:58.794070 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-var-lock" (OuterVolumeSpecName: "var-lock") pod "1bddb3a1-41bd-4314-bfb0-3c72ca14200f" (UID: "1bddb3a1-41bd-4314-bfb0-3c72ca14200f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:11:58.794749 master-0 kubenswrapper[7776]: I0219 03:11:58.794712 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:11:58.794818 master-0 kubenswrapper[7776]: I0219 03:11:58.794769 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:11:58.809654 master-0 kubenswrapper[7776]: I0219 03:11:58.809561 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1bddb3a1-41bd-4314-bfb0-3c72ca14200f" (UID: "1bddb3a1-41bd-4314-bfb0-3c72ca14200f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:11:58.843208 master-0 kubenswrapper[7776]: I0219 03:11:58.843155 7776 scope.go:117] "RemoveContainer" containerID="20eff9a38f665e5f446346726f2e9ae69e64da44d267bdbea6151ec6a1ecbe55" Feb 19 03:11:58.896360 master-0 kubenswrapper[7776]: I0219 03:11:58.896311 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bddb3a1-41bd-4314-bfb0-3c72ca14200f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:11:58.906130 master-0 kubenswrapper[7776]: I0219 03:11:58.906057 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4/installer/0.log" Feb 19 03:11:58.906219 master-0 kubenswrapper[7776]: I0219 03:11:58.906174 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:11:58.915376 master-0 kubenswrapper[7776]: I0219 03:11:58.912729 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_66b05aeb-22a8-4008-a582-072f63cc46bf/installer/0.log" Feb 19 03:11:58.915376 master-0 kubenswrapper[7776]: I0219 03:11:58.912822 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:11:58.997771 master-0 kubenswrapper[7776]: I0219 03:11:58.997598 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kube-api-access\") pod \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " Feb 19 03:11:58.997771 master-0 kubenswrapper[7776]: I0219 03:11:58.997759 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-var-lock\") pod \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " Feb 19 03:11:58.997993 master-0 kubenswrapper[7776]: I0219 03:11:58.997845 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-kubelet-dir\") pod \"66b05aeb-22a8-4008-a582-072f63cc46bf\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " Feb 19 03:11:58.997993 master-0 kubenswrapper[7776]: I0219 03:11:58.997891 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-var-lock\") pod \"66b05aeb-22a8-4008-a582-072f63cc46bf\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " Feb 19 03:11:58.997993 master-0 kubenswrapper[7776]: I0219 03:11:58.997917 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b05aeb-22a8-4008-a582-072f63cc46bf-kube-api-access\") pod \"66b05aeb-22a8-4008-a582-072f63cc46bf\" (UID: \"66b05aeb-22a8-4008-a582-072f63cc46bf\") " Feb 19 03:11:58.997993 master-0 kubenswrapper[7776]: I0219 03:11:58.997971 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kubelet-dir\") pod \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\" (UID: \"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4\") " Feb 19 03:11:58.998165 master-0 kubenswrapper[7776]: I0219 03:11:58.998036 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "66b05aeb-22a8-4008-a582-072f63cc46bf" (UID: "66b05aeb-22a8-4008-a582-072f63cc46bf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:11:58.998165 master-0 kubenswrapper[7776]: I0219 03:11:58.998048 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-var-lock" (OuterVolumeSpecName: "var-lock") pod "66b05aeb-22a8-4008-a582-072f63cc46bf" (UID: "66b05aeb-22a8-4008-a582-072f63cc46bf"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:11:58.998165 master-0 kubenswrapper[7776]: I0219 03:11:58.998144 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-var-lock" (OuterVolumeSpecName: "var-lock") pod "d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" (UID: "d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:11:58.998332 master-0 kubenswrapper[7776]: I0219 03:11:58.998166 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" (UID: "d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:11:58.998733 master-0 kubenswrapper[7776]: I0219 03:11:58.998704 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:11:58.998800 master-0 kubenswrapper[7776]: I0219 03:11:58.998733 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:11:58.998800 master-0 kubenswrapper[7776]: I0219 03:11:58.998749 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/66b05aeb-22a8-4008-a582-072f63cc46bf-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:11:58.998800 master-0 kubenswrapper[7776]: I0219 03:11:58.998762 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:11:59.001588 master-0 kubenswrapper[7776]: I0219 03:11:59.001527 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" (UID: "d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:11:59.001857 master-0 kubenswrapper[7776]: I0219 03:11:59.001800 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b05aeb-22a8-4008-a582-072f63cc46bf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "66b05aeb-22a8-4008-a582-072f63cc46bf" (UID: "66b05aeb-22a8-4008-a582-072f63cc46bf"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:11:59.100505 master-0 kubenswrapper[7776]: I0219 03:11:59.100459 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b05aeb-22a8-4008-a582-072f63cc46bf-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:11:59.100751 master-0 kubenswrapper[7776]: I0219 03:11:59.100740 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:11:59.405386 master-0 kubenswrapper[7776]: I0219 03:11:59.405338 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4/installer/0.log" Feb 19 03:11:59.405693 master-0 kubenswrapper[7776]: I0219 03:11:59.405441 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4","Type":"ContainerDied","Data":"258078f280458482912939c3338c1981e998a321634b6785079948c05a69b5ce"} Feb 19 03:11:59.405693 master-0 kubenswrapper[7776]: I0219 03:11:59.405469 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="258078f280458482912939c3338c1981e998a321634b6785079948c05a69b5ce" Feb 19 03:11:59.405693 master-0 kubenswrapper[7776]: I0219 03:11:59.405509 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:11:59.407857 master-0 kubenswrapper[7776]: I0219 03:11:59.407803 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_66b05aeb-22a8-4008-a582-072f63cc46bf/installer/0.log" Feb 19 03:11:59.408066 master-0 kubenswrapper[7776]: I0219 03:11:59.407973 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"66b05aeb-22a8-4008-a582-072f63cc46bf","Type":"ContainerDied","Data":"965cde5ffa11aa0f8a6be0fd409b2352a9feb606c803fa2badb9392fcad23cdd"} Feb 19 03:11:59.408066 master-0 kubenswrapper[7776]: I0219 03:11:59.408040 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="965cde5ffa11aa0f8a6be0fd409b2352a9feb606c803fa2badb9392fcad23cdd" Feb 19 03:11:59.408184 master-0 kubenswrapper[7776]: I0219 03:11:59.408061 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:11:59.410275 master-0 kubenswrapper[7776]: I0219 03:11:59.410235 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1bddb3a1-41bd-4314-bfb0-3c72ca14200f/installer/0.log" Feb 19 03:11:59.410464 master-0 kubenswrapper[7776]: I0219 03:11:59.410441 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"1bddb3a1-41bd-4314-bfb0-3c72ca14200f","Type":"ContainerDied","Data":"676fe9b8803826897eb9069682463435a484f2265769bbfbab612ab166fcad61"} Feb 19 03:11:59.410576 master-0 kubenswrapper[7776]: I0219 03:11:59.410557 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="676fe9b8803826897eb9069682463435a484f2265769bbfbab612ab166fcad61" Feb 19 03:11:59.410682 master-0 kubenswrapper[7776]: I0219 03:11:59.410520 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:11:59.413304 master-0 kubenswrapper[7776]: I0219 03:11:59.413234 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/3.log" Feb 19 03:11:59.413304 master-0 kubenswrapper[7776]: I0219 03:11:59.413295 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" event={"ID":"05c9cb4a-5249-4116-a2e5-caa7859e2075","Type":"ContainerStarted","Data":"50aa4a828718a2c8161090508b2a782ad9188b5d56bcad45205b012feb3e3563"} Feb 19 03:12:06.842825 master-0 kubenswrapper[7776]: I0219 03:12:06.842737 7776 scope.go:117] "RemoveContainer" containerID="e61212b37209e14fce7d3564e726376978dfd67ae4437e0380548c4d12109772" Feb 19 03:12:06.843744 master-0 kubenswrapper[7776]: E0219 03:12:06.843049 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_openshift-cloud-controller-manager-operator(72a6892f-5a69-434b-9dea-11ad5de62a40)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" Feb 19 03:12:08.927613 master-0 kubenswrapper[7776]: I0219 03:12:08.927535 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:12:08.928473 master-0 kubenswrapper[7776]: E0219 03:12:08.927742 7776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 19 03:12:08.928473 master-0 kubenswrapper[7776]: E0219 03:12:08.927839 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert podName:33bb562f-84e7-4fcb-b008-416c09a5ecf0 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:12:40.927817492 +0000 UTC m=+467.267502020 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert") pod "cluster-autoscaler-operator-86b8dc6d6-pd8lj" (UID: "33bb562f-84e7-4fcb-b008-416c09a5ecf0") : secret "cluster-autoscaler-operator-cert" not found Feb 19 03:12:09.029007 master-0 kubenswrapper[7776]: I0219 03:12:09.028899 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:12:09.029366 master-0 kubenswrapper[7776]: E0219 03:12:09.029069 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:12:09.029366 master-0 kubenswrapper[7776]: E0219 03:12:09.029165 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls podName:afee48d5-7b45-42ef-acc8-e591ec479974 nodeName:}" failed. No retries permitted until 2026-02-19 03:12:41.029138719 +0000 UTC m=+467.368823267 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls") pod "machine-approver-798b897698-hmpmj" (UID: "afee48d5-7b45-42ef-acc8-e591ec479974") : secret "machine-approver-tls" not found Feb 19 03:12:09.029366 master-0 kubenswrapper[7776]: E0219 03:12:09.029190 7776 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 19 03:12:09.029366 master-0 kubenswrapper[7776]: I0219 03:12:09.029063 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:12:09.029366 master-0 kubenswrapper[7776]: E0219 03:12:09.029323 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls podName:59cea4cb-6374-49b6-97b3-d8a19cc1860f nodeName:}" failed. No retries permitted until 2026-02-19 03:12:41.029247442 +0000 UTC m=+467.368932000 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-hl874" (UID: "59cea4cb-6374-49b6-97b3-d8a19cc1860f") : secret "samples-operator-tls" not found Feb 19 03:12:09.029820 master-0 kubenswrapper[7776]: I0219 03:12:09.029539 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:12:09.029820 master-0 kubenswrapper[7776]: E0219 03:12:09.029693 7776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 19 03:12:09.029820 master-0 kubenswrapper[7776]: E0219 03:12:09.029766 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:12:41.029745306 +0000 UTC m=+467.369429854 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : secret "cloud-credential-operator-serving-cert" not found Feb 19 03:12:10.651045 master-0 kubenswrapper[7776]: I0219 03:12:10.650964 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:12:10.651700 master-0 kubenswrapper[7776]: E0219 03:12:10.651349 7776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 19 03:12:10.651700 master-0 kubenswrapper[7776]: E0219 03:12:10.651426 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:12:42.651402294 +0000 UTC m=+468.991086852 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : secret "machine-api-operator-tls" not found Feb 19 03:12:18.842973 master-0 kubenswrapper[7776]: I0219 03:12:18.842873 7776 scope.go:117] "RemoveContainer" containerID="e61212b37209e14fce7d3564e726376978dfd67ae4437e0380548c4d12109772" Feb 19 03:12:19.549166 master-0 kubenswrapper[7776]: I0219 03:12:19.549130 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/3.log" Feb 19 03:12:19.549912 master-0 kubenswrapper[7776]: I0219 03:12:19.549888 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/2.log" Feb 19 03:12:19.550951 master-0 kubenswrapper[7776]: I0219 03:12:19.550894 7776 generic.go:334] "Generic (PLEG): container finished" podID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerID="5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11" exitCode=1 Feb 19 03:12:19.551023 master-0 kubenswrapper[7776]: I0219 03:12:19.550957 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" event={"ID":"72a6892f-5a69-434b-9dea-11ad5de62a40","Type":"ContainerDied","Data":"5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11"} Feb 19 03:12:19.551072 master-0 kubenswrapper[7776]: I0219 03:12:19.551023 7776 scope.go:117] "RemoveContainer" containerID="e61212b37209e14fce7d3564e726376978dfd67ae4437e0380548c4d12109772" Feb 19 03:12:19.551830 master-0 kubenswrapper[7776]: I0219 03:12:19.551807 7776 scope.go:117] "RemoveContainer" containerID="5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11" Feb 19 03:12:19.552073 master-0 kubenswrapper[7776]: E0219 03:12:19.552038 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_openshift-cloud-controller-manager-operator(72a6892f-5a69-434b-9dea-11ad5de62a40)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" Feb 19 03:12:20.558879 master-0 kubenswrapper[7776]: I0219 03:12:20.558816 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/3.log" Feb 19 03:12:30.584508 master-0 kubenswrapper[7776]: I0219 03:12:30.584431 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9h524"] Feb 19 03:12:30.585131 master-0 kubenswrapper[7776]: I0219 03:12:30.584763 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9h524" podUID="9789abc0-e82f-4d1a-ba50-faf0075d9139" containerName="registry-server" 
containerID="cri-o://8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87" gracePeriod=2 Feb 19 03:12:30.670587 master-0 kubenswrapper[7776]: I0219 03:12:30.669559 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm"] Feb 19 03:12:30.670587 master-0 kubenswrapper[7776]: E0219 03:12:30.669774 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bddb3a1-41bd-4314-bfb0-3c72ca14200f" containerName="installer" Feb 19 03:12:30.670587 master-0 kubenswrapper[7776]: I0219 03:12:30.669786 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bddb3a1-41bd-4314-bfb0-3c72ca14200f" containerName="installer" Feb 19 03:12:30.670587 master-0 kubenswrapper[7776]: E0219 03:12:30.669796 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b05aeb-22a8-4008-a582-072f63cc46bf" containerName="installer" Feb 19 03:12:30.670587 master-0 kubenswrapper[7776]: I0219 03:12:30.669802 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b05aeb-22a8-4008-a582-072f63cc46bf" containerName="installer" Feb 19 03:12:30.670587 master-0 kubenswrapper[7776]: E0219 03:12:30.669817 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" containerName="installer" Feb 19 03:12:30.670587 master-0 kubenswrapper[7776]: I0219 03:12:30.669822 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" containerName="installer" Feb 19 03:12:30.670587 master-0 kubenswrapper[7776]: I0219 03:12:30.669961 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bddb3a1-41bd-4314-bfb0-3c72ca14200f" containerName="installer" Feb 19 03:12:30.670587 master-0 kubenswrapper[7776]: I0219 03:12:30.669973 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" containerName="installer" Feb 19 03:12:30.670587 master-0 kubenswrapper[7776]: I0219 03:12:30.669985 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b05aeb-22a8-4008-a582-072f63cc46bf" containerName="installer" Feb 19 03:12:30.670587 master-0 kubenswrapper[7776]: I0219 03:12:30.670376 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.672965 master-0 kubenswrapper[7776]: I0219 03:12:30.672917 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 03:12:30.697979 master-0 kubenswrapper[7776]: I0219 03:12:30.697918 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm"] Feb 19 03:12:30.725583 master-0 kubenswrapper[7776]: I0219 03:12:30.725319 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-apiservice-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.725583 master-0 kubenswrapper[7776]: I0219 03:12:30.725455 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmwjp\" (UniqueName: \"kubernetes.io/projected/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-kube-api-access-tmwjp\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.725890 master-0 kubenswrapper[7776]: I0219 03:12:30.725735 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-tmpfs\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.725958 master-0 kubenswrapper[7776]: I0219 03:12:30.725897 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-webhook-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.827049 master-0 kubenswrapper[7776]: I0219 03:12:30.826988 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-apiservice-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.827049 master-0 kubenswrapper[7776]: I0219 03:12:30.827044 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmwjp\" (UniqueName: \"kubernetes.io/projected/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-kube-api-access-tmwjp\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.827327 master-0 kubenswrapper[7776]: I0219 03:12:30.827291 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-tmpfs\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.827373 master-0 kubenswrapper[7776]: I0219 03:12:30.827343 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-webhook-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.828155 master-0 kubenswrapper[7776]: I0219 03:12:30.828099 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-tmpfs\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.830414 master-0 kubenswrapper[7776]: I0219 03:12:30.830377 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-apiservice-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.830789 master-0 kubenswrapper[7776]: I0219 03:12:30.830760 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-webhook-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.844735 master-0 kubenswrapper[7776]: I0219 03:12:30.844633 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwjp\" (UniqueName: \"kubernetes.io/projected/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-kube-api-access-tmwjp\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:30.968399 master-0 kubenswrapper[7776]: I0219 03:12:30.968274 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5t9dd"] Feb 19 03:12:30.970629 master-0 kubenswrapper[7776]: I0219 03:12:30.970379 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:30.973048 master-0 kubenswrapper[7776]: I0219 03:12:30.972984 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mrjgz" Feb 19 03:12:30.979182 master-0 kubenswrapper[7776]: I0219 03:12:30.979149 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:12:30.997048 master-0 kubenswrapper[7776]: I0219 03:12:30.996976 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5t9dd"] Feb 19 03:12:31.021652 master-0 kubenswrapper[7776]: I0219 03:12:31.021604 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:31.029791 master-0 kubenswrapper[7776]: I0219 03:12:31.029736 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l658w\" (UniqueName: \"kubernetes.io/projected/9789abc0-e82f-4d1a-ba50-faf0075d9139-kube-api-access-l658w\") pod \"9789abc0-e82f-4d1a-ba50-faf0075d9139\" (UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " Feb 19 03:12:31.029936 master-0 kubenswrapper[7776]: I0219 03:12:31.029864 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-utilities\") pod \"9789abc0-e82f-4d1a-ba50-faf0075d9139\" (UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " Feb 19 03:12:31.030519 master-0 kubenswrapper[7776]: I0219 03:12:31.029982 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-catalog-content\") pod \"9789abc0-e82f-4d1a-ba50-faf0075d9139\" (UID: \"9789abc0-e82f-4d1a-ba50-faf0075d9139\") " Feb 19 03:12:31.030519 master-0 kubenswrapper[7776]: I0219 03:12:31.030113 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2btm8\" (UniqueName: \"kubernetes.io/projected/ca82f2e9-884e-49d1-9863-e87212d01edc-kube-api-access-2btm8\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:31.030519 master-0 kubenswrapper[7776]: I0219 03:12:31.030171 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-catalog-content\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:31.030519 master-0 kubenswrapper[7776]: I0219 03:12:31.030325 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-utilities\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:31.031699 master-0 kubenswrapper[7776]: I0219 03:12:31.031654 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-utilities" (OuterVolumeSpecName: "utilities") pod "9789abc0-e82f-4d1a-ba50-faf0075d9139" (UID: "9789abc0-e82f-4d1a-ba50-faf0075d9139"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:12:31.039442 master-0 kubenswrapper[7776]: I0219 03:12:31.039388 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9789abc0-e82f-4d1a-ba50-faf0075d9139-kube-api-access-l658w" (OuterVolumeSpecName: "kube-api-access-l658w") pod "9789abc0-e82f-4d1a-ba50-faf0075d9139" (UID: "9789abc0-e82f-4d1a-ba50-faf0075d9139"). InnerVolumeSpecName "kube-api-access-l658w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:12:31.085302 master-0 kubenswrapper[7776]: I0219 03:12:31.084765 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9789abc0-e82f-4d1a-ba50-faf0075d9139" (UID: "9789abc0-e82f-4d1a-ba50-faf0075d9139"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:12:31.131315 master-0 kubenswrapper[7776]: I0219 03:12:31.131218 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2btm8\" (UniqueName: \"kubernetes.io/projected/ca82f2e9-884e-49d1-9863-e87212d01edc-kube-api-access-2btm8\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:31.131315 master-0 kubenswrapper[7776]: I0219 03:12:31.131296 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-catalog-content\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:31.131596 master-0 kubenswrapper[7776]: I0219 03:12:31.131363 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-utilities\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:31.131596 master-0 kubenswrapper[7776]: I0219 03:12:31.131399 7776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:31.131596 master-0 kubenswrapper[7776]: I0219 03:12:31.131412 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l658w\" (UniqueName: \"kubernetes.io/projected/9789abc0-e82f-4d1a-ba50-faf0075d9139-kube-api-access-l658w\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:31.131596 master-0 kubenswrapper[7776]: I0219 03:12:31.131423 7776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9789abc0-e82f-4d1a-ba50-faf0075d9139-utilities\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:31.131843 master-0 kubenswrapper[7776]: I0219 03:12:31.131820 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-utilities\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:31.132121 master-0 kubenswrapper[7776]: I0219 03:12:31.132080 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-catalog-content\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:31.148729 master-0 kubenswrapper[7776]: I0219 03:12:31.148564 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2btm8\" (UniqueName: \"kubernetes.io/projected/ca82f2e9-884e-49d1-9863-e87212d01edc-kube-api-access-2btm8\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:31.158774 master-0 kubenswrapper[7776]: I0219 03:12:31.158598 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2cczk"] Feb 19 03:12:31.159596 master-0 kubenswrapper[7776]: I0219 03:12:31.159323 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2cczk" podUID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" containerName="registry-server" containerID="cri-o://f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb" gracePeriod=2 Feb 19 03:12:31.302814 master-0 kubenswrapper[7776]: I0219 03:12:31.302748 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:31.422303 master-0 kubenswrapper[7776]: I0219 03:12:31.416716 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm"] Feb 19 03:12:31.427107 master-0 kubenswrapper[7776]: W0219 03:12:31.427057 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2576028c_40d8_4ef4_ba41_a5aff01f2ed3.slice/crio-2bcb98d1b68dc897f73c1a855233e9b02c59d6a1d42e70e57ef6fecb191978ff WatchSource:0}: Error finding container 2bcb98d1b68dc897f73c1a855233e9b02c59d6a1d42e70e57ef6fecb191978ff: Status 404 returned error can't find the container with id 2bcb98d1b68dc897f73c1a855233e9b02c59d6a1d42e70e57ef6fecb191978ff Feb 19 03:12:31.545750 master-0 kubenswrapper[7776]: I0219 03:12:31.545668 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: I0219 03:12:31.568218 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nrcnx"] Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: E0219 03:12:31.568714 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" containerName="registry-server" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: I0219 03:12:31.568730 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" containerName="registry-server" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: E0219 03:12:31.568745 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" containerName="extract-utilities" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: I0219 03:12:31.568753 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" containerName="extract-utilities" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: E0219 03:12:31.568769 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9789abc0-e82f-4d1a-ba50-faf0075d9139" containerName="extract-content" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: I0219 03:12:31.568777 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9789abc0-e82f-4d1a-ba50-faf0075d9139" containerName="extract-content" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: E0219 03:12:31.568786 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9789abc0-e82f-4d1a-ba50-faf0075d9139" containerName="registry-server" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: I0219 03:12:31.568794 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9789abc0-e82f-4d1a-ba50-faf0075d9139" containerName="registry-server" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: E0219 03:12:31.568816 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9789abc0-e82f-4d1a-ba50-faf0075d9139" containerName="extract-utilities" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: I0219 03:12:31.568824 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9789abc0-e82f-4d1a-ba50-faf0075d9139" containerName="extract-utilities" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: E0219 03:12:31.568838 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" containerName="extract-content" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: I0219 03:12:31.568846 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" containerName="extract-content" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: I0219 03:12:31.568966 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" containerName="registry-server" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: I0219 03:12:31.568981 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9789abc0-e82f-4d1a-ba50-faf0075d9139" containerName="registry-server" Feb 19 03:12:31.571475 master-0 kubenswrapper[7776]: I0219 03:12:31.569747 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:31.573092 master-0 kubenswrapper[7776]: I0219 03:12:31.572599 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-7rwgg" Feb 19 03:12:31.578820 master-0 kubenswrapper[7776]: I0219 03:12:31.578776 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrcnx"] Feb 19 03:12:31.630674 master-0 kubenswrapper[7776]: I0219 03:12:31.630619 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" event={"ID":"2576028c-40d8-4ef4-ba41-a5aff01f2ed3","Type":"ContainerStarted","Data":"6d905eba4c2c28c4c4ded12383dc44f73f235005223f7d4b1b3c002fda40d944"} Feb 19 03:12:31.630674 master-0 kubenswrapper[7776]: I0219 03:12:31.630671 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" event={"ID":"2576028c-40d8-4ef4-ba41-a5aff01f2ed3","Type":"ContainerStarted","Data":"2bcb98d1b68dc897f73c1a855233e9b02c59d6a1d42e70e57ef6fecb191978ff"} Feb 19 03:12:31.631292 master-0 kubenswrapper[7776]: I0219 03:12:31.631019 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:31.632473 master-0 kubenswrapper[7776]: I0219 03:12:31.632437 7776 generic.go:334] "Generic (PLEG): container finished" podID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" containerID="f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb" exitCode=0 Feb 19 03:12:31.632529 master-0 kubenswrapper[7776]: I0219 03:12:31.632491 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cczk" event={"ID":"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0","Type":"ContainerDied","Data":"f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb"} Feb 19 03:12:31.632529 master-0 kubenswrapper[7776]: I0219 03:12:31.632514 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cczk" event={"ID":"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0","Type":"ContainerDied","Data":"739328e3c80f5d92997dce3955ee26103a58c696507370455c7f3d7bb7efb16c"} Feb 19 03:12:31.632592 master-0 kubenswrapper[7776]: I0219 03:12:31.632531 7776 scope.go:117] "RemoveContainer" containerID="f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb" Feb 19 03:12:31.632677 master-0 kubenswrapper[7776]: I0219 03:12:31.632647 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cczk" Feb 19 03:12:31.635605 master-0 kubenswrapper[7776]: I0219 03:12:31.635543 7776 generic.go:334] "Generic (PLEG): container finished" podID="9789abc0-e82f-4d1a-ba50-faf0075d9139" containerID="8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87" exitCode=0 Feb 19 03:12:31.635605 master-0 kubenswrapper[7776]: I0219 03:12:31.635586 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9h524" Feb 19 03:12:31.635866 master-0 kubenswrapper[7776]: I0219 03:12:31.635589 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h524" event={"ID":"9789abc0-e82f-4d1a-ba50-faf0075d9139","Type":"ContainerDied","Data":"8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87"} Feb 19 03:12:31.635866 master-0 kubenswrapper[7776]: I0219 03:12:31.635658 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h524" event={"ID":"9789abc0-e82f-4d1a-ba50-faf0075d9139","Type":"ContainerDied","Data":"1512a730540d9efdc942e8b16c196674f3900de559f11753505a3ff018b1af97"} Feb 19 03:12:31.639772 master-0 kubenswrapper[7776]: I0219 03:12:31.639647 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhmkz\" (UniqueName: \"kubernetes.io/projected/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-kube-api-access-vhmkz\") pod \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " Feb 19 03:12:31.639772 master-0 kubenswrapper[7776]: I0219 03:12:31.639693 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-catalog-content\") pod \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " Feb 19 03:12:31.639772 master-0 kubenswrapper[7776]: I0219 03:12:31.639733 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-utilities\") pod \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\" (UID: \"30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0\") " Feb 19 03:12:31.639958 master-0 kubenswrapper[7776]: I0219 03:12:31.639882 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-utilities\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:31.639958 master-0 kubenswrapper[7776]: I0219 03:12:31.639926 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-catalog-content\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:31.640106 master-0 kubenswrapper[7776]: I0219 03:12:31.639975 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw2vc\" (UniqueName: \"kubernetes.io/projected/dabc3c9b-ed58-4fd4-8735-65d504fa299a-kube-api-access-vw2vc\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:31.643449 master-0 kubenswrapper[7776]: I0219 03:12:31.643409 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-kube-api-access-vhmkz" (OuterVolumeSpecName: "kube-api-access-vhmkz") pod "30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" (UID: "30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0"). InnerVolumeSpecName "kube-api-access-vhmkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:12:31.644040 master-0 kubenswrapper[7776]: I0219 03:12:31.643989 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-utilities" (OuterVolumeSpecName: "utilities") pod "30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" (UID: "30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:12:31.653924 master-0 kubenswrapper[7776]: I0219 03:12:31.653567 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" podStartSLOduration=1.653544795 podStartE2EDuration="1.653544795s" podCreationTimestamp="2026-02-19 03:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:12:31.652944788 +0000 UTC m=+457.992629316" watchObservedRunningTime="2026-02-19 03:12:31.653544795 +0000 UTC m=+457.993229313" Feb 19 03:12:31.656190 master-0 kubenswrapper[7776]: I0219 03:12:31.656153 7776 scope.go:117] "RemoveContainer" containerID="e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570" Feb 19 03:12:31.689940 master-0 kubenswrapper[7776]: I0219 03:12:31.689777 7776 scope.go:117] "RemoveContainer" containerID="3af9e816fd774706696bcf079d7a8a989e71d0d5df845d35e7bd6905eb4f3ec2" Feb 19 03:12:31.691886 master-0 kubenswrapper[7776]: I0219 03:12:31.691018 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9h524"] Feb 19 03:12:31.700730 master-0 kubenswrapper[7776]: I0219 03:12:31.700676 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9h524"] Feb 19 03:12:31.709499 master-0 kubenswrapper[7776]: I0219 03:12:31.709443 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" (UID: "30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:12:31.728281 master-0 kubenswrapper[7776]: W0219 03:12:31.728196 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca82f2e9_884e_49d1_9863_e87212d01edc.slice/crio-31f0caeb4e0573e4a148b9c44d3f2f8155d69135fdefa05921e7738e4aa0f4e6 WatchSource:0}: Error finding container 31f0caeb4e0573e4a148b9c44d3f2f8155d69135fdefa05921e7738e4aa0f4e6: Status 404 returned error can't find the container with id 31f0caeb4e0573e4a148b9c44d3f2f8155d69135fdefa05921e7738e4aa0f4e6 Feb 19 03:12:31.728703 master-0 kubenswrapper[7776]: I0219 03:12:31.728658 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5t9dd"] Feb 19 03:12:31.739437 master-0 kubenswrapper[7776]: I0219 03:12:31.738111 7776 scope.go:117] "RemoveContainer" containerID="f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb" Feb 19 03:12:31.739437 master-0 kubenswrapper[7776]: E0219 03:12:31.738457 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb\": container with ID starting with f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb not found: ID does not exist" containerID="f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb" Feb 19 03:12:31.739437 master-0 kubenswrapper[7776]: I0219 03:12:31.738484 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb"} err="failed to get container status \"f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb\": rpc error: code = NotFound desc = could not find container \"f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb\": container with ID starting with f393dafec5a2598878299bf8440520944f593cd670dab0f0d731dcb9440317eb not found: ID does not exist" Feb 19 03:12:31.739437 master-0 kubenswrapper[7776]: I0219 03:12:31.738504 7776 scope.go:117] "RemoveContainer" containerID="e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570" Feb 19 03:12:31.739437 master-0 kubenswrapper[7776]: E0219 03:12:31.739059 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570\": container with ID starting with e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570 not found: ID does not exist" containerID="e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570" Feb 19 03:12:31.739437 master-0 kubenswrapper[7776]: I0219 03:12:31.739107 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570"} err="failed to get container status \"e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570\": rpc error: code = NotFound desc = could not find container \"e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570\": container with ID starting with e7c585b3bf436877ff2506f4d96fa6745a77b8972e65d0d72469bd8d00f64570 not found: ID does not exist" Feb 19 03:12:31.739437 master-0 kubenswrapper[7776]: I0219 03:12:31.739140 7776 scope.go:117] "RemoveContainer" containerID="3af9e816fd774706696bcf079d7a8a989e71d0d5df845d35e7bd6905eb4f3ec2" Feb 19 03:12:31.740015 
master-0 kubenswrapper[7776]: E0219 03:12:31.739818 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af9e816fd774706696bcf079d7a8a989e71d0d5df845d35e7bd6905eb4f3ec2\": container with ID starting with 3af9e816fd774706696bcf079d7a8a989e71d0d5df845d35e7bd6905eb4f3ec2 not found: ID does not exist" containerID="3af9e816fd774706696bcf079d7a8a989e71d0d5df845d35e7bd6905eb4f3ec2" Feb 19 03:12:31.740015 master-0 kubenswrapper[7776]: I0219 03:12:31.739849 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af9e816fd774706696bcf079d7a8a989e71d0d5df845d35e7bd6905eb4f3ec2"} err="failed to get container status \"3af9e816fd774706696bcf079d7a8a989e71d0d5df845d35e7bd6905eb4f3ec2\": rpc error: code = NotFound desc = could not find container \"3af9e816fd774706696bcf079d7a8a989e71d0d5df845d35e7bd6905eb4f3ec2\": container with ID starting with 3af9e816fd774706696bcf079d7a8a989e71d0d5df845d35e7bd6905eb4f3ec2 not found: ID does not exist" Feb 19 03:12:31.740015 master-0 kubenswrapper[7776]: I0219 03:12:31.739873 7776 scope.go:117] "RemoveContainer" containerID="8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87" Feb 19 03:12:31.741610 master-0 kubenswrapper[7776]: I0219 03:12:31.741143 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-utilities\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:31.741610 master-0 kubenswrapper[7776]: I0219 03:12:31.741201 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-catalog-content\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:31.741610 master-0 kubenswrapper[7776]: I0219 03:12:31.741294 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw2vc\" (UniqueName: \"kubernetes.io/projected/dabc3c9b-ed58-4fd4-8735-65d504fa299a-kube-api-access-vw2vc\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:31.741610 master-0 kubenswrapper[7776]: I0219 03:12:31.741407 7776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:31.741610 master-0 kubenswrapper[7776]: I0219 03:12:31.741423 7776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-utilities\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:31.741610 master-0 kubenswrapper[7776]: I0219 03:12:31.741437 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhmkz\" (UniqueName: \"kubernetes.io/projected/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0-kube-api-access-vhmkz\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:31.742606 master-0 kubenswrapper[7776]: I0219 03:12:31.741915 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-catalog-content\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:31.742606 master-0 kubenswrapper[7776]: I0219 03:12:31.742224 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-utilities\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:31.761399 master-0 kubenswrapper[7776]: I0219 03:12:31.761331 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw2vc\" (UniqueName: \"kubernetes.io/projected/dabc3c9b-ed58-4fd4-8735-65d504fa299a-kube-api-access-vw2vc\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:31.826275 master-0 kubenswrapper[7776]: I0219 03:12:31.826173 7776 scope.go:117] "RemoveContainer" containerID="6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f" Feb 19 03:12:31.854462 master-0 kubenswrapper[7776]: I0219 03:12:31.854404 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9789abc0-e82f-4d1a-ba50-faf0075d9139" path="/var/lib/kubelet/pods/9789abc0-e82f-4d1a-ba50-faf0075d9139/volumes" Feb 19 03:12:31.855959 master-0 kubenswrapper[7776]: I0219 03:12:31.855284 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-j2wxd"] Feb 19 03:12:31.861689 master-0 kubenswrapper[7776]: I0219 03:12:31.859343 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:31.867163 master-0 kubenswrapper[7776]: I0219 03:12:31.862043 7776 scope.go:117] "RemoveContainer" containerID="f903a9bf3809c42320f284866ae2d6c7a4383a86032dfc5b429c3f33f97b1cfa" Feb 19 03:12:31.867163 master-0 kubenswrapper[7776]: I0219 03:12:31.864367 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 03:12:31.892879 master-0 kubenswrapper[7776]: I0219 03:12:31.892739 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:31.903524 master-0 kubenswrapper[7776]: I0219 03:12:31.903488 7776 scope.go:117] "RemoveContainer" containerID="8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87" Feb 19 03:12:31.904173 master-0 kubenswrapper[7776]: E0219 03:12:31.904096 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87\": container with ID starting with 8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87 not found: ID does not exist" containerID="8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87" Feb 19 03:12:31.904242 master-0 kubenswrapper[7776]: I0219 03:12:31.904197 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87"} err="failed to get container status \"8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87\": rpc error: code = NotFound desc = could not find container \"8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87\": container with ID starting with 8e1521b4b80fac2a4c624f0d2168204309e6a444e36074f2117fc1e1994caa87 not found: ID does not exist" Feb 19 03:12:31.904306 master-0 kubenswrapper[7776]: I0219 03:12:31.904277 7776 scope.go:117] "RemoveContainer" containerID="6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f" Feb 19 03:12:31.904739 master-0 kubenswrapper[7776]: E0219 03:12:31.904709 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f\": container with ID starting with 6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f not found: ID does not exist" containerID="6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f" Feb 19 03:12:31.904796 master-0 kubenswrapper[7776]: I0219 03:12:31.904750 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f"} err="failed to get container status \"6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f\": rpc error: code = NotFound desc = could not find container \"6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f\": container with ID starting with 6c5f1d203badc672bf2d060c9a59508acf9d8e52f5b4bb20caf8f945a941566f not found: ID does not exist" Feb 19 03:12:31.904796 master-0 kubenswrapper[7776]: I0219 03:12:31.904777 7776 scope.go:117] "RemoveContainer" containerID="f903a9bf3809c42320f284866ae2d6c7a4383a86032dfc5b429c3f33f97b1cfa" Feb 19 03:12:31.905003 master-0 kubenswrapper[7776]: E0219 03:12:31.904984 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f903a9bf3809c42320f284866ae2d6c7a4383a86032dfc5b429c3f33f97b1cfa\": container with ID starting with f903a9bf3809c42320f284866ae2d6c7a4383a86032dfc5b429c3f33f97b1cfa not found: ID does not exist" containerID="f903a9bf3809c42320f284866ae2d6c7a4383a86032dfc5b429c3f33f97b1cfa" Feb 19 03:12:31.905035 master-0 kubenswrapper[7776]: I0219 03:12:31.905007 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f903a9bf3809c42320f284866ae2d6c7a4383a86032dfc5b429c3f33f97b1cfa"} err="failed to get 
container status \"f903a9bf3809c42320f284866ae2d6c7a4383a86032dfc5b429c3f33f97b1cfa\": rpc error: code = NotFound desc = could not find container \"f903a9bf3809c42320f284866ae2d6c7a4383a86032dfc5b429c3f33f97b1cfa\": container with ID starting with f903a9bf3809c42320f284866ae2d6c7a4383a86032dfc5b429c3f33f97b1cfa not found: ID does not exist" Feb 19 03:12:31.943652 master-0 kubenswrapper[7776]: I0219 03:12:31.943576 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b137033-0db2-46c9-a526-f8234345e883-mcd-auth-proxy-config\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:31.943960 master-0 kubenswrapper[7776]: I0219 03:12:31.943759 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clddw\" (UniqueName: \"kubernetes.io/projected/7b137033-0db2-46c9-a526-f8234345e883-kube-api-access-clddw\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:31.943960 master-0 kubenswrapper[7776]: I0219 03:12:31.943793 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b137033-0db2-46c9-a526-f8234345e883-proxy-tls\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:31.943960 master-0 kubenswrapper[7776]: I0219 03:12:31.943820 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7b137033-0db2-46c9-a526-f8234345e883-rootfs\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:32.005822 master-0 kubenswrapper[7776]: I0219 03:12:32.005342 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2cczk"] Feb 19 03:12:32.007989 master-0 kubenswrapper[7776]: I0219 03:12:32.007914 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2cczk"] Feb 19 03:12:32.045376 master-0 kubenswrapper[7776]: I0219 03:12:32.045316 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b137033-0db2-46c9-a526-f8234345e883-mcd-auth-proxy-config\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:32.046567 master-0 kubenswrapper[7776]: I0219 03:12:32.045731 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clddw\" (UniqueName: \"kubernetes.io/projected/7b137033-0db2-46c9-a526-f8234345e883-kube-api-access-clddw\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:32.046567 master-0 kubenswrapper[7776]: I0219 03:12:32.045796 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/7b137033-0db2-46c9-a526-f8234345e883-proxy-tls\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:32.046567 master-0 kubenswrapper[7776]: I0219 03:12:32.045850 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7b137033-0db2-46c9-a526-f8234345e883-rootfs\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:32.046567 master-0 kubenswrapper[7776]: I0219 03:12:32.045944 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7b137033-0db2-46c9-a526-f8234345e883-rootfs\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:32.046567 master-0 kubenswrapper[7776]: I0219 03:12:32.046272 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b137033-0db2-46c9-a526-f8234345e883-mcd-auth-proxy-config\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:32.049819 master-0 kubenswrapper[7776]: I0219 03:12:32.049688 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b137033-0db2-46c9-a526-f8234345e883-proxy-tls\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:32.071332 master-0 kubenswrapper[7776]: I0219 03:12:32.069292 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clddw\" (UniqueName: \"kubernetes.io/projected/7b137033-0db2-46c9-a526-f8234345e883-kube-api-access-clddw\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:32.145890 master-0 kubenswrapper[7776]: I0219 03:12:32.145781 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:12:32.219574 master-0 kubenswrapper[7776]: I0219 03:12:32.219510 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:12:32.244575 master-0 kubenswrapper[7776]: W0219 03:12:32.244536 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b137033_0db2_46c9_a526_f8234345e883.slice/crio-5305f7e6ea5f104f1b4e810f1ceec9db5f5fd632e430c871c365b093c1832c48 WatchSource:0}: Error finding container 5305f7e6ea5f104f1b4e810f1ceec9db5f5fd632e430c871c365b093c1832c48: Status 404 returned error can't find the container with id 5305f7e6ea5f104f1b4e810f1ceec9db5f5fd632e430c871c365b093c1832c48 Feb 19 03:12:32.334947 master-0 kubenswrapper[7776]: I0219 03:12:32.334887 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrcnx"] Feb 19 03:12:32.645401 master-0 kubenswrapper[7776]: I0219 03:12:32.645347 7776 generic.go:334] "Generic (PLEG): container finished" podID="dabc3c9b-ed58-4fd4-8735-65d504fa299a" containerID="11e063f31f05dce30b3ceadd89b21b5514f82e1cb9cd2eef54bba9d4c7adf163" exitCode=0 Feb 19 03:12:32.646296 master-0 kubenswrapper[7776]: I0219 03:12:32.645404 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrcnx" event={"ID":"dabc3c9b-ed58-4fd4-8735-65d504fa299a","Type":"ContainerDied","Data":"11e063f31f05dce30b3ceadd89b21b5514f82e1cb9cd2eef54bba9d4c7adf163"} Feb 19 03:12:32.646296 master-0 kubenswrapper[7776]: I0219 03:12:32.645479 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrcnx" event={"ID":"dabc3c9b-ed58-4fd4-8735-65d504fa299a","Type":"ContainerStarted","Data":"c20f637b2a13dfb247a3370a860f01309bff13bd9c879b2139d436b648ea6361"} Feb 19 03:12:32.647775 master-0 kubenswrapper[7776]: I0219 03:12:32.647720 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" event={"ID":"7b137033-0db2-46c9-a526-f8234345e883","Type":"ContainerStarted","Data":"3a2bd54be254e2dfb722c762774d77a39dbb203b4a04fcb2b2cbbd84248ff228"} Feb 19 03:12:32.647775 master-0 kubenswrapper[7776]: I0219 03:12:32.647762 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" event={"ID":"7b137033-0db2-46c9-a526-f8234345e883","Type":"ContainerStarted","Data":"a035f8a199f66c3eefdfeb2e0dd1c4cc5afc90f694606432ed379b6d4fbcacff"} Feb 19 03:12:32.647775 master-0 kubenswrapper[7776]: I0219 03:12:32.647774 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" event={"ID":"7b137033-0db2-46c9-a526-f8234345e883","Type":"ContainerStarted","Data":"5305f7e6ea5f104f1b4e810f1ceec9db5f5fd632e430c871c365b093c1832c48"} Feb 19 03:12:32.652541 master-0 kubenswrapper[7776]: I0219 03:12:32.652327 7776 generic.go:334] "Generic (PLEG): container finished" podID="ca82f2e9-884e-49d1-9863-e87212d01edc" containerID="47a9a4e021740b3522fd1067cdf04d17a49d5aecb4e553dbb6033c10cc4cadea" exitCode=0 Feb 19 03:12:32.652541 master-0 kubenswrapper[7776]: I0219 03:12:32.652363 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5t9dd" event={"ID":"ca82f2e9-884e-49d1-9863-e87212d01edc","Type":"ContainerDied","Data":"47a9a4e021740b3522fd1067cdf04d17a49d5aecb4e553dbb6033c10cc4cadea"} Feb 19 03:12:32.652541 master-0 kubenswrapper[7776]: I0219 03:12:32.652398 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-5t9dd" event={"ID":"ca82f2e9-884e-49d1-9863-e87212d01edc","Type":"ContainerStarted","Data":"31f0caeb4e0573e4a148b9c44d3f2f8155d69135fdefa05921e7738e4aa0f4e6"} Feb 19 03:12:32.689551 master-0 kubenswrapper[7776]: I0219 03:12:32.689476 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" podStartSLOduration=1.68945604 podStartE2EDuration="1.68945604s" podCreationTimestamp="2026-02-19 03:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:12:32.687772081 +0000 UTC m=+459.027456609" watchObservedRunningTime="2026-02-19 03:12:32.68945604 +0000 UTC m=+459.029140558" Feb 19 03:12:33.366575 master-0 kubenswrapper[7776]: I0219 03:12:33.366529 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwt4t"] Feb 19 03:12:33.366874 master-0 kubenswrapper[7776]: I0219 03:12:33.366843 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lwt4t" podUID="76050135-a8a1-4968-9a00-2d251c17f8b8" containerName="registry-server" containerID="cri-o://9dbf591511f3015176a71f524f75ea44459f2fd46e24cc64eacbcb84c285b728" gracePeriod=2 Feb 19 03:12:33.665267 master-0 kubenswrapper[7776]: I0219 03:12:33.665212 7776 generic.go:334] "Generic (PLEG): container finished" podID="ca82f2e9-884e-49d1-9863-e87212d01edc" containerID="884fb08aaaf4688bc340b7a7dc22d08a23af01fd1a5c49b78e0797dec6266347" exitCode=0 Feb 19 03:12:33.665850 master-0 kubenswrapper[7776]: I0219 03:12:33.665297 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5t9dd" event={"ID":"ca82f2e9-884e-49d1-9863-e87212d01edc","Type":"ContainerDied","Data":"884fb08aaaf4688bc340b7a7dc22d08a23af01fd1a5c49b78e0797dec6266347"} Feb 19 03:12:33.669338 master-0 kubenswrapper[7776]: I0219 03:12:33.669300 7776 generic.go:334] "Generic (PLEG): container finished" podID="dabc3c9b-ed58-4fd4-8735-65d504fa299a" containerID="250455e2350c62e9673222f5b8f6250c1b8079eede15297818337eff7b21a5a3" exitCode=0 Feb 19 03:12:33.669428 master-0 kubenswrapper[7776]: I0219 03:12:33.669379 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrcnx" event={"ID":"dabc3c9b-ed58-4fd4-8735-65d504fa299a","Type":"ContainerDied","Data":"250455e2350c62e9673222f5b8f6250c1b8079eede15297818337eff7b21a5a3"} Feb 19 03:12:33.674199 master-0 kubenswrapper[7776]: I0219 03:12:33.674136 7776 generic.go:334] "Generic (PLEG): container finished" podID="76050135-a8a1-4968-9a00-2d251c17f8b8" containerID="9dbf591511f3015176a71f524f75ea44459f2fd46e24cc64eacbcb84c285b728" exitCode=0 Feb 19 03:12:33.674324 master-0 kubenswrapper[7776]: I0219 03:12:33.674265 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwt4t" event={"ID":"76050135-a8a1-4968-9a00-2d251c17f8b8","Type":"ContainerDied","Data":"9dbf591511f3015176a71f524f75ea44459f2fd46e24cc64eacbcb84c285b728"} Feb 19 03:12:33.769195 master-0 kubenswrapper[7776]: I0219 03:12:33.769088 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nqnbc"] Feb 19 03:12:33.771945 master-0 kubenswrapper[7776]: I0219 03:12:33.771799 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:33.774001 master-0 kubenswrapper[7776]: I0219 03:12:33.773944 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-5msgd" Feb 19 03:12:33.783470 master-0 kubenswrapper[7776]: I0219 03:12:33.783420 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:12:33.786065 master-0 kubenswrapper[7776]: I0219 03:12:33.786023 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqnbc"] Feb 19 03:12:33.868720 master-0 kubenswrapper[7776]: I0219 03:12:33.863039 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0" path="/var/lib/kubelet/pods/30bc418a-e8f3-4c48-8ac7-f3645ec5a0e0/volumes" Feb 19 03:12:33.960203 master-0 kubenswrapper[7776]: I0219 03:12:33.960130 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spsn7"] Feb 19 03:12:33.960522 master-0 kubenswrapper[7776]: I0219 03:12:33.960416 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-spsn7" podUID="543aef8d-960a-42c9-b1fd-954e2d024002" containerName="registry-server" containerID="cri-o://d78e62e78b262908533db4b07e0adc537376985d3006aaed9e0ce93af55f76bd" gracePeriod=2 Feb 19 03:12:33.978126 master-0 kubenswrapper[7776]: I0219 03:12:33.977890 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-utilities\") pod \"76050135-a8a1-4968-9a00-2d251c17f8b8\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " Feb 19 03:12:33.978126 master-0 kubenswrapper[7776]: I0219 03:12:33.977983 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t78l7\" (UniqueName: \"kubernetes.io/projected/76050135-a8a1-4968-9a00-2d251c17f8b8-kube-api-access-t78l7\") pod \"76050135-a8a1-4968-9a00-2d251c17f8b8\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " Feb 19 03:12:33.978126 master-0 kubenswrapper[7776]: I0219 03:12:33.978016 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-catalog-content\") pod \"76050135-a8a1-4968-9a00-2d251c17f8b8\" (UID: \"76050135-a8a1-4968-9a00-2d251c17f8b8\") " Feb 19 03:12:33.978565 master-0 kubenswrapper[7776]: I0219 03:12:33.978195 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htmbc\" (UniqueName: \"kubernetes.io/projected/546cf649-8e0d-4c8a-a197-412db42e36b6-kube-api-access-htmbc\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:33.978565 master-0 kubenswrapper[7776]: I0219 03:12:33.978237 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-catalog-content\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:33.978565 master-0 kubenswrapper[7776]: I0219 03:12:33.978323 7776 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-utilities\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:33.979193 master-0 kubenswrapper[7776]: I0219 03:12:33.979154 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-utilities" (OuterVolumeSpecName: "utilities") pod "76050135-a8a1-4968-9a00-2d251c17f8b8" (UID: "76050135-a8a1-4968-9a00-2d251c17f8b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:12:33.982279 master-0 kubenswrapper[7776]: I0219 03:12:33.982138 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76050135-a8a1-4968-9a00-2d251c17f8b8-kube-api-access-t78l7" (OuterVolumeSpecName: "kube-api-access-t78l7") pod "76050135-a8a1-4968-9a00-2d251c17f8b8" (UID: "76050135-a8a1-4968-9a00-2d251c17f8b8"). InnerVolumeSpecName "kube-api-access-t78l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:12:34.007573 master-0 kubenswrapper[7776]: I0219 03:12:34.007465 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76050135-a8a1-4968-9a00-2d251c17f8b8" (UID: "76050135-a8a1-4968-9a00-2d251c17f8b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:12:34.079502 master-0 kubenswrapper[7776]: I0219 03:12:34.079179 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htmbc\" (UniqueName: \"kubernetes.io/projected/546cf649-8e0d-4c8a-a197-412db42e36b6-kube-api-access-htmbc\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:34.079502 master-0 kubenswrapper[7776]: I0219 03:12:34.079238 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-catalog-content\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:34.079670 master-0 kubenswrapper[7776]: I0219 03:12:34.079535 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-utilities\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:34.079670 master-0 kubenswrapper[7776]: I0219 03:12:34.079639 7776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-utilities\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:34.079670 master-0 kubenswrapper[7776]: I0219 03:12:34.079655 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t78l7\" (UniqueName: \"kubernetes.io/projected/76050135-a8a1-4968-9a00-2d251c17f8b8-kube-api-access-t78l7\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:34.079670 master-0 kubenswrapper[7776]: 
I0219 03:12:34.079665 7776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76050135-a8a1-4968-9a00-2d251c17f8b8-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:34.080101 master-0 kubenswrapper[7776]: I0219 03:12:34.080029 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-catalog-content\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:34.080101 master-0 kubenswrapper[7776]: I0219 03:12:34.080095 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-utilities\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:34.104866 master-0 kubenswrapper[7776]: I0219 03:12:34.104801 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htmbc\" (UniqueName: \"kubernetes.io/projected/546cf649-8e0d-4c8a-a197-412db42e36b6-kube-api-access-htmbc\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:34.123911 master-0 kubenswrapper[7776]: I0219 03:12:34.123855 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:34.376381 master-0 kubenswrapper[7776]: I0219 03:12:34.376336 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v9c2b"] Feb 19 03:12:34.376562 master-0 kubenswrapper[7776]: E0219 03:12:34.376543 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76050135-a8a1-4968-9a00-2d251c17f8b8" containerName="registry-server" Feb 19 03:12:34.376562 master-0 kubenswrapper[7776]: I0219 03:12:34.376558 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="76050135-a8a1-4968-9a00-2d251c17f8b8" containerName="registry-server" Feb 19 03:12:34.376634 master-0 kubenswrapper[7776]: E0219 03:12:34.376570 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76050135-a8a1-4968-9a00-2d251c17f8b8" containerName="extract-content" Feb 19 03:12:34.376634 master-0 kubenswrapper[7776]: I0219 03:12:34.376576 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="76050135-a8a1-4968-9a00-2d251c17f8b8" containerName="extract-content" Feb 19 03:12:34.376634 master-0 kubenswrapper[7776]: E0219 03:12:34.376591 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76050135-a8a1-4968-9a00-2d251c17f8b8" containerName="extract-utilities" Feb 19 03:12:34.376634 master-0 kubenswrapper[7776]: I0219 03:12:34.376597 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="76050135-a8a1-4968-9a00-2d251c17f8b8" containerName="extract-utilities" Feb 19 03:12:34.376739 master-0 kubenswrapper[7776]: I0219 03:12:34.376696 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="76050135-a8a1-4968-9a00-2d251c17f8b8" containerName="registry-server" Feb 19 03:12:34.378588 master-0 kubenswrapper[7776]: I0219 03:12:34.378539 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:34.384694 master-0 kubenswrapper[7776]: I0219 03:12:34.384657 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-g7dwh" Feb 19 03:12:34.385226 master-0 kubenswrapper[7776]: I0219 03:12:34.385183 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfd6c\" (UniqueName: \"kubernetes.io/projected/76529f4c-70b1-4fcb-ba48-ae929228f9fc-kube-api-access-wfd6c\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:34.385324 master-0 kubenswrapper[7776]: I0219 03:12:34.385273 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-utilities\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:34.385324 master-0 kubenswrapper[7776]: I0219 03:12:34.385305 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-catalog-content\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:34.388386 master-0 kubenswrapper[7776]: I0219 03:12:34.388338 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9c2b"] Feb 19 03:12:34.486411 master-0 kubenswrapper[7776]: I0219 03:12:34.486354 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-utilities\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:34.486582 master-0 kubenswrapper[7776]: I0219 03:12:34.486512 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-catalog-content\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:34.486671 master-0 kubenswrapper[7776]: I0219 03:12:34.486645 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfd6c\" (UniqueName: \"kubernetes.io/projected/76529f4c-70b1-4fcb-ba48-ae929228f9fc-kube-api-access-wfd6c\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:34.486950 master-0 kubenswrapper[7776]: I0219 03:12:34.486911 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-utilities\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:34.487277 master-0 kubenswrapper[7776]: I0219 03:12:34.487230 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-catalog-content\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:34.503038 master-0 kubenswrapper[7776]: I0219 03:12:34.502983 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfd6c\" (UniqueName: \"kubernetes.io/projected/76529f4c-70b1-4fcb-ba48-ae929228f9fc-kube-api-access-wfd6c\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:34.529046 master-0 kubenswrapper[7776]: I0219 03:12:34.527499 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqnbc"] Feb 19 03:12:34.544920 master-0 kubenswrapper[7776]: W0219 03:12:34.544866 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod546cf649_8e0d_4c8a_a197_412db42e36b6.slice/crio-f203fd813bb9fb33eb11a0b15b04ff2b9379aba784360def5e2df17965add9cd WatchSource:0}: Error finding container f203fd813bb9fb33eb11a0b15b04ff2b9379aba784360def5e2df17965add9cd: Status 404 returned error can't find the container with id f203fd813bb9fb33eb11a0b15b04ff2b9379aba784360def5e2df17965add9cd Feb 19 03:12:34.685684 master-0 kubenswrapper[7776]: I0219 03:12:34.685629 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5t9dd" event={"ID":"ca82f2e9-884e-49d1-9863-e87212d01edc","Type":"ContainerStarted","Data":"bbd4d61e3d87a47842e436ba8fddb31458f2257d5e39e5615f70593ef08fd794"} Feb 19 03:12:34.687558 master-0 kubenswrapper[7776]: I0219 03:12:34.687507 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqnbc" event={"ID":"546cf649-8e0d-4c8a-a197-412db42e36b6","Type":"ContainerStarted","Data":"f203fd813bb9fb33eb11a0b15b04ff2b9379aba784360def5e2df17965add9cd"} Feb 19 03:12:34.691625 master-0 kubenswrapper[7776]: I0219 03:12:34.691587 7776 generic.go:334] "Generic (PLEG): container finished" podID="543aef8d-960a-42c9-b1fd-954e2d024002" containerID="d78e62e78b262908533db4b07e0adc537376985d3006aaed9e0ce93af55f76bd" exitCode=0 Feb 19 03:12:34.691706 master-0 kubenswrapper[7776]: I0219 03:12:34.691641 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spsn7" event={"ID":"543aef8d-960a-42c9-b1fd-954e2d024002","Type":"ContainerDied","Data":"d78e62e78b262908533db4b07e0adc537376985d3006aaed9e0ce93af55f76bd"} Feb 19 03:12:34.694505 master-0 kubenswrapper[7776]: I0219 03:12:34.694446 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrcnx" event={"ID":"dabc3c9b-ed58-4fd4-8735-65d504fa299a","Type":"ContainerStarted","Data":"a150e510c99a10539c2b857f13a75869e637a9e41a6d41508033541e07267140"} Feb 19 03:12:34.696985 master-0 kubenswrapper[7776]: I0219 03:12:34.696935 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwt4t" event={"ID":"76050135-a8a1-4968-9a00-2d251c17f8b8","Type":"ContainerDied","Data":"bc89aa84bf98aec00234431ef6e9f2fe11646021dc54fa12055d336972870e19"} Feb 19 03:12:34.697054 master-0 kubenswrapper[7776]: I0219 03:12:34.696994 7776 scope.go:117] "RemoveContainer" containerID="9dbf591511f3015176a71f524f75ea44459f2fd46e24cc64eacbcb84c285b728" Feb 19 03:12:34.697169 master-0 kubenswrapper[7776]: I0219 
03:12:34.697141 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwt4t" Feb 19 03:12:34.710220 master-0 kubenswrapper[7776]: I0219 03:12:34.710132 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5t9dd" podStartSLOduration=3.293910567 podStartE2EDuration="4.710109012s" podCreationTimestamp="2026-02-19 03:12:30 +0000 UTC" firstStartedPulling="2026-02-19 03:12:32.653591807 +0000 UTC m=+458.993276325" lastFinishedPulling="2026-02-19 03:12:34.069790252 +0000 UTC m=+460.409474770" observedRunningTime="2026-02-19 03:12:34.706150156 +0000 UTC m=+461.045834674" watchObservedRunningTime="2026-02-19 03:12:34.710109012 +0000 UTC m=+461.049793530" Feb 19 03:12:34.715020 master-0 kubenswrapper[7776]: I0219 03:12:34.714962 7776 scope.go:117] "RemoveContainer" containerID="9e1f925dcef405e11a0cd39d3f095f51ec32450e7b276a65d93d2396c9594fa0" Feb 19 03:12:34.732392 master-0 kubenswrapper[7776]: I0219 03:12:34.732286 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nrcnx" podStartSLOduration=2.301164648 podStartE2EDuration="3.732246765s" podCreationTimestamp="2026-02-19 03:12:31 +0000 UTC" firstStartedPulling="2026-02-19 03:12:32.649297163 +0000 UTC m=+458.988981711" lastFinishedPulling="2026-02-19 03:12:34.08037931 +0000 UTC m=+460.420063828" observedRunningTime="2026-02-19 03:12:34.729126595 +0000 UTC m=+461.068811133" watchObservedRunningTime="2026-02-19 03:12:34.732246765 +0000 UTC m=+461.071931283" Feb 19 03:12:34.746103 master-0 kubenswrapper[7776]: I0219 03:12:34.746043 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwt4t"] Feb 19 03:12:34.752370 master-0 kubenswrapper[7776]: I0219 03:12:34.752170 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwt4t"] Feb 19 03:12:34.775615 master-0 kubenswrapper[7776]: I0219 03:12:34.769561 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:34.829201 master-0 kubenswrapper[7776]: I0219 03:12:34.829160 7776 scope.go:117] "RemoveContainer" containerID="47f7612f33e0cf94efc85de328887bb9cd61c80cca5e28cf021feca142ca3510" Feb 19 03:12:34.836801 master-0 kubenswrapper[7776]: I0219 03:12:34.836768 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:12:34.843130 master-0 kubenswrapper[7776]: I0219 03:12:34.843100 7776 scope.go:117] "RemoveContainer" containerID="5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11" Feb 19 03:12:34.843366 master-0 kubenswrapper[7776]: E0219 03:12:34.843336 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_openshift-cloud-controller-manager-operator(72a6892f-5a69-434b-9dea-11ad5de62a40)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" Feb 19 03:12:34.994380 master-0 kubenswrapper[7776]: I0219 03:12:34.994152 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwxjf\" (UniqueName: \"kubernetes.io/projected/543aef8d-960a-42c9-b1fd-954e2d024002-kube-api-access-lwxjf\") pod \"543aef8d-960a-42c9-b1fd-954e2d024002\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " Feb 19 03:12:34.994380 master-0 kubenswrapper[7776]: I0219 03:12:34.994311 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-catalog-content\") pod \"543aef8d-960a-42c9-b1fd-954e2d024002\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " Feb 19 03:12:34.994622 master-0 kubenswrapper[7776]: I0219 03:12:34.994435 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-utilities\") pod \"543aef8d-960a-42c9-b1fd-954e2d024002\" (UID: \"543aef8d-960a-42c9-b1fd-954e2d024002\") " Feb 19 03:12:34.995201 master-0 kubenswrapper[7776]: I0219 03:12:34.995146 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-utilities" (OuterVolumeSpecName: "utilities") pod "543aef8d-960a-42c9-b1fd-954e2d024002" (UID: "543aef8d-960a-42c9-b1fd-954e2d024002"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:12:35.000687 master-0 kubenswrapper[7776]: I0219 03:12:35.000599 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543aef8d-960a-42c9-b1fd-954e2d024002-kube-api-access-lwxjf" (OuterVolumeSpecName: "kube-api-access-lwxjf") pod "543aef8d-960a-42c9-b1fd-954e2d024002" (UID: "543aef8d-960a-42c9-b1fd-954e2d024002"). InnerVolumeSpecName "kube-api-access-lwxjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:12:35.096586 master-0 kubenswrapper[7776]: I0219 03:12:35.096497 7776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-utilities\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:35.096586 master-0 kubenswrapper[7776]: I0219 03:12:35.096564 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwxjf\" (UniqueName: \"kubernetes.io/projected/543aef8d-960a-42c9-b1fd-954e2d024002-kube-api-access-lwxjf\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:35.135049 master-0 kubenswrapper[7776]: I0219 03:12:35.134993 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "543aef8d-960a-42c9-b1fd-954e2d024002" (UID: "543aef8d-960a-42c9-b1fd-954e2d024002"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:12:35.181333 master-0 kubenswrapper[7776]: I0219 03:12:35.181211 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9c2b"] Feb 19 03:12:35.198028 master-0 kubenswrapper[7776]: I0219 03:12:35.197984 7776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543aef8d-960a-42c9-b1fd-954e2d024002-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:35.706214 master-0 kubenswrapper[7776]: I0219 03:12:35.706126 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spsn7" event={"ID":"543aef8d-960a-42c9-b1fd-954e2d024002","Type":"ContainerDied","Data":"b502d1e6d3dfc70af9bc93fe4e3abd4f51e92d96f25b5329cdda631631649d28"} Feb 19 03:12:35.706214 master-0 kubenswrapper[7776]: I0219 03:12:35.706203 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-spsn7" Feb 19 03:12:35.706712 master-0 kubenswrapper[7776]: I0219 03:12:35.706240 7776 scope.go:117] "RemoveContainer" containerID="d78e62e78b262908533db4b07e0adc537376985d3006aaed9e0ce93af55f76bd" Feb 19 03:12:35.709556 master-0 kubenswrapper[7776]: I0219 03:12:35.709509 7776 generic.go:334] "Generic (PLEG): container finished" podID="76529f4c-70b1-4fcb-ba48-ae929228f9fc" containerID="908193b1182061490b900a4344890d721c956eb5ad5ebbda4500fde13ae2779d" exitCode=0 Feb 19 03:12:35.709695 master-0 kubenswrapper[7776]: I0219 03:12:35.709658 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9c2b" event={"ID":"76529f4c-70b1-4fcb-ba48-ae929228f9fc","Type":"ContainerDied","Data":"908193b1182061490b900a4344890d721c956eb5ad5ebbda4500fde13ae2779d"} Feb 19 03:12:35.709801 master-0 kubenswrapper[7776]: I0219 03:12:35.709777 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9c2b" event={"ID":"76529f4c-70b1-4fcb-ba48-ae929228f9fc","Type":"ContainerStarted","Data":"45197931f8b0fad8d3f78bcaed3a231713e7d574cb0f64bc503525eeb9919ca8"} Feb 19 03:12:35.711709 master-0 kubenswrapper[7776]: I0219 03:12:35.711361 7776 generic.go:334] "Generic (PLEG): container finished" podID="546cf649-8e0d-4c8a-a197-412db42e36b6" containerID="d5baecad6f9da9b942e37d06b6d9c3708141b102b9f1b98a457786b84bf2a523" exitCode=0 Feb 19 03:12:35.712550 master-0 kubenswrapper[7776]: I0219 03:12:35.712213 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqnbc" event={"ID":"546cf649-8e0d-4c8a-a197-412db42e36b6","Type":"ContainerDied","Data":"d5baecad6f9da9b942e37d06b6d9c3708141b102b9f1b98a457786b84bf2a523"} Feb 19 03:12:35.728406 master-0 kubenswrapper[7776]: I0219 03:12:35.728365 7776 scope.go:117] "RemoveContainer" containerID="02582b4f63c227af0cc551dd11287a8d643da3ea742ef92c54cd33d3e54ef1b5" Feb 19 03:12:35.749779 master-0 kubenswrapper[7776]: I0219 03:12:35.749738 7776 scope.go:117] "RemoveContainer" containerID="82448f06439f9a9b0f7eb645f89270cc41ab666e5f3da84a7cb3fe527c78ba9b" Feb 19 03:12:35.763107 master-0 kubenswrapper[7776]: I0219 03:12:35.763046 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spsn7"] Feb 19 03:12:35.773760 master-0 kubenswrapper[7776]: I0219 03:12:35.773721 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-spsn7"] Feb 19 03:12:35.847894 master-0 kubenswrapper[7776]: I0219 03:12:35.847807 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543aef8d-960a-42c9-b1fd-954e2d024002" path="/var/lib/kubelet/pods/543aef8d-960a-42c9-b1fd-954e2d024002/volumes" Feb 19 03:12:35.848488 master-0 kubenswrapper[7776]: I0219 03:12:35.848451 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76050135-a8a1-4968-9a00-2d251c17f8b8" path="/var/lib/kubelet/pods/76050135-a8a1-4968-9a00-2d251c17f8b8/volumes" Feb 19 03:12:36.048129 master-0 kubenswrapper[7776]: I0219 03:12:36.047966 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l"] Feb 19 03:12:36.048677 master-0 kubenswrapper[7776]: E0219 03:12:36.048202 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543aef8d-960a-42c9-b1fd-954e2d024002" containerName="extract-utilities" Feb 19 03:12:36.048677 master-0 kubenswrapper[7776]: 
I0219 03:12:36.048216 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="543aef8d-960a-42c9-b1fd-954e2d024002" containerName="extract-utilities" Feb 19 03:12:36.048677 master-0 kubenswrapper[7776]: E0219 03:12:36.048230 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543aef8d-960a-42c9-b1fd-954e2d024002" containerName="registry-server" Feb 19 03:12:36.048677 master-0 kubenswrapper[7776]: I0219 03:12:36.048238 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="543aef8d-960a-42c9-b1fd-954e2d024002" containerName="registry-server" Feb 19 03:12:36.048677 master-0 kubenswrapper[7776]: E0219 03:12:36.048281 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543aef8d-960a-42c9-b1fd-954e2d024002" containerName="extract-content" Feb 19 03:12:36.048677 master-0 kubenswrapper[7776]: I0219 03:12:36.048290 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="543aef8d-960a-42c9-b1fd-954e2d024002" containerName="extract-content" Feb 19 03:12:36.048677 master-0 kubenswrapper[7776]: I0219 03:12:36.048412 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="543aef8d-960a-42c9-b1fd-954e2d024002" containerName="registry-server" Feb 19 03:12:36.049303 master-0 kubenswrapper[7776]: I0219 03:12:36.049087 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:12:36.051554 master-0 kubenswrapper[7776]: I0219 03:12:36.051200 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 03:12:36.051554 master-0 kubenswrapper[7776]: I0219 03:12:36.051387 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-njtfp" Feb 19 03:12:36.065323 master-0 kubenswrapper[7776]: I0219 03:12:36.063583 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l"] Feb 19 03:12:36.115693 master-0 kubenswrapper[7776]: I0219 03:12:36.115644 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bab5125-f4d7-4940-891f-9bb6a2145fac-proxy-tls\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:12:36.116000 master-0 kubenswrapper[7776]: I0219 03:12:36.115965 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rhlw\" (UniqueName: \"kubernetes.io/projected/1bab5125-f4d7-4940-891f-9bb6a2145fac-kube-api-access-7rhlw\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:12:36.116166 master-0 kubenswrapper[7776]: I0219 03:12:36.116145 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1bab5125-f4d7-4940-891f-9bb6a2145fac-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:12:36.217818 master-0 
kubenswrapper[7776]: I0219 03:12:36.217765 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bab5125-f4d7-4940-891f-9bb6a2145fac-proxy-tls\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:12:36.218157 master-0 kubenswrapper[7776]: I0219 03:12:36.218135 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhlw\" (UniqueName: \"kubernetes.io/projected/1bab5125-f4d7-4940-891f-9bb6a2145fac-kube-api-access-7rhlw\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:12:36.218318 master-0 kubenswrapper[7776]: I0219 03:12:36.218303 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1bab5125-f4d7-4940-891f-9bb6a2145fac-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:12:36.219586 master-0 kubenswrapper[7776]: I0219 03:12:36.219539 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1bab5125-f4d7-4940-891f-9bb6a2145fac-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:12:36.220711 master-0 kubenswrapper[7776]: I0219 03:12:36.220665 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bab5125-f4d7-4940-891f-9bb6a2145fac-proxy-tls\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:12:36.238001 master-0 kubenswrapper[7776]: I0219 03:12:36.237923 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhlw\" (UniqueName: \"kubernetes.io/projected/1bab5125-f4d7-4940-891f-9bb6a2145fac-kube-api-access-7rhlw\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:12:36.385287 master-0 kubenswrapper[7776]: I0219 03:12:36.385201 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:12:36.721962 master-0 kubenswrapper[7776]: I0219 03:12:36.721902 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9c2b" event={"ID":"76529f4c-70b1-4fcb-ba48-ae929228f9fc","Type":"ContainerStarted","Data":"b48adbbfe50d897c7f889b72b88a99b1525c43d6ccc956e7ebfd7866abe147be"} Feb 19 03:12:36.724326 master-0 kubenswrapper[7776]: I0219 03:12:36.724289 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqnbc" event={"ID":"546cf649-8e0d-4c8a-a197-412db42e36b6","Type":"ContainerStarted","Data":"ef0a9007227e02f27c0fbdb751ad5c29449e9b1fd82d980295aad79e15e072c2"} Feb 19 03:12:37.070693 master-0 kubenswrapper[7776]: I0219 03:12:37.070593 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l"] Feb 19 03:12:37.095353 master-0 kubenswrapper[7776]: W0219 03:12:37.095291 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bab5125_f4d7_4940_891f_9bb6a2145fac.slice/crio-ba26fc62b4c67c05d10c1181444ae82a957f739cc50fff1b515c7ee8cf0d6126 WatchSource:0}: Error finding container ba26fc62b4c67c05d10c1181444ae82a957f739cc50fff1b515c7ee8cf0d6126: Status 404 returned error can't find the container with id ba26fc62b4c67c05d10c1181444ae82a957f739cc50fff1b515c7ee8cf0d6126 Feb 19 03:12:37.414961 master-0 kubenswrapper[7776]: I0219 03:12:37.414873 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g"] Feb 19 03:12:37.415644 master-0 kubenswrapper[7776]: I0219 03:12:37.415621 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g" Feb 19 03:12:37.417868 master-0 kubenswrapper[7776]: I0219 03:12:37.417819 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7b65dc9fcb-t6jnq"] Feb 19 03:12:37.418734 master-0 kubenswrapper[7776]: I0219 03:12:37.418706 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.420165 master-0 kubenswrapper[7776]: I0219 03:12:37.420132 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 03:12:37.421058 master-0 kubenswrapper[7776]: I0219 03:12:37.420991 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 03:12:37.421119 master-0 kubenswrapper[7776]: I0219 03:12:37.421007 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 03:12:37.421119 master-0 kubenswrapper[7776]: I0219 03:12:37.421089 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92"] Feb 19 03:12:37.421203 master-0 kubenswrapper[7776]: I0219 03:12:37.421173 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 03:12:37.421385 master-0 kubenswrapper[7776]: I0219 03:12:37.421358 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 03:12:37.421922 master-0 kubenswrapper[7776]: I0219 03:12:37.421886 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" Feb 19 03:12:37.422423 master-0 kubenswrapper[7776]: I0219 03:12:37.422395 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 03:12:37.423342 master-0 kubenswrapper[7776]: I0219 03:12:37.423315 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 19 03:12:37.433502 master-0 kubenswrapper[7776]: I0219 03:12:37.433452 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92"] Feb 19 03:12:37.434215 master-0 kubenswrapper[7776]: I0219 03:12:37.434188 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfgk\" (UniqueName: \"kubernetes.io/projected/a71c6d42-5ff9-4e96-900c-6e2166bbc9e3-kube-api-access-zrfgk\") pod \"network-check-source-58fb6744f5-mh46g\" (UID: \"a71c6d42-5ff9-4e96-900c-6e2166bbc9e3\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g" Feb 19 03:12:37.434291 master-0 kubenswrapper[7776]: I0219 03:12:37.434235 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9hn\" (UniqueName: \"kubernetes.io/projected/76470062-ab83-47ed-a669-deeb71996548-kube-api-access-bj9hn\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.434291 master-0 kubenswrapper[7776]: I0219 03:12:37.434277 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/ed2b5ced-d986-4622-9e0a-d39363629408-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-4ms92\" (UID: \"ed2b5ced-d986-4622-9e0a-d39363629408\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" Feb 19 03:12:37.434365 master-0 kubenswrapper[7776]: I0219 03:12:37.434308 7776 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-metrics-certs\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.434365 master-0 kubenswrapper[7776]: I0219 03:12:37.434330 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-stats-auth\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.434365 master-0 kubenswrapper[7776]: I0219 03:12:37.434348 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-default-certificate\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.434522 master-0 kubenswrapper[7776]: I0219 03:12:37.434484 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76470062-ab83-47ed-a669-deeb71996548-service-ca-bundle\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.436238 master-0 kubenswrapper[7776]: I0219 03:12:37.436207 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g"] Feb 19 03:12:37.534992 master-0 kubenswrapper[7776]: I0219 03:12:37.534923 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfgk\" (UniqueName: \"kubernetes.io/projected/a71c6d42-5ff9-4e96-900c-6e2166bbc9e3-kube-api-access-zrfgk\") pod \"network-check-source-58fb6744f5-mh46g\" (UID: \"a71c6d42-5ff9-4e96-900c-6e2166bbc9e3\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g" Feb 19 03:12:37.534992 master-0 kubenswrapper[7776]: I0219 03:12:37.534975 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9hn\" (UniqueName: \"kubernetes.io/projected/76470062-ab83-47ed-a669-deeb71996548-kube-api-access-bj9hn\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.534992 master-0 kubenswrapper[7776]: I0219 03:12:37.534997 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/ed2b5ced-d986-4622-9e0a-d39363629408-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-4ms92\" (UID: \"ed2b5ced-d986-4622-9e0a-d39363629408\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" Feb 19 03:12:37.535315 master-0 kubenswrapper[7776]: I0219 03:12:37.535185 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-metrics-certs\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: 
\"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.535315 master-0 kubenswrapper[7776]: I0219 03:12:37.535298 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-stats-auth\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.535531 master-0 kubenswrapper[7776]: I0219 03:12:37.535497 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-default-certificate\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.535655 master-0 kubenswrapper[7776]: I0219 03:12:37.535626 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76470062-ab83-47ed-a669-deeb71996548-service-ca-bundle\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.536574 master-0 kubenswrapper[7776]: I0219 03:12:37.536543 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76470062-ab83-47ed-a669-deeb71996548-service-ca-bundle\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.538948 master-0 kubenswrapper[7776]: I0219 03:12:37.538911 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/ed2b5ced-d986-4622-9e0a-d39363629408-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-4ms92\" (UID: \"ed2b5ced-d986-4622-9e0a-d39363629408\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" Feb 19 03:12:37.539001 master-0 kubenswrapper[7776]: I0219 03:12:37.538981 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-stats-auth\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.539875 master-0 kubenswrapper[7776]: I0219 03:12:37.539835 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-metrics-certs\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.540969 master-0 kubenswrapper[7776]: I0219 03:12:37.540926 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-default-certificate\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.552020 master-0 kubenswrapper[7776]: I0219 03:12:37.551920 7776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bj9hn\" (UniqueName: \"kubernetes.io/projected/76470062-ab83-47ed-a669-deeb71996548-kube-api-access-bj9hn\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.553267 master-0 kubenswrapper[7776]: I0219 03:12:37.553221 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfgk\" (UniqueName: \"kubernetes.io/projected/a71c6d42-5ff9-4e96-900c-6e2166bbc9e3-kube-api-access-zrfgk\") pod \"network-check-source-58fb6744f5-mh46g\" (UID: \"a71c6d42-5ff9-4e96-900c-6e2166bbc9e3\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g" Feb 19 03:12:37.732425 master-0 kubenswrapper[7776]: I0219 03:12:37.732355 7776 generic.go:334] "Generic (PLEG): container finished" podID="546cf649-8e0d-4c8a-a197-412db42e36b6" containerID="ef0a9007227e02f27c0fbdb751ad5c29449e9b1fd82d980295aad79e15e072c2" exitCode=0 Feb 19 03:12:37.732884 master-0 kubenswrapper[7776]: I0219 03:12:37.732419 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqnbc" event={"ID":"546cf649-8e0d-4c8a-a197-412db42e36b6","Type":"ContainerDied","Data":"ef0a9007227e02f27c0fbdb751ad5c29449e9b1fd82d980295aad79e15e072c2"} Feb 19 03:12:37.734740 master-0 kubenswrapper[7776]: I0219 03:12:37.734291 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g" Feb 19 03:12:37.737531 master-0 kubenswrapper[7776]: I0219 03:12:37.737481 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" event={"ID":"1bab5125-f4d7-4940-891f-9bb6a2145fac","Type":"ContainerStarted","Data":"06b73c6357719bfe669c305710619758b412fd14c971022a4d4f75d8ee13b201"} Feb 19 03:12:37.737590 master-0 kubenswrapper[7776]: I0219 03:12:37.737543 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" event={"ID":"1bab5125-f4d7-4940-891f-9bb6a2145fac","Type":"ContainerStarted","Data":"3f816c1572ccfc3f3a0c2886a9f8b1494e8a075f0736d90053bc4b30ff1b69bc"} Feb 19 03:12:37.737590 master-0 kubenswrapper[7776]: I0219 03:12:37.737558 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" event={"ID":"1bab5125-f4d7-4940-891f-9bb6a2145fac","Type":"ContainerStarted","Data":"ba26fc62b4c67c05d10c1181444ae82a957f739cc50fff1b515c7ee8cf0d6126"} Feb 19 03:12:37.739949 master-0 kubenswrapper[7776]: I0219 03:12:37.739909 7776 generic.go:334] "Generic (PLEG): container finished" podID="76529f4c-70b1-4fcb-ba48-ae929228f9fc" containerID="b48adbbfe50d897c7f889b72b88a99b1525c43d6ccc956e7ebfd7866abe147be" exitCode=0 Feb 19 03:12:37.740034 master-0 kubenswrapper[7776]: I0219 03:12:37.739958 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9c2b" event={"ID":"76529f4c-70b1-4fcb-ba48-ae929228f9fc","Type":"ContainerDied","Data":"b48adbbfe50d897c7f889b72b88a99b1525c43d6ccc956e7ebfd7866abe147be"} Feb 19 03:12:37.760743 master-0 kubenswrapper[7776]: I0219 03:12:37.760679 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:37.772866 master-0 kubenswrapper[7776]: I0219 03:12:37.772810 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" podStartSLOduration=1.772791867 podStartE2EDuration="1.772791867s" podCreationTimestamp="2026-02-19 03:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:12:37.771236532 +0000 UTC m=+464.110921090" watchObservedRunningTime="2026-02-19 03:12:37.772791867 +0000 UTC m=+464.112476385" Feb 19 03:12:37.784828 master-0 kubenswrapper[7776]: I0219 03:12:37.784771 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" Feb 19 03:12:37.811795 master-0 kubenswrapper[7776]: W0219 03:12:37.811660 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76470062_ab83_47ed_a669_deeb71996548.slice/crio-9fccc7356f4c0fc6ca6003f16e1a3945d087e393bfff22e084766d407a7387c5 WatchSource:0}: Error finding container 9fccc7356f4c0fc6ca6003f16e1a3945d087e393bfff22e084766d407a7387c5: Status 404 returned error can't find the container with id 9fccc7356f4c0fc6ca6003f16e1a3945d087e393bfff22e084766d407a7387c5 Feb 19 03:12:38.059936 master-0 kubenswrapper[7776]: I0219 03:12:38.059875 7776 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 03:12:38.169970 master-0 kubenswrapper[7776]: I0219 03:12:38.169913 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g"] Feb 19 03:12:38.188866 master-0 kubenswrapper[7776]: W0219 03:12:38.188812 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda71c6d42_5ff9_4e96_900c_6e2166bbc9e3.slice/crio-5c820d0ae9471b6671d41e47749616c410e4703c6cd54cc32cf06336c4e2c81b WatchSource:0}: Error finding container 5c820d0ae9471b6671d41e47749616c410e4703c6cd54cc32cf06336c4e2c81b: Status 404 returned error can't find the container with id 5c820d0ae9471b6671d41e47749616c410e4703c6cd54cc32cf06336c4e2c81b Feb 19 03:12:38.247141 master-0 kubenswrapper[7776]: I0219 03:12:38.247078 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92"] Feb 19 03:12:38.747346 master-0 kubenswrapper[7776]: I0219 03:12:38.747298 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9c2b" event={"ID":"76529f4c-70b1-4fcb-ba48-ae929228f9fc","Type":"ContainerStarted","Data":"642548a92bf507b86f80127ce38f881d64efa71a32321c4667859fccaf9e2b7a"} Feb 19 03:12:38.750267 master-0 kubenswrapper[7776]: I0219 03:12:38.750204 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqnbc" event={"ID":"546cf649-8e0d-4c8a-a197-412db42e36b6","Type":"ContainerStarted","Data":"5da7041787d0ac7439416da2b79084f23c4dcfeb808ec8aa3550f3e1b08b6518"} Feb 19 03:12:38.754329 master-0 kubenswrapper[7776]: I0219 03:12:38.754285 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" 
event={"ID":"76470062-ab83-47ed-a669-deeb71996548","Type":"ContainerStarted","Data":"9fccc7356f4c0fc6ca6003f16e1a3945d087e393bfff22e084766d407a7387c5"} Feb 19 03:12:38.755829 master-0 kubenswrapper[7776]: I0219 03:12:38.755796 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g" event={"ID":"a71c6d42-5ff9-4e96-900c-6e2166bbc9e3","Type":"ContainerStarted","Data":"38d758b95eaa1aadab95a2d238649b742d620a8b5721fca13fce04712e502926"} Feb 19 03:12:38.755829 master-0 kubenswrapper[7776]: I0219 03:12:38.755825 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g" event={"ID":"a71c6d42-5ff9-4e96-900c-6e2166bbc9e3","Type":"ContainerStarted","Data":"5c820d0ae9471b6671d41e47749616c410e4703c6cd54cc32cf06336c4e2c81b"} Feb 19 03:12:38.759131 master-0 kubenswrapper[7776]: I0219 03:12:38.759069 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" event={"ID":"ed2b5ced-d986-4622-9e0a-d39363629408","Type":"ContainerStarted","Data":"a2cbe0145530499aa6f2ee8bea7d745549e79916137a2b455baf26f9bb8aca75"} Feb 19 03:12:38.782690 master-0 kubenswrapper[7776]: I0219 03:12:38.782597 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v9c2b" podStartSLOduration=2.318702731 podStartE2EDuration="4.782584211s" podCreationTimestamp="2026-02-19 03:12:34 +0000 UTC" firstStartedPulling="2026-02-19 03:12:35.71138911 +0000 UTC m=+462.051073678" lastFinishedPulling="2026-02-19 03:12:38.17527064 +0000 UTC m=+464.514955158" observedRunningTime="2026-02-19 03:12:38.780159521 +0000 UTC m=+465.119844039" watchObservedRunningTime="2026-02-19 03:12:38.782584211 +0000 UTC m=+465.122268729" Feb 19 03:12:38.799466 master-0 kubenswrapper[7776]: I0219 03:12:38.798772 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nqnbc" podStartSLOduration=3.391271401 podStartE2EDuration="5.798756622s" podCreationTimestamp="2026-02-19 03:12:33 +0000 UTC" firstStartedPulling="2026-02-19 03:12:35.713867732 +0000 UTC m=+462.053552250" lastFinishedPulling="2026-02-19 03:12:38.121352953 +0000 UTC m=+464.461037471" observedRunningTime="2026-02-19 03:12:38.798244437 +0000 UTC m=+465.137928955" watchObservedRunningTime="2026-02-19 03:12:38.798756622 +0000 UTC m=+465.138441140" Feb 19 03:12:38.815862 master-0 kubenswrapper[7776]: I0219 03:12:38.815780 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g" podStartSLOduration=512.815760676 podStartE2EDuration="8m32.815760676s" podCreationTimestamp="2026-02-19 03:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:12:38.812014537 +0000 UTC m=+465.151699065" watchObservedRunningTime="2026-02-19 03:12:38.815760676 +0000 UTC m=+465.155445204" Feb 19 03:12:40.279125 master-0 kubenswrapper[7776]: I0219 03:12:40.279081 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-cjz9l_b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/authentication-operator/0.log" Feb 19 03:12:40.487861 master-0 kubenswrapper[7776]: I0219 03:12:40.487806 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-cjz9l_b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/authentication-operator/1.log" Feb 19 03:12:40.501530 master-0 kubenswrapper[7776]: I0219 03:12:40.501463 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-m64bf"] Feb 19 03:12:40.502221 master-0 kubenswrapper[7776]: I0219 03:12:40.502194 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:12:40.504117 master-0 kubenswrapper[7776]: I0219 03:12:40.504081 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-g8fsd" Feb 19 03:12:40.504744 master-0 kubenswrapper[7776]: I0219 03:12:40.504720 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 03:12:40.504797 master-0 kubenswrapper[7776]: I0219 03:12:40.504765 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 03:12:40.695063 master-0 kubenswrapper[7776]: I0219 03:12:40.695017 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxfd9\" (UniqueName: \"kubernetes.io/projected/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-kube-api-access-qxfd9\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:12:40.695469 master-0 kubenswrapper[7776]: I0219 03:12:40.695449 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-certs\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:12:40.695607 master-0 kubenswrapper[7776]: I0219 03:12:40.695591 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-node-bootstrap-token\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:12:40.774360 master-0 kubenswrapper[7776]: I0219 03:12:40.774305 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" event={"ID":"76470062-ab83-47ed-a669-deeb71996548","Type":"ContainerStarted","Data":"fc23281c8544d5ae223b75148a35d1646e5aae76cd18024121c83e27448b516d"} Feb 19 03:12:40.776229 master-0 kubenswrapper[7776]: I0219 03:12:40.776005 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" event={"ID":"ed2b5ced-d986-4622-9e0a-d39363629408","Type":"ContainerStarted","Data":"0cc33cbf618043aff9c8619ca792a114e5259511a82a1ec805d38ee833b2f9cb"} Feb 19 03:12:40.776490 master-0 kubenswrapper[7776]: I0219 03:12:40.776471 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" Feb 19 03:12:40.781697 master-0 kubenswrapper[7776]: I0219 03:12:40.781638 7776 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" Feb 19 03:12:40.796399 master-0 kubenswrapper[7776]: I0219 03:12:40.796351 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxfd9\" (UniqueName: \"kubernetes.io/projected/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-kube-api-access-qxfd9\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:12:40.796695 master-0 kubenswrapper[7776]: I0219 03:12:40.796677 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-certs\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:12:40.797441 master-0 kubenswrapper[7776]: I0219 03:12:40.797425 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-node-bootstrap-token\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:12:40.799228 master-0 kubenswrapper[7776]: I0219 03:12:40.799115 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podStartSLOduration=432.241432635 podStartE2EDuration="7m14.799099783s" podCreationTimestamp="2026-02-19 03:05:26 +0000 UTC" firstStartedPulling="2026-02-19 03:12:37.817065204 +0000 UTC m=+464.156749722" lastFinishedPulling="2026-02-19 03:12:40.374732362 +0000 UTC m=+466.714416870" observedRunningTime="2026-02-19 03:12:40.79588205 +0000 UTC m=+467.135566648" watchObservedRunningTime="2026-02-19 03:12:40.799099783 +0000 UTC m=+467.138784301" Feb 19 03:12:40.802521 master-0 kubenswrapper[7776]: I0219 03:12:40.802473 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-certs\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:12:40.802656 master-0 kubenswrapper[7776]: I0219 03:12:40.802600 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-node-bootstrap-token\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:12:40.818901 master-0 kubenswrapper[7776]: I0219 03:12:40.818830 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxfd9\" (UniqueName: \"kubernetes.io/projected/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-kube-api-access-qxfd9\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:12:40.819513 master-0 kubenswrapper[7776]: I0219 03:12:40.819436 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" 
podStartSLOduration=419.70278142 podStartE2EDuration="7m1.819414464s" podCreationTimestamp="2026-02-19 03:05:39 +0000 UTC" firstStartedPulling="2026-02-19 03:12:38.258069418 +0000 UTC m=+464.597753936" lastFinishedPulling="2026-02-19 03:12:40.374702462 +0000 UTC m=+466.714386980" observedRunningTime="2026-02-19 03:12:40.816554461 +0000 UTC m=+467.156238989" watchObservedRunningTime="2026-02-19 03:12:40.819414464 +0000 UTC m=+467.159098992" Feb 19 03:12:40.827379 master-0 kubenswrapper[7776]: I0219 03:12:40.826895 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:12:40.855367 master-0 kubenswrapper[7776]: W0219 03:12:40.854901 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ca08cc0_cc64_4e13_9465_c9b0bfacb60d.slice/crio-a9581d1c5f8271fb515c6059b20bafd4d644e9f547a789be9ede7138665e2db3 WatchSource:0}: Error finding container a9581d1c5f8271fb515c6059b20bafd4d644e9f547a789be9ede7138665e2db3: Status 404 returned error can't find the container with id a9581d1c5f8271fb515c6059b20bafd4d644e9f547a789be9ede7138665e2db3 Feb 19 03:12:40.876595 master-0 kubenswrapper[7776]: I0219 03:12:40.875866 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-85f97c6ffb-qfcnk_ace60ebd-e405-4fd2-96fe-7b16a9e11a07/fix-audit-permissions/0.log" Feb 19 03:12:40.999987 master-0 kubenswrapper[7776]: I0219 03:12:40.999741 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:12:41.000188 master-0 kubenswrapper[7776]: E0219 03:12:41.000130 7776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 19 03:12:41.000279 master-0 kubenswrapper[7776]: E0219 03:12:41.000231 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert podName:33bb562f-84e7-4fcb-b008-416c09a5ecf0 nodeName:}" failed. No retries permitted until 2026-02-19 03:13:45.000207742 +0000 UTC m=+531.339892290 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert") pod "cluster-autoscaler-operator-86b8dc6d6-pd8lj" (UID: "33bb562f-84e7-4fcb-b008-416c09a5ecf0") : secret "cluster-autoscaler-operator-cert" not found Feb 19 03:12:41.079090 master-0 kubenswrapper[7776]: I0219 03:12:41.078971 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-85f97c6ffb-qfcnk_ace60ebd-e405-4fd2-96fe-7b16a9e11a07/oauth-apiserver/0.log" Feb 19 03:12:41.101703 master-0 kubenswrapper[7776]: I0219 03:12:41.101622 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls\") pod \"machine-approver-798b897698-hmpmj\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:12:41.101703 master-0 kubenswrapper[7776]: I0219 03:12:41.101691 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:12:41.101998 master-0 kubenswrapper[7776]: I0219 03:12:41.101941 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:12:41.102042 master-0 kubenswrapper[7776]: E0219 03:12:41.102030 7776 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 19 03:12:41.102102 master-0 kubenswrapper[7776]: E0219 03:12:41.102081 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls podName:59cea4cb-6374-49b6-97b3-d8a19cc1860f nodeName:}" failed. No retries permitted until 2026-02-19 03:13:45.102066064 +0000 UTC m=+531.441750582 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-hl874" (UID: "59cea4cb-6374-49b6-97b3-d8a19cc1860f") : secret "samples-operator-tls" not found Feb 19 03:12:41.102162 master-0 kubenswrapper[7776]: E0219 03:12:41.102118 7776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 19 03:12:41.102244 master-0 kubenswrapper[7776]: E0219 03:12:41.102224 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:13:45.102200108 +0000 UTC m=+531.441884656 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : secret "cloud-credential-operator-serving-cert" not found Feb 19 03:12:41.102369 master-0 kubenswrapper[7776]: E0219 03:12:41.102294 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:12:41.102495 master-0 kubenswrapper[7776]: E0219 03:12:41.102450 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls podName:afee48d5-7b45-42ef-acc8-e591ec479974 nodeName:}" failed. No retries permitted until 2026-02-19 03:13:45.102424424 +0000 UTC m=+531.442108992 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls") pod "machine-approver-798b897698-hmpmj" (UID: "afee48d5-7b45-42ef-acc8-e591ec479974") : secret "machine-approver-tls" not found Feb 19 03:12:41.332864 master-0 kubenswrapper[7776]: I0219 03:12:41.275304 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/3.log" Feb 19 03:12:41.332864 master-0 kubenswrapper[7776]: I0219 03:12:41.303346 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:41.332864 master-0 kubenswrapper[7776]: I0219 03:12:41.303411 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:41.385605 master-0 kubenswrapper[7776]: I0219 03:12:41.385100 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:41.478141 master-0 kubenswrapper[7776]: I0219 03:12:41.478099 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/4.log" Feb 19 03:12:41.490717 master-0 kubenswrapper[7776]: I0219 03:12:41.490647 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-tkbxr"] Feb 19 03:12:41.491863 master-0 kubenswrapper[7776]: I0219 03:12:41.491821 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.495235 master-0 kubenswrapper[7776]: I0219 03:12:41.495194 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-6bg2z" Feb 19 03:12:41.495623 master-0 kubenswrapper[7776]: I0219 03:12:41.495582 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 19 03:12:41.495764 master-0 kubenswrapper[7776]: I0219 03:12:41.495743 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 19 03:12:41.495956 master-0 kubenswrapper[7776]: I0219 03:12:41.495934 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 19 03:12:41.503513 master-0 kubenswrapper[7776]: I0219 03:12:41.503337 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-tkbxr"] Feb 19 03:12:41.508406 master-0 kubenswrapper[7776]: I0219 03:12:41.508356 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.508526 master-0 kubenswrapper[7776]: I0219 03:12:41.508449 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.508576 master-0 kubenswrapper[7776]: I0219 03:12:41.508536 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e81865-21fa-4e35-a870-738c13ac5b70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.508820 master-0 kubenswrapper[7776]: I0219 03:12:41.508772 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tgff\" (UniqueName: \"kubernetes.io/projected/e2e81865-21fa-4e35-a870-738c13ac5b70-kube-api-access-5tgff\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.610066 master-0 kubenswrapper[7776]: I0219 03:12:41.609905 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.610066 master-0 kubenswrapper[7776]: I0219 03:12:41.610003 7776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e81865-21fa-4e35-a870-738c13ac5b70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.610066 master-0 kubenswrapper[7776]: I0219 03:12:41.610041 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tgff\" (UniqueName: \"kubernetes.io/projected/e2e81865-21fa-4e35-a870-738c13ac5b70-kube-api-access-5tgff\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.610396 master-0 kubenswrapper[7776]: I0219 03:12:41.610080 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.610590 master-0 kubenswrapper[7776]: E0219 03:12:41.610533 7776 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 19 03:12:41.610663 master-0 kubenswrapper[7776]: E0219 03:12:41.610645 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. No retries permitted until 2026-02-19 03:12:42.110620063 +0000 UTC m=+468.450304581 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : secret "prometheus-operator-tls" not found Feb 19 03:12:41.611330 master-0 kubenswrapper[7776]: I0219 03:12:41.611284 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e81865-21fa-4e35-a870-738c13ac5b70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.613230 master-0 kubenswrapper[7776]: I0219 03:12:41.613178 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.632421 master-0 kubenswrapper[7776]: I0219 03:12:41.632360 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tgff\" (UniqueName: \"kubernetes.io/projected/e2e81865-21fa-4e35-a870-738c13ac5b70-kube-api-access-5tgff\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:41.761340 master-0 kubenswrapper[7776]: I0219 03:12:41.761267 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:41.763363 master-0 kubenswrapper[7776]: I0219 03:12:41.763321 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:41.763363 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:41.763363 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:41.763363 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:41.763578 master-0 kubenswrapper[7776]: I0219 03:12:41.763375 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:41.781832 master-0 kubenswrapper[7776]: I0219 03:12:41.781751 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m64bf" event={"ID":"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d","Type":"ContainerStarted","Data":"9e24a313def2e8cc6bfe6ea74d440f9e7ce1c6090dba902dbb4c65d3a7e10678"} Feb 19 03:12:41.781832 master-0 kubenswrapper[7776]: I0219 03:12:41.781816 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m64bf" event={"ID":"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d","Type":"ContainerStarted","Data":"a9581d1c5f8271fb515c6059b20bafd4d644e9f547a789be9ede7138665e2db3"} Feb 19 03:12:41.816567 master-0 kubenswrapper[7776]: I0219 03:12:41.816515 7776 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:12:41.893233 master-0 kubenswrapper[7776]: I0219 03:12:41.893172 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:41.893446 master-0 kubenswrapper[7776]: I0219 03:12:41.893246 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:41.929435 master-0 kubenswrapper[7776]: I0219 03:12:41.929338 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:41.962196 master-0 kubenswrapper[7776]: I0219 03:12:41.962138 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/setup/0.log" Feb 19 03:12:41.995852 master-0 kubenswrapper[7776]: I0219 03:12:41.995795 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-ensure-env-vars/0.log" Feb 19 03:12:42.026875 master-0 kubenswrapper[7776]: I0219 03:12:42.026786 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-m64bf" podStartSLOduration=2.026765204 podStartE2EDuration="2.026765204s" podCreationTimestamp="2026-02-19 03:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:12:41.997882524 +0000 UTC m=+468.337567032" watchObservedRunningTime="2026-02-19 03:12:42.026765204 +0000 UTC m=+468.366449742" Feb 19 03:12:42.075464 master-0 kubenswrapper[7776]: I0219 03:12:42.075401 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-resources-copy/0.log" Feb 19 03:12:42.117189 master-0 kubenswrapper[7776]: I0219 03:12:42.117110 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:42.117534 master-0 kubenswrapper[7776]: E0219 03:12:42.117352 7776 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 19 03:12:42.117534 master-0 kubenswrapper[7776]: E0219 03:12:42.117443 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. No retries permitted until 2026-02-19 03:12:43.1174222 +0000 UTC m=+469.457106728 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : secret "prometheus-operator-tls" not found Feb 19 03:12:42.264531 master-0 kubenswrapper[7776]: I0219 03:12:42.264344 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj"] Feb 19 03:12:42.264917 master-0 kubenswrapper[7776]: E0219 03:12:42.264861 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-approver-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" podUID="afee48d5-7b45-42ef-acc8-e591ec479974" Feb 19 03:12:42.277548 master-0 kubenswrapper[7776]: I0219 03:12:42.277500 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 19 03:12:42.481088 master-0 kubenswrapper[7776]: I0219 03:12:42.481010 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log" Feb 19 03:12:42.677034 master-0 kubenswrapper[7776]: I0219 03:12:42.676970 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 19 03:12:42.724194 master-0 kubenswrapper[7776]: I0219 03:12:42.724127 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:12:42.724383 master-0 kubenswrapper[7776]: E0219 03:12:42.724344 7776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 19 03:12:42.724440 master-0 kubenswrapper[7776]: E0219 03:12:42.724430 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:13:46.724412822 +0000 UTC m=+533.064097340 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : secret "machine-api-operator-tls" not found Feb 19 03:12:42.763584 master-0 kubenswrapper[7776]: I0219 03:12:42.763511 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:42.763584 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:42.763584 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:42.763584 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:42.763836 master-0 kubenswrapper[7776]: I0219 03:12:42.763591 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:42.796900 master-0 kubenswrapper[7776]: I0219 03:12:42.796830 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:12:42.808192 master-0 kubenswrapper[7776]: I0219 03:12:42.808142 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:12:42.845282 master-0 kubenswrapper[7776]: I0219 03:12:42.843768 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:12:42.876389 master-0 kubenswrapper[7776]: I0219 03:12:42.876342 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-readyz/0.log" Feb 19 03:12:42.928374 master-0 kubenswrapper[7776]: I0219 03:12:42.928113 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jldf2\" (UniqueName: \"kubernetes.io/projected/afee48d5-7b45-42ef-acc8-e591ec479974-kube-api-access-jldf2\") pod \"afee48d5-7b45-42ef-acc8-e591ec479974\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " Feb 19 03:12:42.928374 master-0 kubenswrapper[7776]: I0219 03:12:42.928175 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-auth-proxy-config\") pod \"afee48d5-7b45-42ef-acc8-e591ec479974\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " Feb 19 03:12:42.928374 master-0 kubenswrapper[7776]: I0219 03:12:42.928203 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-config\") pod \"afee48d5-7b45-42ef-acc8-e591ec479974\" (UID: \"afee48d5-7b45-42ef-acc8-e591ec479974\") " Feb 19 03:12:42.929201 master-0 kubenswrapper[7776]: I0219 03:12:42.928938 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-config" (OuterVolumeSpecName: "config") pod "afee48d5-7b45-42ef-acc8-e591ec479974" (UID: "afee48d5-7b45-42ef-acc8-e591ec479974"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:12:42.929201 master-0 kubenswrapper[7776]: I0219 03:12:42.928987 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "afee48d5-7b45-42ef-acc8-e591ec479974" (UID: "afee48d5-7b45-42ef-acc8-e591ec479974"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:12:42.930776 master-0 kubenswrapper[7776]: I0219 03:12:42.930730 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afee48d5-7b45-42ef-acc8-e591ec479974-kube-api-access-jldf2" (OuterVolumeSpecName: "kube-api-access-jldf2") pod "afee48d5-7b45-42ef-acc8-e591ec479974" (UID: "afee48d5-7b45-42ef-acc8-e591ec479974"). InnerVolumeSpecName "kube-api-access-jldf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:12:43.029447 master-0 kubenswrapper[7776]: I0219 03:12:43.029399 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jldf2\" (UniqueName: \"kubernetes.io/projected/afee48d5-7b45-42ef-acc8-e591ec479974-kube-api-access-jldf2\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:43.029447 master-0 kubenswrapper[7776]: I0219 03:12:43.029439 7776 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:43.029447 master-0 kubenswrapper[7776]: I0219 03:12:43.029449 7776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afee48d5-7b45-42ef-acc8-e591ec479974-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:43.074586 master-0 kubenswrapper[7776]: I0219 03:12:43.074540 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 19 03:12:43.130549 master-0 kubenswrapper[7776]: I0219 03:12:43.130488 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:43.130808 master-0 kubenswrapper[7776]: E0219 03:12:43.130663 7776 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 19 03:12:43.130808 master-0 kubenswrapper[7776]: E0219 03:12:43.130727 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. No retries permitted until 2026-02-19 03:12:45.130708827 +0000 UTC m=+471.470393345 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : secret "prometheus-operator-tls" not found Feb 19 03:12:43.279248 master-0 kubenswrapper[7776]: I0219 03:12:43.279107 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_2561caa0-5f79-496e-8fa7-a9692dca20be/installer/0.log" Feb 19 03:12:43.475538 master-0 kubenswrapper[7776]: I0219 03:12:43.475499 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/2.log" Feb 19 03:12:43.676560 master-0 kubenswrapper[7776]: I0219 03:12:43.676517 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/3.log" Feb 19 03:12:43.764062 master-0 kubenswrapper[7776]: I0219 03:12:43.764011 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:43.764062 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:43.764062 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:43.764062 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:43.764487 master-0 kubenswrapper[7776]: I0219 03:12:43.764451 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:43.802127 master-0 kubenswrapper[7776]: I0219 03:12:43.802088 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj" Feb 19 03:12:44.125567 master-0 kubenswrapper[7776]: I0219 03:12:44.125514 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:44.125934 master-0 kubenswrapper[7776]: I0219 03:12:44.125906 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:44.191367 master-0 kubenswrapper[7776]: I0219 03:12:44.191330 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:44.387660 master-0 kubenswrapper[7776]: I0219 03:12:44.387516 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/setup/0.log" Feb 19 03:12:44.418188 master-0 kubenswrapper[7776]: I0219 03:12:44.418111 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver/0.log" Feb 19 03:12:44.600972 master-0 kubenswrapper[7776]: I0219 03:12:44.600896 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj"] Feb 19 03:12:44.610815 master-0 kubenswrapper[7776]: I0219 03:12:44.610695 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver-insecure-readyz/0.log" Feb 19 03:12:44.612307 master-0 kubenswrapper[7776]: I0219 03:12:44.612233 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj"] Feb 19 03:12:44.629277 master-0 kubenswrapper[7776]: I0219 03:12:44.626173 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1bddb3a1-41bd-4314-bfb0-3c72ca14200f/installer/0.log" Feb 19 03:12:44.652125 master-0 kubenswrapper[7776]: I0219 03:12:44.652004 7776 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afee48d5-7b45-42ef-acc8-e591ec479974-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:44.686893 master-0 kubenswrapper[7776]: I0219 03:12:44.686839 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc"] Feb 19 03:12:44.688280 master-0 kubenswrapper[7776]: I0219 03:12:44.688205 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:44.688635 master-0 kubenswrapper[7776]: I0219 03:12:44.688591 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4/installer/0.log" Feb 19 03:12:44.690533 master-0 kubenswrapper[7776]: I0219 03:12:44.690501 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 03:12:44.690777 master-0 kubenswrapper[7776]: I0219 03:12:44.690756 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 03:12:44.691117 master-0 kubenswrapper[7776]: I0219 03:12:44.691069 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 03:12:44.691117 master-0 kubenswrapper[7776]: I0219 03:12:44.691082 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 03:12:44.691288 master-0 kubenswrapper[7776]: I0219 03:12:44.691138 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-7wq8f" Feb 19 03:12:44.691288 master-0 kubenswrapper[7776]: I0219 03:12:44.691230 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 03:12:44.752978 master-0 kubenswrapper[7776]: I0219 03:12:44.752912 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:44.752978 master-0 kubenswrapper[7776]: I0219 03:12:44.752975 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:44.752978 master-0 kubenswrapper[7776]: I0219 03:12:44.752996 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:44.753413 master-0 kubenswrapper[7776]: I0219 03:12:44.753027 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78j6f\" (UniqueName: \"kubernetes.io/projected/92804daf-1fd0-4008-afff-4f9bc362990b-kube-api-access-78j6f\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:44.763632 master-0 kubenswrapper[7776]: I0219 03:12:44.763576 7776 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:44.763632 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:44.763632 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:44.763632 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:44.763881 master-0 kubenswrapper[7776]: I0219 03:12:44.763652 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:44.769861 master-0 kubenswrapper[7776]: I0219 03:12:44.769815 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:44.769954 master-0 kubenswrapper[7776]: I0219 03:12:44.769884 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:44.826285 master-0 kubenswrapper[7776]: I0219 03:12:44.826221 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:44.844490 master-0 kubenswrapper[7776]: I0219 03:12:44.844446 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:12:44.854005 master-0 kubenswrapper[7776]: I0219 03:12:44.853912 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:44.854367 master-0 kubenswrapper[7776]: I0219 03:12:44.854288 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:44.854482 master-0 kubenswrapper[7776]: I0219 03:12:44.854442 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78j6f\" (UniqueName: \"kubernetes.io/projected/92804daf-1fd0-4008-afff-4f9bc362990b-kube-api-access-78j6f\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:44.854698 master-0 kubenswrapper[7776]: E0219 03:12:44.854605 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:12:44.854698 master-0 kubenswrapper[7776]: E0219 03:12:44.854713 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:12:45.354685826 +0000 UTC m=+471.694370344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : secret "machine-approver-tls" not found Feb 19 03:12:44.856156 master-0 kubenswrapper[7776]: I0219 03:12:44.855641 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:44.856156 master-0 kubenswrapper[7776]: I0219 03:12:44.855871 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:44.856453 master-0 kubenswrapper[7776]: I0219 03:12:44.856364 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:44.872489 master-0 kubenswrapper[7776]: I0219 03:12:44.872451 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:12:44.970484 master-0 kubenswrapper[7776]: I0219 03:12:44.970226 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/2.log" Feb 19 03:12:44.977518 master-0 kubenswrapper[7776]: I0219 03:12:44.977460 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78j6f\" (UniqueName: \"kubernetes.io/projected/92804daf-1fd0-4008-afff-4f9bc362990b-kube-api-access-78j6f\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:45.168098 master-0 kubenswrapper[7776]: I0219 03:12:45.168011 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:45.168654 master-0 kubenswrapper[7776]: E0219 03:12:45.168283 7776 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 19 03:12:45.168654 master-0 kubenswrapper[7776]: E0219 03:12:45.168380 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:12:49.168354415 +0000 UTC m=+475.508038973 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : secret "prometheus-operator-tls" not found Feb 19 03:12:45.287884 master-0 kubenswrapper[7776]: I0219 03:12:45.287743 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/3.log" Feb 19 03:12:45.371465 master-0 kubenswrapper[7776]: I0219 03:12:45.371374 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:45.371722 master-0 kubenswrapper[7776]: E0219 03:12:45.371567 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:12:45.371722 master-0 kubenswrapper[7776]: E0219 03:12:45.371672 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:12:46.371644768 +0000 UTC m=+472.711329316 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : secret "machine-approver-tls" not found Feb 19 03:12:45.764092 master-0 kubenswrapper[7776]: I0219 03:12:45.764037 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:45.764092 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:45.764092 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:45.764092 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:45.764834 master-0 kubenswrapper[7776]: I0219 03:12:45.764103 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:45.843998 master-0 kubenswrapper[7776]: I0219 03:12:45.843929 7776 scope.go:117] "RemoveContainer" containerID="5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11" Feb 19 03:12:45.844364 master-0 kubenswrapper[7776]: E0219 03:12:45.844314 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy 
pod=cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_openshift-cloud-controller-manager-operator(72a6892f-5a69-434b-9dea-11ad5de62a40)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" Feb 19 03:12:45.854184 master-0 kubenswrapper[7776]: I0219 03:12:45.854125 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afee48d5-7b45-42ef-acc8-e591ec479974" path="/var/lib/kubelet/pods/afee48d5-7b45-42ef-acc8-e591ec479974/volumes" Feb 19 03:12:46.385121 master-0 kubenswrapper[7776]: I0219 03:12:46.385057 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:46.385444 master-0 kubenswrapper[7776]: E0219 03:12:46.385383 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:12:46.385572 master-0 kubenswrapper[7776]: E0219 03:12:46.385536 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:12:48.385498374 +0000 UTC m=+474.725182962 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : secret "machine-approver-tls" not found Feb 19 03:12:46.569375 master-0 kubenswrapper[7776]: I0219 03:12:46.567684 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/3.log" Feb 19 03:12:46.596078 master-0 kubenswrapper[7776]: I0219 03:12:46.596018 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/cluster-policy-controller/0.log" Feb 19 03:12:46.608250 master-0 kubenswrapper[7776]: I0219 03:12:46.608196 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/4.log" Feb 19 03:12:46.617615 master-0 kubenswrapper[7776]: I0219 03:12:46.617559 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/cluster-policy-controller/1.log" Feb 19 03:12:46.632690 master-0 kubenswrapper[7776]: I0219 03:12:46.632582 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/0.log" Feb 19 03:12:46.644796 master-0 kubenswrapper[7776]: I0219 03:12:46.644724 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/1.log" Feb 19 03:12:46.651977 master-0 kubenswrapper[7776]: I0219 03:12:46.651932 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_66b05aeb-22a8-4008-a582-072f63cc46bf/installer/0.log" Feb 19 03:12:46.675801 master-0 kubenswrapper[7776]: I0219 03:12:46.675749 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/2.log" Feb 19 03:12:46.768183 master-0 kubenswrapper[7776]: I0219 03:12:46.768135 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:46.768183 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:46.768183 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:46.768183 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:46.768823 master-0 kubenswrapper[7776]: I0219 03:12:46.768211 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:46.876351 master-0 kubenswrapper[7776]: I0219 03:12:46.876307 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/3.log" Feb 19 03:12:47.276235 master-0 kubenswrapper[7776]: I0219 03:12:47.276144 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/manager/0.log" Feb 19 03:12:47.480048 master-0 kubenswrapper[7776]: I0219 03:12:47.479996 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/kube-rbac-proxy/0.log" Feb 19 03:12:47.675421 master-0 kubenswrapper[7776]: I0219 03:12:47.675317 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/manager/1.log" Feb 19 03:12:47.761477 master-0 kubenswrapper[7776]: I0219 03:12:47.761404 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:12:47.763205 master-0 kubenswrapper[7776]: I0219 03:12:47.763149 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:47.763205 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:47.763205 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:47.763205 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:47.763351 master-0 kubenswrapper[7776]: I0219 03:12:47.763246 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:47.877081 master-0 kubenswrapper[7776]: I0219 03:12:47.877022 
7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-s559q_8f7d8fc8-c313-416f-b62b-b54db9944066/manager/0.log" Feb 19 03:12:48.175860 master-0 kubenswrapper[7776]: I0219 03:12:48.175791 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p"] Feb 19 03:12:48.176443 master-0 kubenswrapper[7776]: I0219 03:12:48.176020 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="cluster-cloud-controller-manager" containerID="cri-o://12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b" gracePeriod=30 Feb 19 03:12:48.176443 master-0 kubenswrapper[7776]: I0219 03:12:48.176352 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="config-sync-controllers" containerID="cri-o://5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037" gracePeriod=30 Feb 19 03:12:48.279732 master-0 kubenswrapper[7776]: I0219 03:12:48.279633 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-s559q_8f7d8fc8-c313-416f-b62b-b54db9944066/manager/1.log" Feb 19 03:12:48.342121 master-0 kubenswrapper[7776]: I0219 03:12:48.342069 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/3.log" Feb 19 03:12:48.343003 master-0 kubenswrapper[7776]: I0219 03:12:48.342960 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:12:48.417337 master-0 kubenswrapper[7776]: I0219 03:12:48.417238 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/72a6892f-5a69-434b-9dea-11ad5de62a40-host-etc-kube\") pod \"72a6892f-5a69-434b-9dea-11ad5de62a40\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " Feb 19 03:12:48.417580 master-0 kubenswrapper[7776]: I0219 03:12:48.417353 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a6892f-5a69-434b-9dea-11ad5de62a40-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "72a6892f-5a69-434b-9dea-11ad5de62a40" (UID: "72a6892f-5a69-434b-9dea-11ad5de62a40"). InnerVolumeSpecName "host-etc-kube". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:12:48.417580 master-0 kubenswrapper[7776]: I0219 03:12:48.417374 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-images\") pod \"72a6892f-5a69-434b-9dea-11ad5de62a40\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " Feb 19 03:12:48.417580 master-0 kubenswrapper[7776]: I0219 03:12:48.417468 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-auth-proxy-config\") pod \"72a6892f-5a69-434b-9dea-11ad5de62a40\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " Feb 19 03:12:48.417580 master-0 kubenswrapper[7776]: I0219 03:12:48.417503 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7knmd\" (UniqueName: \"kubernetes.io/projected/72a6892f-5a69-434b-9dea-11ad5de62a40-kube-api-access-7knmd\") pod \"72a6892f-5a69-434b-9dea-11ad5de62a40\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " Feb 19 03:12:48.417580 master-0 kubenswrapper[7776]: I0219 03:12:48.417555 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/72a6892f-5a69-434b-9dea-11ad5de62a40-cloud-controller-manager-operator-tls\") pod \"72a6892f-5a69-434b-9dea-11ad5de62a40\" (UID: \"72a6892f-5a69-434b-9dea-11ad5de62a40\") " Feb 19 03:12:48.417790 master-0 kubenswrapper[7776]: I0219 03:12:48.417711 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:48.417912 master-0 kubenswrapper[7776]: I0219 03:12:48.417885 7776 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/72a6892f-5a69-434b-9dea-11ad5de62a40-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:48.417912 master-0 kubenswrapper[7776]: I0219 03:12:48.417894 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-images" (OuterVolumeSpecName: "images") pod "72a6892f-5a69-434b-9dea-11ad5de62a40" (UID: "72a6892f-5a69-434b-9dea-11ad5de62a40"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:12:48.418013 master-0 kubenswrapper[7776]: E0219 03:12:48.417993 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:12:48.418063 master-0 kubenswrapper[7776]: E0219 03:12:48.418051 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:12:52.418034605 +0000 UTC m=+478.757719123 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : secret "machine-approver-tls" not found Feb 19 03:12:48.418493 master-0 kubenswrapper[7776]: I0219 03:12:48.418437 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "72a6892f-5a69-434b-9dea-11ad5de62a40" (UID: "72a6892f-5a69-434b-9dea-11ad5de62a40"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:12:48.421992 master-0 kubenswrapper[7776]: I0219 03:12:48.421942 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a6892f-5a69-434b-9dea-11ad5de62a40-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "72a6892f-5a69-434b-9dea-11ad5de62a40" (UID: "72a6892f-5a69-434b-9dea-11ad5de62a40"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:12:48.422327 master-0 kubenswrapper[7776]: I0219 03:12:48.422276 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a6892f-5a69-434b-9dea-11ad5de62a40-kube-api-access-7knmd" (OuterVolumeSpecName: "kube-api-access-7knmd") pod "72a6892f-5a69-434b-9dea-11ad5de62a40" (UID: "72a6892f-5a69-434b-9dea-11ad5de62a40"). InnerVolumeSpecName "kube-api-access-7knmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:12:48.479062 master-0 kubenswrapper[7776]: I0219 03:12:48.479016 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-s559q_8f7d8fc8-c313-416f-b62b-b54db9944066/kube-rbac-proxy/0.log" Feb 19 03:12:48.519816 master-0 kubenswrapper[7776]: I0219 03:12:48.519702 7776 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:48.519816 master-0 kubenswrapper[7776]: I0219 03:12:48.519790 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7knmd\" (UniqueName: \"kubernetes.io/projected/72a6892f-5a69-434b-9dea-11ad5de62a40-kube-api-access-7knmd\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:48.519816 master-0 kubenswrapper[7776]: I0219 03:12:48.519811 7776 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/72a6892f-5a69-434b-9dea-11ad5de62a40-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:48.520086 master-0 kubenswrapper[7776]: I0219 03:12:48.519832 7776 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/72a6892f-5a69-434b-9dea-11ad5de62a40-images\") on node \"master-0\" DevicePath \"\"" Feb 19 03:12:48.763164 master-0 kubenswrapper[7776]: I0219 03:12:48.763052 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 
03:12:48.763164 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:48.763164 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:48.763164 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:48.763164 master-0 kubenswrapper[7776]: I0219 03:12:48.763108 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:48.832319 master-0 kubenswrapper[7776]: I0219 03:12:48.832281 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_72a6892f-5a69-434b-9dea-11ad5de62a40/kube-rbac-proxy/3.log" Feb 19 03:12:48.832979 master-0 kubenswrapper[7776]: I0219 03:12:48.832952 7776 generic.go:334] "Generic (PLEG): container finished" podID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerID="5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037" exitCode=0 Feb 19 03:12:48.832979 master-0 kubenswrapper[7776]: I0219 03:12:48.832973 7776 generic.go:334] "Generic (PLEG): container finished" podID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerID="12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b" exitCode=0 Feb 19 03:12:48.833084 master-0 kubenswrapper[7776]: I0219 03:12:48.832991 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" event={"ID":"72a6892f-5a69-434b-9dea-11ad5de62a40","Type":"ContainerDied","Data":"5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037"} Feb 19 03:12:48.833084 master-0 kubenswrapper[7776]: I0219 03:12:48.833016 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" event={"ID":"72a6892f-5a69-434b-9dea-11ad5de62a40","Type":"ContainerDied","Data":"12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b"} Feb 19 03:12:48.833084 master-0 kubenswrapper[7776]: I0219 03:12:48.833026 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" event={"ID":"72a6892f-5a69-434b-9dea-11ad5de62a40","Type":"ContainerDied","Data":"bb034bf4a9cdadabbefc696317954b87b73697b914e5e75bb4ca97aab23c5ac6"} Feb 19 03:12:48.833084 master-0 kubenswrapper[7776]: I0219 03:12:48.833045 7776 scope.go:117] "RemoveContainer" containerID="5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11" Feb 19 03:12:48.833316 master-0 kubenswrapper[7776]: I0219 03:12:48.833190 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p" Feb 19 03:12:48.857012 master-0 kubenswrapper[7776]: I0219 03:12:48.856954 7776 scope.go:117] "RemoveContainer" containerID="5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037" Feb 19 03:12:48.887575 master-0 kubenswrapper[7776]: I0219 03:12:48.887536 7776 scope.go:117] "RemoveContainer" containerID="12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b" Feb 19 03:12:48.896421 master-0 kubenswrapper[7776]: I0219 03:12:48.896128 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p"] Feb 19 03:12:48.907533 master-0 kubenswrapper[7776]: I0219 03:12:48.907483 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p"] Feb 19 03:12:48.923491 master-0 kubenswrapper[7776]: I0219 03:12:48.923448 7776 scope.go:117] "RemoveContainer" containerID="5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11" Feb 19 03:12:48.924112 master-0 kubenswrapper[7776]: E0219 03:12:48.924075 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11\": container with ID starting with 5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11 not found: ID does not exist" containerID="5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11" Feb 19 03:12:48.924175 master-0 kubenswrapper[7776]: I0219 03:12:48.924123 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11"} err="failed to get container status \"5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11\": rpc error: code = NotFound desc = could not find container \"5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11\": container with ID starting with 5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11 not found: ID does not exist" Feb 19 03:12:48.924175 master-0 kubenswrapper[7776]: I0219 03:12:48.924153 7776 scope.go:117] "RemoveContainer" containerID="5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037" Feb 19 03:12:48.924765 master-0 kubenswrapper[7776]: E0219 03:12:48.924725 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037\": container with ID starting with 5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037 not found: ID does not exist" containerID="5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037" Feb 19 03:12:48.924834 master-0 kubenswrapper[7776]: I0219 03:12:48.924756 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037"} err="failed to get container status \"5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037\": rpc error: code = NotFound desc = could not find container \"5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037\": container with ID starting with 5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037 not found: ID does not exist" Feb 19 
03:12:48.924834 master-0 kubenswrapper[7776]: I0219 03:12:48.924781 7776 scope.go:117] "RemoveContainer" containerID="12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b" Feb 19 03:12:48.925190 master-0 kubenswrapper[7776]: E0219 03:12:48.925151 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b\": container with ID starting with 12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b not found: ID does not exist" containerID="12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b" Feb 19 03:12:48.925241 master-0 kubenswrapper[7776]: I0219 03:12:48.925186 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b"} err="failed to get container status \"12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b\": rpc error: code = NotFound desc = could not find container \"12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b\": container with ID starting with 12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b not found: ID does not exist" Feb 19 03:12:48.925241 master-0 kubenswrapper[7776]: I0219 03:12:48.925205 7776 scope.go:117] "RemoveContainer" containerID="5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11" Feb 19 03:12:48.925723 master-0 kubenswrapper[7776]: I0219 03:12:48.925666 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11"} err="failed to get container status \"5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11\": rpc error: code = NotFound desc = could not find container \"5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11\": container with ID starting with 5529b98414d662a9cc0972b06b3df8cc65469cce83c638edceb32ef3e87b9f11 not found: ID does not exist" Feb 19 03:12:48.925723 master-0 kubenswrapper[7776]: I0219 03:12:48.925713 7776 scope.go:117] "RemoveContainer" containerID="5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037" Feb 19 03:12:48.926108 master-0 kubenswrapper[7776]: I0219 03:12:48.926059 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037"} err="failed to get container status \"5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037\": rpc error: code = NotFound desc = could not find container \"5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037\": container with ID starting with 5bc30e4426b0c71172cfaab844e3727cb2bafe4f910a7a61776d779d0b2f4037 not found: ID does not exist" Feb 19 03:12:48.926108 master-0 kubenswrapper[7776]: I0219 03:12:48.926098 7776 scope.go:117] "RemoveContainer" containerID="12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b" Feb 19 03:12:48.926394 master-0 kubenswrapper[7776]: I0219 03:12:48.926355 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b"} err="failed to get container status \"12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b\": rpc error: code = NotFound desc = could not find container \"12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b\": container with ID starting 
with 12eaab8a80e4cd9c13a9fdcf198ebb3908f3df9563070aeb359a2036603cae0b not found: ID does not exist" Feb 19 03:12:48.959692 master-0 kubenswrapper[7776]: I0219 03:12:48.959628 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t"] Feb 19 03:12:48.959913 master-0 kubenswrapper[7776]: E0219 03:12:48.959903 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="kube-rbac-proxy" Feb 19 03:12:48.959996 master-0 kubenswrapper[7776]: I0219 03:12:48.959916 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="kube-rbac-proxy" Feb 19 03:12:48.959996 master-0 kubenswrapper[7776]: E0219 03:12:48.959927 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="cluster-cloud-controller-manager" Feb 19 03:12:48.959996 master-0 kubenswrapper[7776]: I0219 03:12:48.959935 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="cluster-cloud-controller-manager" Feb 19 03:12:48.959996 master-0 kubenswrapper[7776]: E0219 03:12:48.959973 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="kube-rbac-proxy" Feb 19 03:12:48.959996 master-0 kubenswrapper[7776]: I0219 03:12:48.959980 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="kube-rbac-proxy" Feb 19 03:12:48.959996 master-0 kubenswrapper[7776]: E0219 03:12:48.959992 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="kube-rbac-proxy" Feb 19 03:12:48.959996 master-0 kubenswrapper[7776]: I0219 03:12:48.959999 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="kube-rbac-proxy" Feb 19 03:12:48.960303 master-0 kubenswrapper[7776]: E0219 03:12:48.960011 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="kube-rbac-proxy" Feb 19 03:12:48.960303 master-0 kubenswrapper[7776]: I0219 03:12:48.960018 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="kube-rbac-proxy" Feb 19 03:12:48.960303 master-0 kubenswrapper[7776]: E0219 03:12:48.960030 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="config-sync-controllers" Feb 19 03:12:48.960303 master-0 kubenswrapper[7776]: I0219 03:12:48.960039 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="config-sync-controllers" Feb 19 03:12:48.960303 master-0 kubenswrapper[7776]: I0219 03:12:48.960150 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="cluster-cloud-controller-manager" Feb 19 03:12:48.960303 master-0 kubenswrapper[7776]: I0219 03:12:48.960165 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="config-sync-controllers" Feb 19 03:12:48.960303 master-0 kubenswrapper[7776]: I0219 03:12:48.960180 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" 
containerName="kube-rbac-proxy" Feb 19 03:12:48.960303 master-0 kubenswrapper[7776]: I0219 03:12:48.960192 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="kube-rbac-proxy" Feb 19 03:12:48.960303 master-0 kubenswrapper[7776]: I0219 03:12:48.960200 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="kube-rbac-proxy" Feb 19 03:12:48.960303 master-0 kubenswrapper[7776]: I0219 03:12:48.960211 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" containerName="kube-rbac-proxy" Feb 19 03:12:48.961058 master-0 kubenswrapper[7776]: I0219 03:12:48.961034 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:48.963009 master-0 kubenswrapper[7776]: I0219 03:12:48.962963 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 19 03:12:48.965070 master-0 kubenswrapper[7776]: I0219 03:12:48.965032 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-x7jvh" Feb 19 03:12:48.965599 master-0 kubenswrapper[7776]: I0219 03:12:48.965559 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 19 03:12:48.965674 master-0 kubenswrapper[7776]: I0219 03:12:48.965564 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 03:12:48.967361 master-0 kubenswrapper[7776]: I0219 03:12:48.967335 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 19 03:12:48.968833 master-0 kubenswrapper[7776]: I0219 03:12:48.968800 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:12:49.026186 master-0 kubenswrapper[7776]: I0219 03:12:49.026053 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.026186 master-0 kubenswrapper[7776]: I0219 03:12:49.026178 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/af2be4f9-f632-4a72-8f39-c96954403edc-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.026472 master-0 kubenswrapper[7776]: I0219 03:12:49.026237 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.026472 master-0 kubenswrapper[7776]: I0219 03:12:49.026322 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/af2be4f9-f632-4a72-8f39-c96954403edc-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.026472 master-0 kubenswrapper[7776]: I0219 03:12:49.026459 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhhg6\" (UniqueName: \"kubernetes.io/projected/af2be4f9-f632-4a72-8f39-c96954403edc-kube-api-access-rhhg6\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.082638 master-0 kubenswrapper[7776]: I0219 03:12:49.082566 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-5bd7768f54-f8dfs_1f9e07d3-d157-4948-84a6-04b8aa7eef4c/cluster-olm-operator/0.log" Feb 19 03:12:49.127429 master-0 kubenswrapper[7776]: I0219 03:12:49.127357 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/af2be4f9-f632-4a72-8f39-c96954403edc-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.127811 master-0 kubenswrapper[7776]: I0219 03:12:49.127753 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhhg6\" (UniqueName: \"kubernetes.io/projected/af2be4f9-f632-4a72-8f39-c96954403edc-kube-api-access-rhhg6\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.127874 master-0 kubenswrapper[7776]: I0219 03:12:49.127831 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.127923 master-0 kubenswrapper[7776]: I0219 03:12:49.127902 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/af2be4f9-f632-4a72-8f39-c96954403edc-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: 
\"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.127969 master-0 kubenswrapper[7776]: I0219 03:12:49.127956 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.128201 master-0 kubenswrapper[7776]: I0219 03:12:49.128155 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/af2be4f9-f632-4a72-8f39-c96954403edc-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.129232 master-0 kubenswrapper[7776]: I0219 03:12:49.129188 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.130034 master-0 kubenswrapper[7776]: I0219 03:12:49.130008 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.133332 master-0 kubenswrapper[7776]: I0219 03:12:49.131979 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/af2be4f9-f632-4a72-8f39-c96954403edc-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.145219 master-0 kubenswrapper[7776]: I0219 03:12:49.145174 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhhg6\" (UniqueName: \"kubernetes.io/projected/af2be4f9-f632-4a72-8f39-c96954403edc-kube-api-access-rhhg6\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.229332 master-0 kubenswrapper[7776]: I0219 03:12:49.229235 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" 
Feb 19 03:12:49.229550 master-0 kubenswrapper[7776]: E0219 03:12:49.229413 7776 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 19 03:12:49.229550 master-0 kubenswrapper[7776]: E0219 03:12:49.229504 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. No retries permitted until 2026-02-19 03:12:57.229479791 +0000 UTC m=+483.569164329 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : secret "prometheus-operator-tls" not found Feb 19 03:12:49.274991 master-0 kubenswrapper[7776]: I0219 03:12:49.274924 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-5bd7768f54-f8dfs_1f9e07d3-d157-4948-84a6-04b8aa7eef4c/copy-catalogd-manifests/0.log" Feb 19 03:12:49.278289 master-0 kubenswrapper[7776]: I0219 03:12:49.278195 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:12:49.292772 master-0 kubenswrapper[7776]: W0219 03:12:49.292695 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf2be4f9_f632_4a72_8f39_c96954403edc.slice/crio-ed8577f4b5f593fdd1508aeb09fd5534fb09a47c902e95af8327061b1713177b WatchSource:0}: Error finding container ed8577f4b5f593fdd1508aeb09fd5534fb09a47c902e95af8327061b1713177b: Status 404 returned error can't find the container with id ed8577f4b5f593fdd1508aeb09fd5534fb09a47c902e95af8327061b1713177b Feb 19 03:12:49.474418 master-0 kubenswrapper[7776]: I0219 03:12:49.474365 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-5bd7768f54-f8dfs_1f9e07d3-d157-4948-84a6-04b8aa7eef4c/copy-operator-controller-manifests/0.log" Feb 19 03:12:49.675575 master-0 kubenswrapper[7776]: I0219 03:12:49.675506 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-5bd7768f54-f8dfs_1f9e07d3-d157-4948-84a6-04b8aa7eef4c/cluster-olm-operator/1.log" Feb 19 03:12:49.763340 master-0 kubenswrapper[7776]: I0219 03:12:49.763275 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:49.763340 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:49.763340 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:49.763340 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:49.763340 master-0 kubenswrapper[7776]: I0219 03:12:49.763338 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:49.871362 master-0 kubenswrapper[7776]: I0219 03:12:49.871318 7776 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="72a6892f-5a69-434b-9dea-11ad5de62a40" path="/var/lib/kubelet/pods/72a6892f-5a69-434b-9dea-11ad5de62a40/volumes" Feb 19 03:12:49.872048 master-0 kubenswrapper[7776]: I0219 03:12:49.872020 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerStarted","Data":"c9a8948e6182f0cdb976b661c449d741ee645d844809a7695d74084a213ff139"} Feb 19 03:12:49.872095 master-0 kubenswrapper[7776]: I0219 03:12:49.872051 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerStarted","Data":"e91ffe706d1ad6df0dfe02b5098676d02a6c7e690163f70c0b4d651c88fb78ce"} Feb 19 03:12:49.872095 master-0 kubenswrapper[7776]: I0219 03:12:49.872067 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerStarted","Data":"ed8577f4b5f593fdd1508aeb09fd5534fb09a47c902e95af8327061b1713177b"} Feb 19 03:12:49.879243 master-0 kubenswrapper[7776]: I0219 03:12:49.877774 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-mcz8l_fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/openshift-apiserver-operator/1.log" Feb 19 03:12:50.076544 master-0 kubenswrapper[7776]: I0219 03:12:50.076447 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-mcz8l_fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/openshift-apiserver-operator/2.log" Feb 19 03:12:50.275118 master-0 kubenswrapper[7776]: I0219 03:12:50.275015 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-957b9456f-f5s8c_c569676a-51dd-418c-87a5-719c18fe4c95/fix-audit-permissions/0.log" Feb 19 03:12:50.479192 master-0 kubenswrapper[7776]: I0219 03:12:50.479142 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-957b9456f-f5s8c_c569676a-51dd-418c-87a5-719c18fe4c95/openshift-apiserver/0.log" Feb 19 03:12:50.677107 master-0 kubenswrapper[7776]: I0219 03:12:50.677066 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-957b9456f-f5s8c_c569676a-51dd-418c-87a5-719c18fe4c95/openshift-apiserver-check-endpoints/0.log" Feb 19 03:12:50.764106 master-0 kubenswrapper[7776]: I0219 03:12:50.764024 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:50.764106 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:50.764106 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:50.764106 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:50.764417 master-0 kubenswrapper[7776]: I0219 03:12:50.764117 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 
03:12:50.876981 master-0 kubenswrapper[7776]: I0219 03:12:50.876929 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/3.log" Feb 19 03:12:50.879479 master-0 kubenswrapper[7776]: I0219 03:12:50.879442 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/0.log" Feb 19 03:12:50.880178 master-0 kubenswrapper[7776]: I0219 03:12:50.880130 7776 generic.go:334] "Generic (PLEG): container finished" podID="af2be4f9-f632-4a72-8f39-c96954403edc" containerID="b878204c53827b67b29a4b21cc54e60caeaac13641a9fee6708dd00ed3cf8205" exitCode=1 Feb 19 03:12:50.880178 master-0 kubenswrapper[7776]: I0219 03:12:50.880171 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerDied","Data":"b878204c53827b67b29a4b21cc54e60caeaac13641a9fee6708dd00ed3cf8205"} Feb 19 03:12:50.880736 master-0 kubenswrapper[7776]: I0219 03:12:50.880679 7776 scope.go:117] "RemoveContainer" containerID="b878204c53827b67b29a4b21cc54e60caeaac13641a9fee6708dd00ed3cf8205" Feb 19 03:12:51.075917 master-0 kubenswrapper[7776]: I0219 03:12:51.075879 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/4.log" Feb 19 03:12:51.276004 master-0 kubenswrapper[7776]: I0219 03:12:51.275872 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/3.log" Feb 19 03:12:51.477544 master-0 kubenswrapper[7776]: I0219 03:12:51.477481 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/4.log" Feb 19 03:12:51.679916 master-0 kubenswrapper[7776]: I0219 03:12:51.679860 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-7d4cccb57c-sfb9j_92b9ea7b-01b1-48f8-a392-12200f55502e/controller-manager/0.log" Feb 19 03:12:51.763155 master-0 kubenswrapper[7776]: I0219 03:12:51.763088 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:51.763155 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:51.763155 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:51.763155 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:51.763491 master-0 kubenswrapper[7776]: I0219 03:12:51.763174 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:51.880285 master-0 kubenswrapper[7776]: I0219 03:12:51.880216 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-84d87bdd5b-7p6kp_ac7a5635-30b4-4076-babb-db1abd26da88/route-controller-manager/0.log" Feb 19 03:12:51.887713 master-0 kubenswrapper[7776]: I0219 03:12:51.887682 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/1.log" Feb 19 03:12:51.888271 master-0 kubenswrapper[7776]: I0219 03:12:51.888231 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/0.log" Feb 19 03:12:51.888953 master-0 kubenswrapper[7776]: I0219 03:12:51.888917 7776 generic.go:334] "Generic (PLEG): container finished" podID="af2be4f9-f632-4a72-8f39-c96954403edc" containerID="66c8d2880eef13edec00b80aaa01d4ffe66ef624522efc9464afe289ad138866" exitCode=1 Feb 19 03:12:51.889017 master-0 kubenswrapper[7776]: I0219 03:12:51.888963 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerDied","Data":"66c8d2880eef13edec00b80aaa01d4ffe66ef624522efc9464afe289ad138866"} Feb 19 03:12:51.889017 master-0 kubenswrapper[7776]: I0219 03:12:51.889005 7776 scope.go:117] "RemoveContainer" containerID="b878204c53827b67b29a4b21cc54e60caeaac13641a9fee6708dd00ed3cf8205" Feb 19 03:12:51.889700 master-0 kubenswrapper[7776]: I0219 03:12:51.889668 7776 scope.go:117] "RemoveContainer" containerID="66c8d2880eef13edec00b80aaa01d4ffe66ef624522efc9464afe289ad138866" Feb 19 03:12:51.890363 master-0 kubenswrapper[7776]: E0219 03:12:51.889895 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:12:52.079592 master-0 kubenswrapper[7776]: I0219 03:12:52.079531 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-84d87bdd5b-7p6kp_ac7a5635-30b4-4076-babb-db1abd26da88/route-controller-manager/1.log" Feb 19 03:12:52.279593 master-0 kubenswrapper[7776]: I0219 03:12:52.279406 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-sbzsk_c50a2aec-7ed0-4114-8b25-19579fe931cb/catalog-operator/0.log" Feb 19 03:12:52.471693 master-0 kubenswrapper[7776]: I0219 03:12:52.471615 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:12:52.471979 master-0 kubenswrapper[7776]: E0219 03:12:52.471840 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: 
secret "machine-approver-tls" not found Feb 19 03:12:52.471979 master-0 kubenswrapper[7776]: E0219 03:12:52.471963 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:13:00.471930708 +0000 UTC m=+486.811615276 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : secret "machine-approver-tls" not found Feb 19 03:12:52.481617 master-0 kubenswrapper[7776]: I0219 03:12:52.481518 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5499d7f7bb-kk77t_b283bd8e-3339-4701-ae3c-f009e498b7d4/olm-operator/0.log" Feb 19 03:12:52.762866 master-0 kubenswrapper[7776]: I0219 03:12:52.762818 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:52.762866 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:52.762866 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:52.762866 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:52.763148 master-0 kubenswrapper[7776]: I0219 03:12:52.762881 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:52.883110 master-0 kubenswrapper[7776]: I0219 03:12:52.883028 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/0.log" Feb 19 03:12:52.900380 master-0 kubenswrapper[7776]: I0219 03:12:52.900244 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/1.log" Feb 19 03:12:52.902926 master-0 kubenswrapper[7776]: I0219 03:12:52.902859 7776 scope.go:117] "RemoveContainer" containerID="66c8d2880eef13edec00b80aaa01d4ffe66ef624522efc9464afe289ad138866" Feb 19 03:12:52.903317 master-0 kubenswrapper[7776]: E0219 03:12:52.903210 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:12:53.076952 master-0 kubenswrapper[7776]: I0219 03:12:53.076782 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/kube-rbac-proxy/0.log" Feb 19 03:12:53.276346 master-0 kubenswrapper[7776]: 
I0219 03:12:53.276273 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/1.log" Feb 19 03:12:53.483142 master-0 kubenswrapper[7776]: I0219 03:12:53.483087 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-7d77f88776-s4jxm_2576028c-40d8-4ef4-ba41-a5aff01f2ed3/packageserver/0.log" Feb 19 03:12:53.764701 master-0 kubenswrapper[7776]: I0219 03:12:53.764539 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:53.764701 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:53.764701 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:53.764701 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:53.764701 master-0 kubenswrapper[7776]: I0219 03:12:53.764635 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:54.763948 master-0 kubenswrapper[7776]: I0219 03:12:54.763853 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:54.763948 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:54.763948 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:54.763948 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:54.764697 master-0 kubenswrapper[7776]: I0219 03:12:54.763953 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:55.763518 master-0 kubenswrapper[7776]: I0219 03:12:55.763473 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:55.763518 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:55.763518 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:55.763518 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:55.763817 master-0 kubenswrapper[7776]: I0219 03:12:55.763538 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:56.765096 master-0 kubenswrapper[7776]: I0219 03:12:56.765009 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:56.765096 master-0 kubenswrapper[7776]: 
[-]has-synced failed: reason withheld Feb 19 03:12:56.765096 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:56.765096 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:56.765921 master-0 kubenswrapper[7776]: I0219 03:12:56.765113 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:57.241170 master-0 kubenswrapper[7776]: I0219 03:12:57.241071 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:12:57.241482 master-0 kubenswrapper[7776]: E0219 03:12:57.241275 7776 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 19 03:12:57.241482 master-0 kubenswrapper[7776]: E0219 03:12:57.241369 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. No retries permitted until 2026-02-19 03:13:13.24134542 +0000 UTC m=+499.581030038 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : secret "prometheus-operator-tls" not found Feb 19 03:12:57.764558 master-0 kubenswrapper[7776]: I0219 03:12:57.764439 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:57.764558 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:57.764558 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:57.764558 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:57.764970 master-0 kubenswrapper[7776]: I0219 03:12:57.764561 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:58.764064 master-0 kubenswrapper[7776]: I0219 03:12:58.763990 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:58.764064 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:58.764064 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:58.764064 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:58.764688 master-0 kubenswrapper[7776]: I0219 03:12:58.764068 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:12:59.764371 master-0 kubenswrapper[7776]: I0219 03:12:59.764240 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:12:59.764371 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:12:59.764371 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:12:59.764371 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:12:59.765136 master-0 kubenswrapper[7776]: I0219 03:12:59.764397 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:00.488297 master-0 kubenswrapper[7776]: I0219 03:13:00.488217 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:13:00.488541 master-0 kubenswrapper[7776]: E0219 03:13:00.488389 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:13:00.488541 master-0 kubenswrapper[7776]: E0219 03:13:00.488449 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:13:16.488432684 +0000 UTC m=+502.828117202 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : secret "machine-approver-tls" not found Feb 19 03:13:00.764359 master-0 kubenswrapper[7776]: I0219 03:13:00.764154 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:00.764359 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:00.764359 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:00.764359 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:00.764359 master-0 kubenswrapper[7776]: I0219 03:13:00.764297 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:01.764472 master-0 kubenswrapper[7776]: I0219 03:13:01.764404 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:01.764472 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:01.764472 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:01.764472 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:01.767494 master-0 kubenswrapper[7776]: I0219 03:13:01.764497 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:02.763481 master-0 kubenswrapper[7776]: I0219 03:13:02.763382 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:02.763481 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:02.763481 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:02.763481 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:02.763481 master-0 kubenswrapper[7776]: I0219 03:13:02.763477 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:03.763639 master-0 kubenswrapper[7776]: I0219 03:13:03.763549 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:03.763639 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:03.763639 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:03.763639 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:03.764691 
master-0 kubenswrapper[7776]: I0219 03:13:03.763659 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:03.846888 master-0 kubenswrapper[7776]: I0219 03:13:03.845867 7776 scope.go:117] "RemoveContainer" containerID="66c8d2880eef13edec00b80aaa01d4ffe66ef624522efc9464afe289ad138866" Feb 19 03:13:03.980761 master-0 kubenswrapper[7776]: I0219 03:13:03.980613 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/1.log" Feb 19 03:13:03.983392 master-0 kubenswrapper[7776]: I0219 03:13:03.983352 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/0.log" Feb 19 03:13:03.983486 master-0 kubenswrapper[7776]: I0219 03:13:03.983425 7776 generic.go:334] "Generic (PLEG): container finished" podID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" containerID="4a1578bce100ddf52237ceaea2572cac0b7ea648901d8dde9625de51a4236ef1" exitCode=1 Feb 19 03:13:03.983486 master-0 kubenswrapper[7776]: I0219 03:13:03.983461 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerDied","Data":"4a1578bce100ddf52237ceaea2572cac0b7ea648901d8dde9625de51a4236ef1"} Feb 19 03:13:03.983554 master-0 kubenswrapper[7776]: I0219 03:13:03.983504 7776 scope.go:117] "RemoveContainer" containerID="48896fb51d13a46ede8e9679a55d5198adfa5eeb4a91ae305507c9b4bf39a65b" Feb 19 03:13:03.984654 master-0 kubenswrapper[7776]: I0219 03:13:03.984613 7776 scope.go:117] "RemoveContainer" containerID="4a1578bce100ddf52237ceaea2572cac0b7ea648901d8dde9625de51a4236ef1" Feb 19 03:13:03.984910 master-0 kubenswrapper[7776]: E0219 03:13:03.984836 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:13:04.767284 master-0 kubenswrapper[7776]: I0219 03:13:04.765853 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:04.767284 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:04.767284 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:04.767284 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:04.767284 master-0 kubenswrapper[7776]: I0219 03:13:04.766134 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:04.991855 master-0 kubenswrapper[7776]: I0219 03:13:04.991782 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/2.log" Feb 19 03:13:04.992485 master-0 kubenswrapper[7776]: I0219 03:13:04.992439 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/1.log" Feb 19 03:13:04.993321 master-0 kubenswrapper[7776]: I0219 03:13:04.993210 7776 generic.go:334] "Generic (PLEG): container finished" podID="af2be4f9-f632-4a72-8f39-c96954403edc" containerID="e0e7fea08ef1f5f68cf832bc929177486fa0dce09c473818d711248fc091084b" exitCode=1 Feb 19 03:13:04.993482 master-0 kubenswrapper[7776]: I0219 03:13:04.993403 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerDied","Data":"e0e7fea08ef1f5f68cf832bc929177486fa0dce09c473818d711248fc091084b"} Feb 19 03:13:04.993591 master-0 kubenswrapper[7776]: I0219 03:13:04.993481 7776 scope.go:117] "RemoveContainer" containerID="66c8d2880eef13edec00b80aaa01d4ffe66ef624522efc9464afe289ad138866" Feb 19 03:13:04.994630 master-0 kubenswrapper[7776]: I0219 03:13:04.994570 7776 scope.go:117] "RemoveContainer" containerID="e0e7fea08ef1f5f68cf832bc929177486fa0dce09c473818d711248fc091084b" Feb 19 03:13:04.995043 master-0 kubenswrapper[7776]: E0219 03:13:04.994983 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:13:04.996689 master-0 kubenswrapper[7776]: I0219 03:13:04.996347 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/1.log" Feb 19 03:13:05.764210 master-0 kubenswrapper[7776]: I0219 03:13:05.764141 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:05.764210 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:05.764210 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:05.764210 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:05.764923 master-0 kubenswrapper[7776]: I0219 03:13:05.764866 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:06.005569 master-0 kubenswrapper[7776]: I0219 03:13:06.005477 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/2.log" Feb 
19 03:13:06.765121 master-0 kubenswrapper[7776]: I0219 03:13:06.765033 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:06.765121 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:06.765121 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:06.765121 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:06.765782 master-0 kubenswrapper[7776]: I0219 03:13:06.765155 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:07.764166 master-0 kubenswrapper[7776]: I0219 03:13:07.764011 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:07.764166 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:07.764166 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:07.764166 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:07.764166 master-0 kubenswrapper[7776]: I0219 03:13:07.764130 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:08.763005 master-0 kubenswrapper[7776]: I0219 03:13:08.762915 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:08.763005 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:08.763005 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:08.763005 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:08.763449 master-0 kubenswrapper[7776]: I0219 03:13:08.763013 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:09.764011 master-0 kubenswrapper[7776]: I0219 03:13:09.763907 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:09.764011 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:09.764011 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:09.764011 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:09.764011 master-0 kubenswrapper[7776]: I0219 03:13:09.763997 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 19 03:13:10.763610 master-0 kubenswrapper[7776]: I0219 03:13:10.763503 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:10.763610 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:10.763610 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:10.763610 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:10.763928 master-0 kubenswrapper[7776]: I0219 03:13:10.763625 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:11.764314 master-0 kubenswrapper[7776]: I0219 03:13:11.764179 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:11.764314 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:11.764314 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:11.764314 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:11.765356 master-0 kubenswrapper[7776]: I0219 03:13:11.764324 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:12.764213 master-0 kubenswrapper[7776]: I0219 03:13:12.764150 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:12.764213 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:12.764213 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:12.764213 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:12.765104 master-0 kubenswrapper[7776]: I0219 03:13:12.765068 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:13.283564 master-0 kubenswrapper[7776]: I0219 03:13:13.283506 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:13:13.283796 master-0 kubenswrapper[7776]: E0219 03:13:13.283737 7776 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 19 03:13:13.283871 master-0 kubenswrapper[7776]: E0219 03:13:13.283848 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls 
podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. No retries permitted until 2026-02-19 03:13:45.283826231 +0000 UTC m=+531.623510749 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : secret "prometheus-operator-tls" not found Feb 19 03:13:13.763503 master-0 kubenswrapper[7776]: I0219 03:13:13.763390 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:13.763503 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:13.763503 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:13.763503 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:13.763503 master-0 kubenswrapper[7776]: I0219 03:13:13.763465 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:14.764213 master-0 kubenswrapper[7776]: I0219 03:13:14.764126 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:14.764213 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:14.764213 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:14.764213 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:14.765516 master-0 kubenswrapper[7776]: I0219 03:13:14.764221 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:15.764330 master-0 kubenswrapper[7776]: I0219 03:13:15.764222 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:15.764330 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:15.764330 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:15.764330 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:15.765105 master-0 kubenswrapper[7776]: I0219 03:13:15.764376 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:16.564980 master-0 kubenswrapper[7776]: I0219 03:13:16.564871 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " 
pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:13:16.565338 master-0 kubenswrapper[7776]: E0219 03:13:16.565130 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:13:16.565338 master-0 kubenswrapper[7776]: E0219 03:13:16.565249 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:13:48.565226237 +0000 UTC m=+534.904910765 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : secret "machine-approver-tls" not found Feb 19 03:13:16.764624 master-0 kubenswrapper[7776]: I0219 03:13:16.764504 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:16.764624 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:16.764624 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:16.764624 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:16.764624 master-0 kubenswrapper[7776]: I0219 03:13:16.764603 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:17.764968 master-0 kubenswrapper[7776]: I0219 03:13:17.764815 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:17.764968 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:17.764968 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:17.764968 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:17.766053 master-0 kubenswrapper[7776]: I0219 03:13:17.764980 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:17.843463 master-0 kubenswrapper[7776]: I0219 03:13:17.843383 7776 scope.go:117] "RemoveContainer" containerID="4a1578bce100ddf52237ceaea2572cac0b7ea648901d8dde9625de51a4236ef1" Feb 19 03:13:18.089935 master-0 kubenswrapper[7776]: I0219 03:13:18.089818 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/1.log" Feb 19 03:13:18.090322 master-0 kubenswrapper[7776]: I0219 03:13:18.090287 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerStarted","Data":"0231cbf4aca758c9932d6803291cfbb4b285c17a3486513b446f06ffa1a001c4"} Feb 
19 03:13:18.764442 master-0 kubenswrapper[7776]: I0219 03:13:18.764390 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:18.764442 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:18.764442 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:18.764442 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:18.764704 master-0 kubenswrapper[7776]: I0219 03:13:18.764476 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:18.843032 master-0 kubenswrapper[7776]: I0219 03:13:18.842980 7776 scope.go:117] "RemoveContainer" containerID="e0e7fea08ef1f5f68cf832bc929177486fa0dce09c473818d711248fc091084b" Feb 19 03:13:18.843610 master-0 kubenswrapper[7776]: E0219 03:13:18.843181 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:13:19.767749 master-0 kubenswrapper[7776]: I0219 03:13:19.767678 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:19.767749 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:19.767749 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:19.767749 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:19.768082 master-0 kubenswrapper[7776]: I0219 03:13:19.767776 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:20.763418 master-0 kubenswrapper[7776]: I0219 03:13:20.763355 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:20.763418 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:20.763418 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:20.763418 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:20.763971 master-0 kubenswrapper[7776]: I0219 03:13:20.763445 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:21.763820 master-0 kubenswrapper[7776]: I0219 03:13:21.763734 7776 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:21.763820 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:21.763820 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:21.763820 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:21.764546 master-0 kubenswrapper[7776]: I0219 03:13:21.763815 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:22.764872 master-0 kubenswrapper[7776]: I0219 03:13:22.764805 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:22.764872 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:22.764872 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:22.764872 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:22.765604 master-0 kubenswrapper[7776]: I0219 03:13:22.764889 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:23.764415 master-0 kubenswrapper[7776]: I0219 03:13:23.764313 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:23.764415 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:23.764415 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:23.764415 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:23.764415 master-0 kubenswrapper[7776]: I0219 03:13:23.764410 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:24.763893 master-0 kubenswrapper[7776]: I0219 03:13:24.763783 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:24.763893 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:24.763893 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:24.763893 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:24.763893 master-0 kubenswrapper[7776]: I0219 03:13:24.763886 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:25.767863 master-0 kubenswrapper[7776]: I0219 03:13:25.767755 7776 patch_prober.go:28] 
interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:25.767863 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:25.767863 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:25.767863 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:25.768565 master-0 kubenswrapper[7776]: I0219 03:13:25.767877 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:26.764312 master-0 kubenswrapper[7776]: I0219 03:13:26.764218 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:26.764312 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:26.764312 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:26.764312 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:26.764634 master-0 kubenswrapper[7776]: I0219 03:13:26.764327 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:27.765554 master-0 kubenswrapper[7776]: I0219 03:13:27.765449 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:27.765554 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:27.765554 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:27.765554 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:27.766576 master-0 kubenswrapper[7776]: I0219 03:13:27.765571 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:28.763118 master-0 kubenswrapper[7776]: I0219 03:13:28.763035 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:28.763118 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:28.763118 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:28.763118 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:28.763118 master-0 kubenswrapper[7776]: I0219 03:13:28.763117 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:29.763554 master-0 kubenswrapper[7776]: I0219 03:13:29.763475 7776 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:29.763554 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:29.763554 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:29.763554 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:29.763554 master-0 kubenswrapper[7776]: I0219 03:13:29.763546 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:29.843440 master-0 kubenswrapper[7776]: I0219 03:13:29.843331 7776 scope.go:117] "RemoveContainer" containerID="e0e7fea08ef1f5f68cf832bc929177486fa0dce09c473818d711248fc091084b" Feb 19 03:13:30.176471 master-0 kubenswrapper[7776]: I0219 03:13:30.176381 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/2.log" Feb 19 03:13:30.177550 master-0 kubenswrapper[7776]: I0219 03:13:30.177475 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerStarted","Data":"dd7689baa5f861f7257ae1362b57579e948c67a0b070c3f9a54450993d72b02e"} Feb 19 03:13:30.293378 master-0 kubenswrapper[7776]: I0219 03:13:30.293116 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podStartSLOduration=42.293090359 podStartE2EDuration="42.293090359s" podCreationTimestamp="2026-02-19 03:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:13:30.287056091 +0000 UTC m=+516.626740619" watchObservedRunningTime="2026-02-19 03:13:30.293090359 +0000 UTC m=+516.632774907" Feb 19 03:13:30.763982 master-0 kubenswrapper[7776]: I0219 03:13:30.763915 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:30.763982 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:30.763982 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:30.763982 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:30.765058 master-0 kubenswrapper[7776]: I0219 03:13:30.764598 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:31.187297 master-0 kubenswrapper[7776]: I0219 03:13:31.187172 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/3.log" Feb 19 03:13:31.188055 
master-0 kubenswrapper[7776]: I0219 03:13:31.188000 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/2.log" Feb 19 03:13:31.189323 master-0 kubenswrapper[7776]: I0219 03:13:31.189279 7776 generic.go:334] "Generic (PLEG): container finished" podID="af2be4f9-f632-4a72-8f39-c96954403edc" containerID="dd7689baa5f861f7257ae1362b57579e948c67a0b070c3f9a54450993d72b02e" exitCode=1 Feb 19 03:13:31.189420 master-0 kubenswrapper[7776]: I0219 03:13:31.189339 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerDied","Data":"dd7689baa5f861f7257ae1362b57579e948c67a0b070c3f9a54450993d72b02e"} Feb 19 03:13:31.189509 master-0 kubenswrapper[7776]: I0219 03:13:31.189421 7776 scope.go:117] "RemoveContainer" containerID="e0e7fea08ef1f5f68cf832bc929177486fa0dce09c473818d711248fc091084b" Feb 19 03:13:31.190141 master-0 kubenswrapper[7776]: I0219 03:13:31.190097 7776 scope.go:117] "RemoveContainer" containerID="dd7689baa5f861f7257ae1362b57579e948c67a0b070c3f9a54450993d72b02e" Feb 19 03:13:31.190357 master-0 kubenswrapper[7776]: E0219 03:13:31.190317 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:13:31.764167 master-0 kubenswrapper[7776]: I0219 03:13:31.764084 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:31.764167 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:31.764167 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:31.764167 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:31.764167 master-0 kubenswrapper[7776]: I0219 03:13:31.764170 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:32.200589 master-0 kubenswrapper[7776]: I0219 03:13:32.200463 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/3.log" Feb 19 03:13:32.764223 master-0 kubenswrapper[7776]: I0219 03:13:32.764143 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:32.764223 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:32.764223 master-0 kubenswrapper[7776]: 
[+]process-running ok Feb 19 03:13:32.764223 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:32.764977 master-0 kubenswrapper[7776]: I0219 03:13:32.764227 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:33.763873 master-0 kubenswrapper[7776]: I0219 03:13:33.763768 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:33.763873 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:33.763873 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:33.763873 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:33.765206 master-0 kubenswrapper[7776]: I0219 03:13:33.763888 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:34.764919 master-0 kubenswrapper[7776]: I0219 03:13:34.764840 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:34.764919 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:34.764919 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:34.764919 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:34.765916 master-0 kubenswrapper[7776]: I0219 03:13:34.764940 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:35.764301 master-0 kubenswrapper[7776]: I0219 03:13:35.764206 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:35.764301 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:35.764301 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:35.764301 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:35.764795 master-0 kubenswrapper[7776]: I0219 03:13:35.764354 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:36.764291 master-0 kubenswrapper[7776]: I0219 03:13:36.764188 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:36.764291 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:36.764291 master-0 
kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:36.764291 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:36.765251 master-0 kubenswrapper[7776]: I0219 03:13:36.764325 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:37.764808 master-0 kubenswrapper[7776]: I0219 03:13:37.764625 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:37.764808 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:37.764808 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:37.764808 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:37.764808 master-0 kubenswrapper[7776]: I0219 03:13:37.764703 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:38.763423 master-0 kubenswrapper[7776]: I0219 03:13:38.763357 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:38.763423 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:38.763423 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:38.763423 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:38.763790 master-0 kubenswrapper[7776]: I0219 03:13:38.763424 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:39.764157 master-0 kubenswrapper[7776]: I0219 03:13:39.764099 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:39.764157 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:39.764157 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:39.764157 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:39.764854 master-0 kubenswrapper[7776]: I0219 03:13:39.764179 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:40.053764 master-0 kubenswrapper[7776]: E0219 03:13:40.053620 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" podUID="33bb562f-84e7-4fcb-b008-416c09a5ecf0" Feb 19 03:13:40.179631 master-0 kubenswrapper[7776]: E0219 03:13:40.179549 7776 
pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[samples-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" podUID="59cea4cb-6374-49b6-97b3-d8a19cc1860f" Feb 19 03:13:40.201076 master-0 kubenswrapper[7776]: E0219 03:13:40.200997 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cloud-credential-operator-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" podUID="858a717b-a44e-4b8d-9974-7451a89cf104" Feb 19 03:13:40.255943 master-0 kubenswrapper[7776]: I0219 03:13:40.255896 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:13:40.256535 master-0 kubenswrapper[7776]: I0219 03:13:40.255968 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:13:40.256697 master-0 kubenswrapper[7776]: I0219 03:13:40.255983 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:13:40.764056 master-0 kubenswrapper[7776]: I0219 03:13:40.763987 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:40.764056 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:40.764056 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:40.764056 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:40.765181 master-0 kubenswrapper[7776]: I0219 03:13:40.764083 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:41.765134 master-0 kubenswrapper[7776]: I0219 03:13:41.765033 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:41.765134 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:41.765134 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:41.765134 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:41.766211 master-0 kubenswrapper[7776]: I0219 03:13:41.765164 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:41.780378 master-0 kubenswrapper[7776]: E0219 03:13:41.780244 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-api-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" 
podUID="255784ad-b52a-4c5c-ad15-278865ee2ccb" Feb 19 03:13:42.268424 master-0 kubenswrapper[7776]: I0219 03:13:42.268356 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:13:42.764580 master-0 kubenswrapper[7776]: I0219 03:13:42.764458 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:42.764580 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:42.764580 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:42.764580 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:42.764580 master-0 kubenswrapper[7776]: I0219 03:13:42.764555 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:43.763604 master-0 kubenswrapper[7776]: I0219 03:13:43.763524 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:43.763604 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:43.763604 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:43.763604 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:43.764512 master-0 kubenswrapper[7776]: I0219 03:13:43.763605 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:44.764316 master-0 kubenswrapper[7776]: I0219 03:13:44.764219 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:44.764316 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:44.764316 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:44.764316 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:44.764316 master-0 kubenswrapper[7776]: I0219 03:13:44.764312 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:44.842894 master-0 kubenswrapper[7776]: I0219 03:13:44.842787 7776 scope.go:117] "RemoveContainer" containerID="dd7689baa5f861f7257ae1362b57579e948c67a0b070c3f9a54450993d72b02e" Feb 19 03:13:44.843293 master-0 kubenswrapper[7776]: E0219 03:13:44.843203 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:13:45.023981 master-0 kubenswrapper[7776]: I0219 03:13:45.023825 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:13:45.024169 master-0 kubenswrapper[7776]: E0219 03:13:45.024065 7776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 19 03:13:45.024169 master-0 kubenswrapper[7776]: E0219 03:13:45.024137 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert podName:33bb562f-84e7-4fcb-b008-416c09a5ecf0 nodeName:}" failed. No retries permitted until 2026-02-19 03:15:47.024116288 +0000 UTC m=+653.363800816 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert") pod "cluster-autoscaler-operator-86b8dc6d6-pd8lj" (UID: "33bb562f-84e7-4fcb-b008-416c09a5ecf0") : secret "cluster-autoscaler-operator-cert" not found Feb 19 03:13:45.124878 master-0 kubenswrapper[7776]: I0219 03:13:45.124807 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:13:45.125113 master-0 kubenswrapper[7776]: I0219 03:13:45.124944 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:13:45.125113 master-0 kubenswrapper[7776]: E0219 03:13:45.124976 7776 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 19 03:13:45.125113 master-0 kubenswrapper[7776]: E0219 03:13:45.125057 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls podName:59cea4cb-6374-49b6-97b3-d8a19cc1860f nodeName:}" failed. No retries permitted until 2026-02-19 03:15:47.125038894 +0000 UTC m=+653.464723412 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-hl874" (UID: "59cea4cb-6374-49b6-97b3-d8a19cc1860f") : secret "samples-operator-tls" not found Feb 19 03:13:45.125494 master-0 kubenswrapper[7776]: E0219 03:13:45.125433 7776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 19 03:13:45.125772 master-0 kubenswrapper[7776]: E0219 03:13:45.125733 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:15:47.125659562 +0000 UTC m=+653.465344120 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : secret "cloud-credential-operator-serving-cert" not found Feb 19 03:13:45.328519 master-0 kubenswrapper[7776]: I0219 03:13:45.327943 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:13:45.329060 master-0 kubenswrapper[7776]: E0219 03:13:45.328159 7776 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 19 03:13:45.329060 master-0 kubenswrapper[7776]: E0219 03:13:45.328624 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. No retries permitted until 2026-02-19 03:14:49.328602636 +0000 UTC m=+595.668287154 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : secret "prometheus-operator-tls" not found Feb 19 03:13:45.764227 master-0 kubenswrapper[7776]: I0219 03:13:45.764105 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:45.764227 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:45.764227 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:45.764227 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:45.764227 master-0 kubenswrapper[7776]: I0219 03:13:45.764202 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:46.753547 master-0 kubenswrapper[7776]: I0219 03:13:46.753459 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:13:46.753851 master-0 kubenswrapper[7776]: E0219 03:13:46.753708 7776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 19 03:13:46.753851 master-0 kubenswrapper[7776]: E0219 03:13:46.753811 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:15:48.753786029 +0000 UTC m=+655.093470557 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : secret "machine-api-operator-tls" not found Feb 19 03:13:46.764456 master-0 kubenswrapper[7776]: I0219 03:13:46.764355 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:46.764456 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:46.764456 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:46.764456 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:46.764456 master-0 kubenswrapper[7776]: I0219 03:13:46.764410 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:47.763691 master-0 kubenswrapper[7776]: I0219 03:13:47.763600 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:47.763691 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:47.763691 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:47.763691 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:47.764039 master-0 kubenswrapper[7776]: I0219 03:13:47.763718 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:48.584821 master-0 kubenswrapper[7776]: I0219 03:13:48.584725 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:13:48.585869 master-0 kubenswrapper[7776]: E0219 03:13:48.585066 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:13:48.585869 master-0 kubenswrapper[7776]: E0219 03:13:48.585197 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:14:52.585162571 +0000 UTC m=+598.924847129 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : secret "machine-approver-tls" not found Feb 19 03:13:48.764272 master-0 kubenswrapper[7776]: I0219 03:13:48.764190 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:48.764272 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:48.764272 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:48.764272 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:48.764272 master-0 kubenswrapper[7776]: I0219 03:13:48.764269 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:49.764211 master-0 kubenswrapper[7776]: I0219 03:13:49.764149 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:49.764211 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:49.764211 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:49.764211 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:49.764798 master-0 kubenswrapper[7776]: I0219 03:13:49.764219 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:50.763913 master-0 kubenswrapper[7776]: I0219 03:13:50.763848 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:50.763913 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:50.763913 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:50.763913 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:50.764571 master-0 kubenswrapper[7776]: I0219 03:13:50.763918 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:51.764227 master-0 kubenswrapper[7776]: I0219 03:13:51.764133 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:51.764227 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:51.764227 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:51.764227 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:51.764227 
master-0 kubenswrapper[7776]: I0219 03:13:51.764227 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:52.764621 master-0 kubenswrapper[7776]: I0219 03:13:52.764550 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:52.764621 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:52.764621 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:52.764621 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:52.765662 master-0 kubenswrapper[7776]: I0219 03:13:52.765463 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:53.763832 master-0 kubenswrapper[7776]: I0219 03:13:53.763766 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:53.763832 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:53.763832 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:53.763832 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:53.763832 master-0 kubenswrapper[7776]: I0219 03:13:53.763831 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:54.764112 master-0 kubenswrapper[7776]: I0219 03:13:54.764050 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:54.764112 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:54.764112 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:54.764112 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:54.764696 master-0 kubenswrapper[7776]: I0219 03:13:54.764114 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:55.769931 master-0 kubenswrapper[7776]: I0219 03:13:55.769858 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:55.769931 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:55.769931 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:55.769931 master-0 kubenswrapper[7776]: healthz check failed Feb 19 
03:13:55.769931 master-0 kubenswrapper[7776]: I0219 03:13:55.769919 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:55.843226 master-0 kubenswrapper[7776]: I0219 03:13:55.843112 7776 scope.go:117] "RemoveContainer" containerID="dd7689baa5f861f7257ae1362b57579e948c67a0b070c3f9a54450993d72b02e" Feb 19 03:13:55.843725 master-0 kubenswrapper[7776]: E0219 03:13:55.843562 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:13:56.765314 master-0 kubenswrapper[7776]: I0219 03:13:56.765198 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:56.765314 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:56.765314 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:56.765314 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:56.765961 master-0 kubenswrapper[7776]: I0219 03:13:56.765328 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:57.764490 master-0 kubenswrapper[7776]: I0219 03:13:57.764418 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:57.764490 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:57.764490 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:57.764490 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:57.765596 master-0 kubenswrapper[7776]: I0219 03:13:57.765487 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:58.762838 master-0 kubenswrapper[7776]: I0219 03:13:58.762771 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:58.762838 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:58.762838 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:58.762838 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:58.763173 master-0 kubenswrapper[7776]: I0219 03:13:58.762871 7776 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:13:59.764066 master-0 kubenswrapper[7776]: I0219 03:13:59.763966 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:13:59.764066 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:13:59.764066 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:13:59.764066 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:13:59.764066 master-0 kubenswrapper[7776]: I0219 03:13:59.764057 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:00.764096 master-0 kubenswrapper[7776]: I0219 03:14:00.764030 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:00.764096 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:00.764096 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:00.764096 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:00.764972 master-0 kubenswrapper[7776]: I0219 03:14:00.764128 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:01.764909 master-0 kubenswrapper[7776]: I0219 03:14:01.764830 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:01.764909 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:01.764909 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:01.764909 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:01.766083 master-0 kubenswrapper[7776]: I0219 03:14:01.764930 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:02.764571 master-0 kubenswrapper[7776]: I0219 03:14:02.764450 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:02.764571 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:02.764571 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:02.764571 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:02.765759 master-0 kubenswrapper[7776]: I0219 03:14:02.764584 7776 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:03.765287 master-0 kubenswrapper[7776]: I0219 03:14:03.765188 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:03.765287 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:03.765287 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:03.765287 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:03.765287 master-0 kubenswrapper[7776]: I0219 03:14:03.765285 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:04.764422 master-0 kubenswrapper[7776]: I0219 03:14:04.764290 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:04.764422 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:04.764422 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:04.764422 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:04.765332 master-0 kubenswrapper[7776]: I0219 03:14:04.764443 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:05.764558 master-0 kubenswrapper[7776]: I0219 03:14:05.764457 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:05.764558 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:05.764558 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:05.764558 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:05.764558 master-0 kubenswrapper[7776]: I0219 03:14:05.764543 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:06.763798 master-0 kubenswrapper[7776]: I0219 03:14:06.763688 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:06.763798 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:06.763798 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:06.763798 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:06.763798 master-0 kubenswrapper[7776]: I0219 03:14:06.763785 7776 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:06.843406 master-0 kubenswrapper[7776]: I0219 03:14:06.843240 7776 scope.go:117] "RemoveContainer" containerID="dd7689baa5f861f7257ae1362b57579e948c67a0b070c3f9a54450993d72b02e" Feb 19 03:14:06.843725 master-0 kubenswrapper[7776]: E0219 03:14:06.843666 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:14:07.764300 master-0 kubenswrapper[7776]: I0219 03:14:07.764153 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:07.764300 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:07.764300 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:07.764300 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:07.764300 master-0 kubenswrapper[7776]: I0219 03:14:07.764291 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:08.764076 master-0 kubenswrapper[7776]: I0219 03:14:08.764006 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:08.764076 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:08.764076 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:08.764076 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:08.764076 master-0 kubenswrapper[7776]: I0219 03:14:08.764074 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:09.763289 master-0 kubenswrapper[7776]: I0219 03:14:09.763204 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:09.763289 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:09.763289 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:09.763289 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:09.763682 master-0 kubenswrapper[7776]: I0219 03:14:09.763299 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" 
podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:10.763682 master-0 kubenswrapper[7776]: I0219 03:14:10.763567 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:10.763682 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:10.763682 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:10.763682 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:10.763682 master-0 kubenswrapper[7776]: I0219 03:14:10.763675 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:11.763777 master-0 kubenswrapper[7776]: I0219 03:14:11.763667 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:11.763777 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:11.763777 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:11.763777 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:11.764973 master-0 kubenswrapper[7776]: I0219 03:14:11.763787 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:12.763918 master-0 kubenswrapper[7776]: I0219 03:14:12.763803 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:12.763918 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:12.763918 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:12.763918 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:12.763918 master-0 kubenswrapper[7776]: I0219 03:14:12.763903 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:13.764295 master-0 kubenswrapper[7776]: I0219 03:14:13.764148 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:13.764295 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:13.764295 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:13.764295 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:13.765829 master-0 kubenswrapper[7776]: I0219 03:14:13.764295 7776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:14.764192 master-0 kubenswrapper[7776]: I0219 03:14:14.764095 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:14.764192 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:14.764192 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:14.764192 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:14.764824 master-0 kubenswrapper[7776]: I0219 03:14:14.764221 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:15.765000 master-0 kubenswrapper[7776]: I0219 03:14:15.764902 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:15.765000 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:15.765000 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:15.765000 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:15.765871 master-0 kubenswrapper[7776]: I0219 03:14:15.765038 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:16.768298 master-0 kubenswrapper[7776]: I0219 03:14:16.768084 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:16.768298 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:16.768298 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:16.768298 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:16.768298 master-0 kubenswrapper[7776]: I0219 03:14:16.768216 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:17.763928 master-0 kubenswrapper[7776]: I0219 03:14:17.763863 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:17.763928 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:17.763928 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:17.763928 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:17.764191 master-0 kubenswrapper[7776]: I0219 03:14:17.763957 7776 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:17.843431 master-0 kubenswrapper[7776]: I0219 03:14:17.843332 7776 scope.go:117] "RemoveContainer" containerID="dd7689baa5f861f7257ae1362b57579e948c67a0b070c3f9a54450993d72b02e" Feb 19 03:14:18.507631 master-0 kubenswrapper[7776]: I0219 03:14:18.507570 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/4.log" Feb 19 03:14:18.508622 master-0 kubenswrapper[7776]: I0219 03:14:18.508560 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/3.log" Feb 19 03:14:18.509903 master-0 kubenswrapper[7776]: I0219 03:14:18.509825 7776 generic.go:334] "Generic (PLEG): container finished" podID="af2be4f9-f632-4a72-8f39-c96954403edc" containerID="33b908988edc1f23b7e401508114ebee2bcfbcbd665a0f033fed42762138deb6" exitCode=1 Feb 19 03:14:18.509903 master-0 kubenswrapper[7776]: I0219 03:14:18.509887 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerDied","Data":"33b908988edc1f23b7e401508114ebee2bcfbcbd665a0f033fed42762138deb6"} Feb 19 03:14:18.510186 master-0 kubenswrapper[7776]: I0219 03:14:18.509942 7776 scope.go:117] "RemoveContainer" containerID="dd7689baa5f861f7257ae1362b57579e948c67a0b070c3f9a54450993d72b02e" Feb 19 03:14:18.511120 master-0 kubenswrapper[7776]: I0219 03:14:18.511048 7776 scope.go:117] "RemoveContainer" containerID="33b908988edc1f23b7e401508114ebee2bcfbcbd665a0f033fed42762138deb6" Feb 19 03:14:18.511564 master-0 kubenswrapper[7776]: E0219 03:14:18.511497 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:14:18.763465 master-0 kubenswrapper[7776]: I0219 03:14:18.763366 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:18.763465 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:18.763465 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:18.763465 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:18.763465 master-0 kubenswrapper[7776]: I0219 03:14:18.763422 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:19.520834 master-0 
kubenswrapper[7776]: I0219 03:14:19.520784 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/4.log" Feb 19 03:14:19.762881 master-0 kubenswrapper[7776]: I0219 03:14:19.762812 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:19.762881 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:19.762881 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:19.762881 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:19.763394 master-0 kubenswrapper[7776]: I0219 03:14:19.762888 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:20.764520 master-0 kubenswrapper[7776]: I0219 03:14:20.764452 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:20.764520 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:20.764520 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:20.764520 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:20.764520 master-0 kubenswrapper[7776]: I0219 03:14:20.764517 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:21.763999 master-0 kubenswrapper[7776]: I0219 03:14:21.763933 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:21.763999 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:21.763999 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:21.763999 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:21.764316 master-0 kubenswrapper[7776]: I0219 03:14:21.764004 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:22.764140 master-0 kubenswrapper[7776]: I0219 03:14:22.764058 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:22.764140 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:22.764140 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:22.764140 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:22.764983 master-0 
kubenswrapper[7776]: I0219 03:14:22.764139 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:23.763380 master-0 kubenswrapper[7776]: I0219 03:14:23.763235 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:23.763380 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:23.763380 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:23.763380 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:23.763380 master-0 kubenswrapper[7776]: I0219 03:14:23.763357 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:24.763507 master-0 kubenswrapper[7776]: I0219 03:14:24.763416 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:24.763507 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:24.763507 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:24.763507 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:24.763507 master-0 kubenswrapper[7776]: I0219 03:14:24.763490 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:25.763352 master-0 kubenswrapper[7776]: I0219 03:14:25.763243 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:25.763352 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:25.763352 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:25.763352 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:25.763352 master-0 kubenswrapper[7776]: I0219 03:14:25.763331 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:26.763721 master-0 kubenswrapper[7776]: I0219 03:14:26.763607 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:26.763721 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:26.763721 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:26.763721 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:26.763721 
master-0 kubenswrapper[7776]: I0219 03:14:26.763706 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:27.763477 master-0 kubenswrapper[7776]: I0219 03:14:27.763421 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:27.763477 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:27.763477 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:27.763477 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:27.763774 master-0 kubenswrapper[7776]: I0219 03:14:27.763484 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:28.763163 master-0 kubenswrapper[7776]: I0219 03:14:28.763106 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:28.763163 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:28.763163 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:28.763163 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:28.763470 master-0 kubenswrapper[7776]: I0219 03:14:28.763175 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:29.763539 master-0 kubenswrapper[7776]: I0219 03:14:29.763465 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:29.763539 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:29.763539 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:29.763539 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:29.764178 master-0 kubenswrapper[7776]: I0219 03:14:29.763566 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:30.763526 master-0 kubenswrapper[7776]: I0219 03:14:30.763426 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:30.763526 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:30.763526 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:30.763526 master-0 kubenswrapper[7776]: healthz check failed Feb 19 
03:14:30.763526 master-0 kubenswrapper[7776]: I0219 03:14:30.763528 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:31.763396 master-0 kubenswrapper[7776]: I0219 03:14:31.763320 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:31.763396 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:31.763396 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:31.763396 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:31.763396 master-0 kubenswrapper[7776]: I0219 03:14:31.763385 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:32.763778 master-0 kubenswrapper[7776]: I0219 03:14:32.763678 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:32.763778 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:32.763778 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:32.763778 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:32.764407 master-0 kubenswrapper[7776]: I0219 03:14:32.763792 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:33.764102 master-0 kubenswrapper[7776]: I0219 03:14:33.764031 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:33.764102 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:33.764102 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:33.764102 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:33.764102 master-0 kubenswrapper[7776]: I0219 03:14:33.764100 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:33.847498 master-0 kubenswrapper[7776]: I0219 03:14:33.847407 7776 scope.go:117] "RemoveContainer" containerID="33b908988edc1f23b7e401508114ebee2bcfbcbd665a0f033fed42762138deb6" Feb 19 03:14:33.847795 master-0 kubenswrapper[7776]: E0219 03:14:33.847636 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy 
pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:14:34.764487 master-0 kubenswrapper[7776]: I0219 03:14:34.764424 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:34.764487 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:34.764487 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:34.764487 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:34.765168 master-0 kubenswrapper[7776]: I0219 03:14:34.764498 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:35.764063 master-0 kubenswrapper[7776]: I0219 03:14:35.763988 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:35.764063 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:35.764063 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:35.764063 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:35.764063 master-0 kubenswrapper[7776]: I0219 03:14:35.764059 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:36.764280 master-0 kubenswrapper[7776]: I0219 03:14:36.764180 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:36.764280 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:36.764280 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:36.764280 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:36.764280 master-0 kubenswrapper[7776]: I0219 03:14:36.764275 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:37.763654 master-0 kubenswrapper[7776]: I0219 03:14:37.763600 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:37.763654 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:37.763654 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:37.763654 master-0 kubenswrapper[7776]: healthz check failed Feb 19 
03:14:37.763654 master-0 kubenswrapper[7776]: I0219 03:14:37.763669 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:38.762755 master-0 kubenswrapper[7776]: I0219 03:14:38.762577 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:38.762755 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:38.762755 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:38.762755 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:38.762755 master-0 kubenswrapper[7776]: I0219 03:14:38.762642 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:39.763314 master-0 kubenswrapper[7776]: I0219 03:14:39.763233 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:39.763314 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:39.763314 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:39.763314 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:39.763985 master-0 kubenswrapper[7776]: I0219 03:14:39.763313 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:40.763014 master-0 kubenswrapper[7776]: I0219 03:14:40.762967 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:14:40.763014 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:14:40.763014 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:14:40.763014 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:14:40.763332 master-0 kubenswrapper[7776]: I0219 03:14:40.763035 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:14:40.763332 master-0 kubenswrapper[7776]: I0219 03:14:40.763086 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:14:40.763745 master-0 kubenswrapper[7776]: I0219 03:14:40.763717 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"fc23281c8544d5ae223b75148a35d1646e5aae76cd18024121c83e27448b516d"} pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" containerMessage="Container 
router failed startup probe, will be restarted" Feb 19 03:14:40.763798 master-0 kubenswrapper[7776]: I0219 03:14:40.763769 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" containerID="cri-o://fc23281c8544d5ae223b75148a35d1646e5aae76cd18024121c83e27448b516d" gracePeriod=3600 Feb 19 03:14:44.515788 master-0 kubenswrapper[7776]: E0219 03:14:44.515623 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" podUID="e2e81865-21fa-4e35-a870-738c13ac5b70" Feb 19 03:14:44.709032 master-0 kubenswrapper[7776]: I0219 03:14:44.708952 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:14:45.842995 master-0 kubenswrapper[7776]: I0219 03:14:45.842948 7776 scope.go:117] "RemoveContainer" containerID="33b908988edc1f23b7e401508114ebee2bcfbcbd665a0f033fed42762138deb6" Feb 19 03:14:45.843535 master-0 kubenswrapper[7776]: E0219 03:14:45.843174 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:14:47.706431 master-0 kubenswrapper[7776]: E0219 03:14:47.706336 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-approver-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" podUID="92804daf-1fd0-4008-afff-4f9bc362990b" Feb 19 03:14:49.404146 master-0 kubenswrapper[7776]: I0219 03:14:49.404032 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:14:49.404974 master-0 kubenswrapper[7776]: E0219 03:14:49.404244 7776 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 19 03:14:49.404974 master-0 kubenswrapper[7776]: E0219 03:14:49.404362 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. No retries permitted until 2026-02-19 03:16:51.404340363 +0000 UTC m=+717.744024871 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : secret "prometheus-operator-tls" not found Feb 19 03:14:52.655163 master-0 kubenswrapper[7776]: I0219 03:14:52.655066 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:14:52.656149 master-0 kubenswrapper[7776]: E0219 03:14:52.655283 7776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 19 03:14:52.656149 master-0 kubenswrapper[7776]: E0219 03:14:52.655400 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:16:54.655373873 +0000 UTC m=+720.995058411 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : secret "machine-approver-tls" not found Feb 19 03:15:00.176899 master-0 kubenswrapper[7776]: I0219 03:15:00.176786 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt"] Feb 19 03:15:00.178520 master-0 kubenswrapper[7776]: I0219 03:15:00.178466 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:00.183022 master-0 kubenswrapper[7776]: I0219 03:15:00.182948 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 03:15:00.185246 master-0 kubenswrapper[7776]: I0219 03:15:00.185216 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-7hhvr" Feb 19 03:15:00.188497 master-0 kubenswrapper[7776]: I0219 03:15:00.188449 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt"] Feb 19 03:15:00.260671 master-0 kubenswrapper[7776]: I0219 03:15:00.260492 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e08a5432-b9f1-4b15-84c4-df9d6276a414-config-volume\") pod \"collect-profiles-29524515-txbbt\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:00.260671 master-0 kubenswrapper[7776]: I0219 03:15:00.260617 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkthn\" (UniqueName: \"kubernetes.io/projected/e08a5432-b9f1-4b15-84c4-df9d6276a414-kube-api-access-mkthn\") pod \"collect-profiles-29524515-txbbt\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:00.260671 master-0 kubenswrapper[7776]: I0219 03:15:00.260669 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e08a5432-b9f1-4b15-84c4-df9d6276a414-secret-volume\") pod \"collect-profiles-29524515-txbbt\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:00.362106 master-0 kubenswrapper[7776]: I0219 03:15:00.361990 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e08a5432-b9f1-4b15-84c4-df9d6276a414-config-volume\") pod \"collect-profiles-29524515-txbbt\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:00.362106 master-0 kubenswrapper[7776]: I0219 03:15:00.362091 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkthn\" (UniqueName: \"kubernetes.io/projected/e08a5432-b9f1-4b15-84c4-df9d6276a414-kube-api-access-mkthn\") pod \"collect-profiles-29524515-txbbt\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:00.362639 master-0 kubenswrapper[7776]: I0219 03:15:00.362133 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e08a5432-b9f1-4b15-84c4-df9d6276a414-secret-volume\") pod \"collect-profiles-29524515-txbbt\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:00.363792 master-0 kubenswrapper[7776]: I0219 03:15:00.363718 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/e08a5432-b9f1-4b15-84c4-df9d6276a414-config-volume\") pod \"collect-profiles-29524515-txbbt\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:00.365767 master-0 kubenswrapper[7776]: I0219 03:15:00.365713 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e08a5432-b9f1-4b15-84c4-df9d6276a414-secret-volume\") pod \"collect-profiles-29524515-txbbt\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:00.381975 master-0 kubenswrapper[7776]: I0219 03:15:00.381918 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkthn\" (UniqueName: \"kubernetes.io/projected/e08a5432-b9f1-4b15-84c4-df9d6276a414-kube-api-access-mkthn\") pod \"collect-profiles-29524515-txbbt\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:00.506124 master-0 kubenswrapper[7776]: I0219 03:15:00.505914 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:00.844826 master-0 kubenswrapper[7776]: I0219 03:15:00.844619 7776 scope.go:117] "RemoveContainer" containerID="33b908988edc1f23b7e401508114ebee2bcfbcbd665a0f033fed42762138deb6" Feb 19 03:15:00.845119 master-0 kubenswrapper[7776]: E0219 03:15:00.844915 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:15:00.944314 master-0 kubenswrapper[7776]: I0219 03:15:00.944230 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt"] Feb 19 03:15:00.953123 master-0 kubenswrapper[7776]: W0219 03:15:00.953050 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode08a5432_b9f1_4b15_84c4_df9d6276a414.slice/crio-92fdf51cd372b585439674ddd7f835c72abd8cc5f202f350b7be96246769df8c WatchSource:0}: Error finding container 92fdf51cd372b585439674ddd7f835c72abd8cc5f202f350b7be96246769df8c: Status 404 returned error can't find the container with id 92fdf51cd372b585439674ddd7f835c72abd8cc5f202f350b7be96246769df8c Feb 19 03:15:01.842148 master-0 kubenswrapper[7776]: I0219 03:15:01.842038 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:15:01.855759 master-0 kubenswrapper[7776]: I0219 03:15:01.855700 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" event={"ID":"e08a5432-b9f1-4b15-84c4-df9d6276a414","Type":"ContainerStarted","Data":"ca02b8215bf57351b97a8ecbc5b9bfa88dd85ff58f844b1b36f5d8345ce48644"} Feb 19 03:15:01.855759 master-0 kubenswrapper[7776]: I0219 03:15:01.855753 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" event={"ID":"e08a5432-b9f1-4b15-84c4-df9d6276a414","Type":"ContainerStarted","Data":"92fdf51cd372b585439674ddd7f835c72abd8cc5f202f350b7be96246769df8c"} Feb 19 03:15:01.876952 master-0 kubenswrapper[7776]: I0219 03:15:01.876856 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" podStartSLOduration=1.876836731 podStartE2EDuration="1.876836731s" podCreationTimestamp="2026-02-19 03:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:15:01.8757794 +0000 UTC m=+608.215463938" watchObservedRunningTime="2026-02-19 03:15:01.876836731 +0000 UTC m=+608.216521259" Feb 19 03:15:02.865182 master-0 kubenswrapper[7776]: I0219 03:15:02.865129 7776 generic.go:334] "Generic (PLEG): container finished" podID="e08a5432-b9f1-4b15-84c4-df9d6276a414" containerID="ca02b8215bf57351b97a8ecbc5b9bfa88dd85ff58f844b1b36f5d8345ce48644" exitCode=0 Feb 19 03:15:02.866476 master-0 kubenswrapper[7776]: I0219 03:15:02.865191 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" event={"ID":"e08a5432-b9f1-4b15-84c4-df9d6276a414","Type":"ContainerDied","Data":"ca02b8215bf57351b97a8ecbc5b9bfa88dd85ff58f844b1b36f5d8345ce48644"} Feb 19 03:15:04.170976 master-0 kubenswrapper[7776]: I0219 03:15:04.170579 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:04.318676 master-0 kubenswrapper[7776]: I0219 03:15:04.318611 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e08a5432-b9f1-4b15-84c4-df9d6276a414-secret-volume\") pod \"e08a5432-b9f1-4b15-84c4-df9d6276a414\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " Feb 19 03:15:04.318894 master-0 kubenswrapper[7776]: I0219 03:15:04.318801 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e08a5432-b9f1-4b15-84c4-df9d6276a414-config-volume\") pod \"e08a5432-b9f1-4b15-84c4-df9d6276a414\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " Feb 19 03:15:04.318950 master-0 kubenswrapper[7776]: I0219 03:15:04.318905 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkthn\" (UniqueName: \"kubernetes.io/projected/e08a5432-b9f1-4b15-84c4-df9d6276a414-kube-api-access-mkthn\") pod \"e08a5432-b9f1-4b15-84c4-df9d6276a414\" (UID: \"e08a5432-b9f1-4b15-84c4-df9d6276a414\") " Feb 19 03:15:04.320072 master-0 kubenswrapper[7776]: I0219 03:15:04.320003 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e08a5432-b9f1-4b15-84c4-df9d6276a414-config-volume" (OuterVolumeSpecName: "config-volume") pod "e08a5432-b9f1-4b15-84c4-df9d6276a414" (UID: "e08a5432-b9f1-4b15-84c4-df9d6276a414"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:15:04.322508 master-0 kubenswrapper[7776]: I0219 03:15:04.322451 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e08a5432-b9f1-4b15-84c4-df9d6276a414-kube-api-access-mkthn" (OuterVolumeSpecName: "kube-api-access-mkthn") pod "e08a5432-b9f1-4b15-84c4-df9d6276a414" (UID: "e08a5432-b9f1-4b15-84c4-df9d6276a414"). InnerVolumeSpecName "kube-api-access-mkthn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:15:04.323941 master-0 kubenswrapper[7776]: I0219 03:15:04.323897 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08a5432-b9f1-4b15-84c4-df9d6276a414-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e08a5432-b9f1-4b15-84c4-df9d6276a414" (UID: "e08a5432-b9f1-4b15-84c4-df9d6276a414"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:15:04.421519 master-0 kubenswrapper[7776]: I0219 03:15:04.421331 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkthn\" (UniqueName: \"kubernetes.io/projected/e08a5432-b9f1-4b15-84c4-df9d6276a414-kube-api-access-mkthn\") on node \"master-0\" DevicePath \"\"" Feb 19 03:15:04.421519 master-0 kubenswrapper[7776]: I0219 03:15:04.421390 7776 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e08a5432-b9f1-4b15-84c4-df9d6276a414-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 19 03:15:04.421519 master-0 kubenswrapper[7776]: I0219 03:15:04.421406 7776 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e08a5432-b9f1-4b15-84c4-df9d6276a414-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 19 03:15:04.880102 master-0 kubenswrapper[7776]: I0219 03:15:04.880043 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" event={"ID":"e08a5432-b9f1-4b15-84c4-df9d6276a414","Type":"ContainerDied","Data":"92fdf51cd372b585439674ddd7f835c72abd8cc5f202f350b7be96246769df8c"} Feb 19 03:15:04.880102 master-0 kubenswrapper[7776]: I0219 03:15:04.880093 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92fdf51cd372b585439674ddd7f835c72abd8cc5f202f350b7be96246769df8c" Feb 19 03:15:04.880357 master-0 kubenswrapper[7776]: I0219 03:15:04.880136 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:15:13.846160 master-0 kubenswrapper[7776]: I0219 03:15:13.846069 7776 scope.go:117] "RemoveContainer" containerID="33b908988edc1f23b7e401508114ebee2bcfbcbd665a0f033fed42762138deb6" Feb 19 03:15:13.846922 master-0 kubenswrapper[7776]: E0219 03:15:13.846290 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:15:26.843552 master-0 kubenswrapper[7776]: I0219 03:15:26.843476 7776 scope.go:117] "RemoveContainer" containerID="33b908988edc1f23b7e401508114ebee2bcfbcbd665a0f033fed42762138deb6" Feb 19 03:15:26.844122 master-0 kubenswrapper[7776]: E0219 03:15:26.843820 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:15:27.011777 master-0 kubenswrapper[7776]: I0219 03:15:27.011710 7776 generic.go:334] "Generic (PLEG): container finished" podID="76470062-ab83-47ed-a669-deeb71996548" containerID="fc23281c8544d5ae223b75148a35d1646e5aae76cd18024121c83e27448b516d" exitCode=0 Feb 19 
03:15:27.011777 master-0 kubenswrapper[7776]: I0219 03:15:27.011758 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" event={"ID":"76470062-ab83-47ed-a669-deeb71996548","Type":"ContainerDied","Data":"fc23281c8544d5ae223b75148a35d1646e5aae76cd18024121c83e27448b516d"} Feb 19 03:15:28.020375 master-0 kubenswrapper[7776]: I0219 03:15:28.020242 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" event={"ID":"76470062-ab83-47ed-a669-deeb71996548","Type":"ContainerStarted","Data":"a9877e6164fd70e4cefb580b5faf9495b5d88f56b0eabc9be1b0d949563be3bd"} Feb 19 03:15:28.761329 master-0 kubenswrapper[7776]: I0219 03:15:28.761225 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:15:28.764902 master-0 kubenswrapper[7776]: I0219 03:15:28.764820 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:28.764902 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:28.764902 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:28.764902 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:28.765232 master-0 kubenswrapper[7776]: I0219 03:15:28.764927 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:29.030321 master-0 kubenswrapper[7776]: I0219 03:15:29.030103 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/2.log" Feb 19 03:15:29.031309 master-0 kubenswrapper[7776]: I0219 03:15:29.030826 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/1.log" Feb 19 03:15:29.031309 master-0 kubenswrapper[7776]: I0219 03:15:29.031231 7776 generic.go:334] "Generic (PLEG): container finished" podID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" containerID="0231cbf4aca758c9932d6803291cfbb4b285c17a3486513b446f06ffa1a001c4" exitCode=1 Feb 19 03:15:29.031517 master-0 kubenswrapper[7776]: I0219 03:15:29.031312 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerDied","Data":"0231cbf4aca758c9932d6803291cfbb4b285c17a3486513b446f06ffa1a001c4"} Feb 19 03:15:29.031517 master-0 kubenswrapper[7776]: I0219 03:15:29.031416 7776 scope.go:117] "RemoveContainer" containerID="4a1578bce100ddf52237ceaea2572cac0b7ea648901d8dde9625de51a4236ef1" Feb 19 03:15:29.031999 master-0 kubenswrapper[7776]: I0219 03:15:29.031958 7776 scope.go:117] "RemoveContainer" containerID="0231cbf4aca758c9932d6803291cfbb4b285c17a3486513b446f06ffa1a001c4" Feb 19 03:15:29.032338 master-0 kubenswrapper[7776]: E0219 03:15:29.032244 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator 
pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:15:29.763970 master-0 kubenswrapper[7776]: I0219 03:15:29.763875 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:29.763970 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:29.763970 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:29.763970 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:29.763970 master-0 kubenswrapper[7776]: I0219 03:15:29.763962 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:30.040674 master-0 kubenswrapper[7776]: I0219 03:15:30.040510 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/2.log" Feb 19 03:15:30.763961 master-0 kubenswrapper[7776]: I0219 03:15:30.763850 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:30.763961 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:30.763961 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:30.763961 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:30.764442 master-0 kubenswrapper[7776]: I0219 03:15:30.763970 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:31.764215 master-0 kubenswrapper[7776]: I0219 03:15:31.764093 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:31.764215 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:31.764215 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:31.764215 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:31.764899 master-0 kubenswrapper[7776]: I0219 03:15:31.764246 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:32.764042 master-0 kubenswrapper[7776]: I0219 03:15:32.763956 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:32.764042 master-0 kubenswrapper[7776]: [-]has-synced failed: 
reason withheld Feb 19 03:15:32.764042 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:32.764042 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:32.764849 master-0 kubenswrapper[7776]: I0219 03:15:32.764043 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:33.763053 master-0 kubenswrapper[7776]: I0219 03:15:33.762992 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:33.763053 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:33.763053 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:33.763053 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:33.763512 master-0 kubenswrapper[7776]: I0219 03:15:33.763068 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:34.764200 master-0 kubenswrapper[7776]: I0219 03:15:34.764110 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:34.764200 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:34.764200 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:34.764200 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:34.765150 master-0 kubenswrapper[7776]: I0219 03:15:34.764238 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:35.763633 master-0 kubenswrapper[7776]: I0219 03:15:35.763563 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:35.763633 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:35.763633 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:35.763633 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:35.764033 master-0 kubenswrapper[7776]: I0219 03:15:35.763642 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:36.763476 master-0 kubenswrapper[7776]: I0219 03:15:36.763351 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:36.763476 master-0 kubenswrapper[7776]: [-]has-synced 
failed: reason withheld Feb 19 03:15:36.763476 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:36.763476 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:36.764119 master-0 kubenswrapper[7776]: I0219 03:15:36.763554 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:37.761834 master-0 kubenswrapper[7776]: I0219 03:15:37.761775 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:15:37.764628 master-0 kubenswrapper[7776]: I0219 03:15:37.764576 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:37.764628 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:37.764628 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:37.764628 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:37.765407 master-0 kubenswrapper[7776]: I0219 03:15:37.764690 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:38.763784 master-0 kubenswrapper[7776]: I0219 03:15:38.763715 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:38.763784 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:38.763784 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:38.763784 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:38.764088 master-0 kubenswrapper[7776]: I0219 03:15:38.763803 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:39.763520 master-0 kubenswrapper[7776]: I0219 03:15:39.763435 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:39.763520 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:39.763520 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:39.763520 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:39.763520 master-0 kubenswrapper[7776]: I0219 03:15:39.763495 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:40.764815 master-0 kubenswrapper[7776]: I0219 03:15:40.764719 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:40.764815 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:40.764815 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:40.764815 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:40.766205 master-0 kubenswrapper[7776]: I0219 03:15:40.764812 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:40.843689 master-0 kubenswrapper[7776]: I0219 03:15:40.843571 7776 scope.go:117] "RemoveContainer" containerID="33b908988edc1f23b7e401508114ebee2bcfbcbd665a0f033fed42762138deb6" Feb 19 03:15:41.114011 master-0 kubenswrapper[7776]: I0219 03:15:41.113965 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/4.log" Feb 19 03:15:41.115010 master-0 kubenswrapper[7776]: I0219 03:15:41.114978 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerStarted","Data":"558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0"} Feb 19 03:15:41.763320 master-0 kubenswrapper[7776]: I0219 03:15:41.763213 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:41.763320 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:41.763320 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:41.763320 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:41.764009 master-0 kubenswrapper[7776]: I0219 03:15:41.763337 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:42.126001 master-0 kubenswrapper[7776]: I0219 03:15:42.125899 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/5.log" Feb 19 03:15:42.127036 master-0 kubenswrapper[7776]: I0219 03:15:42.126975 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/4.log" Feb 19 03:15:42.128575 master-0 kubenswrapper[7776]: I0219 03:15:42.128501 7776 generic.go:334] "Generic (PLEG): container finished" podID="af2be4f9-f632-4a72-8f39-c96954403edc" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" exitCode=1 Feb 19 03:15:42.128575 master-0 kubenswrapper[7776]: I0219 03:15:42.128561 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerDied","Data":"558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0"} Feb 19 03:15:42.128863 master-0 kubenswrapper[7776]: I0219 03:15:42.128638 7776 scope.go:117] "RemoveContainer" containerID="33b908988edc1f23b7e401508114ebee2bcfbcbd665a0f033fed42762138deb6" Feb 19 03:15:42.129946 master-0 kubenswrapper[7776]: I0219 03:15:42.129823 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:15:42.130589 master-0 kubenswrapper[7776]: E0219 03:15:42.130458 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:15:42.764972 master-0 kubenswrapper[7776]: I0219 03:15:42.764898 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:42.764972 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:42.764972 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:42.764972 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:42.765554 master-0 kubenswrapper[7776]: I0219 03:15:42.765510 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:43.138368 master-0 kubenswrapper[7776]: I0219 03:15:43.138308 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/5.log" Feb 19 03:15:43.258399 master-0 kubenswrapper[7776]: E0219 03:15:43.258334 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" podUID="33bb562f-84e7-4fcb-b008-416c09a5ecf0" Feb 19 03:15:43.258604 master-0 kubenswrapper[7776]: E0219 03:15:43.258535 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cloud-credential-operator-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" podUID="858a717b-a44e-4b8d-9974-7451a89cf104" Feb 19 03:15:43.258699 master-0 kubenswrapper[7776]: E0219 03:15:43.258619 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[samples-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" 
podUID="59cea4cb-6374-49b6-97b3-d8a19cc1860f" Feb 19 03:15:43.764320 master-0 kubenswrapper[7776]: I0219 03:15:43.764168 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:43.764320 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:43.764320 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:43.764320 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:43.764805 master-0 kubenswrapper[7776]: I0219 03:15:43.764326 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:44.147754 master-0 kubenswrapper[7776]: I0219 03:15:44.147694 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:15:44.148642 master-0 kubenswrapper[7776]: I0219 03:15:44.147774 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:15:44.148844 master-0 kubenswrapper[7776]: I0219 03:15:44.148799 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:15:44.764028 master-0 kubenswrapper[7776]: I0219 03:15:44.763961 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:44.764028 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:44.764028 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:44.764028 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:44.764627 master-0 kubenswrapper[7776]: I0219 03:15:44.764583 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:44.842729 master-0 kubenswrapper[7776]: I0219 03:15:44.842672 7776 scope.go:117] "RemoveContainer" containerID="0231cbf4aca758c9932d6803291cfbb4b285c17a3486513b446f06ffa1a001c4" Feb 19 03:15:44.842976 master-0 kubenswrapper[7776]: E0219 03:15:44.842897 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:15:45.270205 master-0 kubenswrapper[7776]: E0219 03:15:45.270057 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-api-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" 
podUID="255784ad-b52a-4c5c-ad15-278865ee2ccb" Feb 19 03:15:45.764373 master-0 kubenswrapper[7776]: I0219 03:15:45.764302 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:45.764373 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:45.764373 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:45.764373 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:45.764672 master-0 kubenswrapper[7776]: I0219 03:15:45.764391 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:46.159805 master-0 kubenswrapper[7776]: I0219 03:15:46.159749 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:15:46.763434 master-0 kubenswrapper[7776]: I0219 03:15:46.763375 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:46.763434 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:46.763434 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:46.763434 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:46.764075 master-0 kubenswrapper[7776]: I0219 03:15:46.763445 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:47.073384 master-0 kubenswrapper[7776]: I0219 03:15:47.073135 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:15:47.073384 master-0 kubenswrapper[7776]: E0219 03:15:47.073352 7776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 19 03:15:47.073981 master-0 kubenswrapper[7776]: E0219 03:15:47.073462 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert podName:33bb562f-84e7-4fcb-b008-416c09a5ecf0 nodeName:}" failed. No retries permitted until 2026-02-19 03:17:49.073434248 +0000 UTC m=+775.413118766 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert") pod "cluster-autoscaler-operator-86b8dc6d6-pd8lj" (UID: "33bb562f-84e7-4fcb-b008-416c09a5ecf0") : secret "cluster-autoscaler-operator-cert" not found Feb 19 03:15:47.174764 master-0 kubenswrapper[7776]: I0219 03:15:47.174700 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:15:47.174972 master-0 kubenswrapper[7776]: I0219 03:15:47.174795 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:15:47.174972 master-0 kubenswrapper[7776]: E0219 03:15:47.174912 7776 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 19 03:15:47.175188 master-0 kubenswrapper[7776]: E0219 03:15:47.175139 7776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 19 03:15:47.175355 master-0 kubenswrapper[7776]: E0219 03:15:47.175303 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls podName:59cea4cb-6374-49b6-97b3-d8a19cc1860f nodeName:}" failed. No retries permitted until 2026-02-19 03:17:49.17523844 +0000 UTC m=+775.514923008 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-hl874" (UID: "59cea4cb-6374-49b6-97b3-d8a19cc1860f") : secret "samples-operator-tls" not found Feb 19 03:15:47.175405 master-0 kubenswrapper[7776]: E0219 03:15:47.175382 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:17:49.175363874 +0000 UTC m=+775.515048512 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : secret "cloud-credential-operator-serving-cert" not found Feb 19 03:15:47.764763 master-0 kubenswrapper[7776]: I0219 03:15:47.764645 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:47.764763 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:47.764763 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:47.764763 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:47.764763 master-0 kubenswrapper[7776]: I0219 03:15:47.764727 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:48.764587 master-0 kubenswrapper[7776]: I0219 03:15:48.764491 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:48.764587 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:48.764587 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:48.764587 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:48.765777 master-0 kubenswrapper[7776]: I0219 03:15:48.764587 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:48.801124 master-0 kubenswrapper[7776]: I0219 03:15:48.801026 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:15:48.801436 master-0 kubenswrapper[7776]: E0219 03:15:48.801219 7776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 19 03:15:48.801436 master-0 kubenswrapper[7776]: E0219 03:15:48.801307 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:17:50.801289795 +0000 UTC m=+777.140974323 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : secret "machine-api-operator-tls" not found Feb 19 03:15:49.763135 master-0 kubenswrapper[7776]: I0219 03:15:49.763044 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:49.763135 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:49.763135 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:49.763135 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:49.763450 master-0 kubenswrapper[7776]: I0219 03:15:49.763140 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:50.763746 master-0 kubenswrapper[7776]: I0219 03:15:50.763649 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:50.763746 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:50.763746 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:50.763746 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:50.764480 master-0 kubenswrapper[7776]: I0219 03:15:50.763754 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:51.762532 master-0 kubenswrapper[7776]: I0219 03:15:51.762427 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:51.762532 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:51.762532 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:51.762532 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:51.762532 master-0 kubenswrapper[7776]: I0219 03:15:51.762487 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:52.763679 master-0 kubenswrapper[7776]: I0219 03:15:52.763562 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:52.763679 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:52.763679 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:52.763679 master-0 kubenswrapper[7776]: healthz check failed Feb 19 
03:15:52.764812 master-0 kubenswrapper[7776]: I0219 03:15:52.763679 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:53.763599 master-0 kubenswrapper[7776]: I0219 03:15:53.763480 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:53.763599 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:53.763599 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:53.763599 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:53.763599 master-0 kubenswrapper[7776]: I0219 03:15:53.763542 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:54.764910 master-0 kubenswrapper[7776]: I0219 03:15:54.764826 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:54.764910 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:54.764910 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:54.764910 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:54.766050 master-0 kubenswrapper[7776]: I0219 03:15:54.764944 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:54.843553 master-0 kubenswrapper[7776]: I0219 03:15:54.843475 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:15:54.843890 master-0 kubenswrapper[7776]: E0219 03:15:54.843823 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:15:55.763559 master-0 kubenswrapper[7776]: I0219 03:15:55.763455 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:55.763559 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:55.763559 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:55.763559 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:55.763559 master-0 kubenswrapper[7776]: I0219 03:15:55.763550 7776 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:56.763407 master-0 kubenswrapper[7776]: I0219 03:15:56.763342 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:56.763407 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:56.763407 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:56.763407 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:56.763914 master-0 kubenswrapper[7776]: I0219 03:15:56.763419 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:57.763893 master-0 kubenswrapper[7776]: I0219 03:15:57.763824 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:57.763893 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:57.763893 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:57.763893 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:57.763893 master-0 kubenswrapper[7776]: I0219 03:15:57.763883 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:58.764939 master-0 kubenswrapper[7776]: I0219 03:15:58.764855 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:58.764939 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:58.764939 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:58.764939 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:58.766409 master-0 kubenswrapper[7776]: I0219 03:15:58.764966 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:59.763199 master-0 kubenswrapper[7776]: I0219 03:15:59.763114 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:15:59.763199 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:15:59.763199 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:15:59.763199 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:15:59.763199 master-0 kubenswrapper[7776]: I0219 03:15:59.763185 7776 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:15:59.843057 master-0 kubenswrapper[7776]: I0219 03:15:59.842991 7776 scope.go:117] "RemoveContainer" containerID="0231cbf4aca758c9932d6803291cfbb4b285c17a3486513b446f06ffa1a001c4" Feb 19 03:16:00.243961 master-0 kubenswrapper[7776]: I0219 03:16:00.243859 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/2.log" Feb 19 03:16:00.244374 master-0 kubenswrapper[7776]: I0219 03:16:00.244322 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerStarted","Data":"1f1abc6b28b9c5fc6a345c0dc375481a87aee8246eff359206608d83aec4c1c1"} Feb 19 03:16:00.763150 master-0 kubenswrapper[7776]: I0219 03:16:00.763085 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:00.763150 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:00.763150 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:00.763150 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:00.763500 master-0 kubenswrapper[7776]: I0219 03:16:00.763162 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:01.763606 master-0 kubenswrapper[7776]: I0219 03:16:01.763506 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:01.763606 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:01.763606 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:01.763606 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:01.764644 master-0 kubenswrapper[7776]: I0219 03:16:01.763605 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:02.763973 master-0 kubenswrapper[7776]: I0219 03:16:02.763888 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:02.763973 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:02.763973 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:02.763973 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:02.764961 master-0 kubenswrapper[7776]: I0219 03:16:02.763979 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" 
podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:03.763734 master-0 kubenswrapper[7776]: I0219 03:16:03.763649 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:03.763734 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:03.763734 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:03.763734 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:03.764484 master-0 kubenswrapper[7776]: I0219 03:16:03.763753 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:04.763875 master-0 kubenswrapper[7776]: I0219 03:16:04.763758 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:04.763875 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:04.763875 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:04.763875 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:04.763875 master-0 kubenswrapper[7776]: I0219 03:16:04.763858 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:05.763426 master-0 kubenswrapper[7776]: I0219 03:16:05.763336 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:05.763426 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:05.763426 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:05.763426 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:05.763763 master-0 kubenswrapper[7776]: I0219 03:16:05.763440 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:06.764428 master-0 kubenswrapper[7776]: I0219 03:16:06.764356 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:06.764428 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:06.764428 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:06.764428 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:06.764428 master-0 kubenswrapper[7776]: I0219 03:16:06.764418 7776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:07.764390 master-0 kubenswrapper[7776]: I0219 03:16:07.764206 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:07.764390 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:07.764390 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:07.764390 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:07.765399 master-0 kubenswrapper[7776]: I0219 03:16:07.764430 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:07.843331 master-0 kubenswrapper[7776]: I0219 03:16:07.843251 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:16:07.843532 master-0 kubenswrapper[7776]: E0219 03:16:07.843507 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:16:08.763432 master-0 kubenswrapper[7776]: I0219 03:16:08.763340 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:08.763432 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:08.763432 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:08.763432 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:08.763432 master-0 kubenswrapper[7776]: I0219 03:16:08.763402 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:09.764012 master-0 kubenswrapper[7776]: I0219 03:16:09.763910 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:09.764012 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:09.764012 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:09.764012 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:09.765380 master-0 kubenswrapper[7776]: I0219 03:16:09.764011 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:10.763518 master-0 kubenswrapper[7776]: I0219 03:16:10.763436 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:10.763518 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:10.763518 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:10.763518 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:10.763891 master-0 kubenswrapper[7776]: I0219 03:16:10.763523 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:11.763680 master-0 kubenswrapper[7776]: I0219 03:16:11.763609 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:11.763680 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:11.763680 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:11.763680 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:11.764295 master-0 kubenswrapper[7776]: I0219 03:16:11.763696 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:12.764237 master-0 kubenswrapper[7776]: I0219 03:16:12.764125 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:12.764237 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:12.764237 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:12.764237 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:12.765403 master-0 kubenswrapper[7776]: I0219 03:16:12.764238 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:13.763532 master-0 kubenswrapper[7776]: I0219 03:16:13.763479 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:13.763532 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:13.763532 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:13.763532 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:13.763794 master-0 kubenswrapper[7776]: I0219 03:16:13.763565 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" 
podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:14.764140 master-0 kubenswrapper[7776]: I0219 03:16:14.764083 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:14.764140 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:14.764140 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:14.764140 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:14.764816 master-0 kubenswrapper[7776]: I0219 03:16:14.764758 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:15.763845 master-0 kubenswrapper[7776]: I0219 03:16:15.763754 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:15.763845 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:15.763845 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:15.763845 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:15.763845 master-0 kubenswrapper[7776]: I0219 03:16:15.763823 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:16.763850 master-0 kubenswrapper[7776]: I0219 03:16:16.763698 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:16.763850 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:16.763850 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:16.763850 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:16.763850 master-0 kubenswrapper[7776]: I0219 03:16:16.763786 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:17.764669 master-0 kubenswrapper[7776]: I0219 03:16:17.764576 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:17.764669 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:17.764669 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:17.764669 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:17.765409 master-0 kubenswrapper[7776]: I0219 03:16:17.764673 7776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:18.765571 master-0 kubenswrapper[7776]: I0219 03:16:18.765481 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:18.765571 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:18.765571 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:18.765571 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:18.766352 master-0 kubenswrapper[7776]: I0219 03:16:18.765608 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:19.763297 master-0 kubenswrapper[7776]: I0219 03:16:19.763169 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:19.763297 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:19.763297 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:19.763297 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:19.763711 master-0 kubenswrapper[7776]: I0219 03:16:19.763322 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:20.765668 master-0 kubenswrapper[7776]: I0219 03:16:20.765585 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:20.765668 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:20.765668 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:20.765668 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:20.766796 master-0 kubenswrapper[7776]: I0219 03:16:20.765696 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:21.764124 master-0 kubenswrapper[7776]: I0219 03:16:21.764005 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:21.764124 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:21.764124 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:21.764124 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:21.764124 master-0 kubenswrapper[7776]: I0219 03:16:21.764089 7776 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:21.843792 master-0 kubenswrapper[7776]: I0219 03:16:21.843681 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:16:21.844671 master-0 kubenswrapper[7776]: E0219 03:16:21.844099 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:16:22.764746 master-0 kubenswrapper[7776]: I0219 03:16:22.764660 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:22.764746 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:22.764746 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:22.764746 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:22.764746 master-0 kubenswrapper[7776]: I0219 03:16:22.764733 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:23.764091 master-0 kubenswrapper[7776]: I0219 03:16:23.763913 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:23.764091 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:23.764091 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:23.764091 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:23.764091 master-0 kubenswrapper[7776]: I0219 03:16:23.764050 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:24.763483 master-0 kubenswrapper[7776]: I0219 03:16:24.763433 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:24.763483 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:24.763483 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:24.763483 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:24.763747 master-0 kubenswrapper[7776]: I0219 03:16:24.763500 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" 
podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:25.763562 master-0 kubenswrapper[7776]: I0219 03:16:25.763497 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:25.763562 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:25.763562 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:25.763562 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:25.764106 master-0 kubenswrapper[7776]: I0219 03:16:25.763584 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:26.764544 master-0 kubenswrapper[7776]: I0219 03:16:26.764424 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:26.764544 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:26.764544 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:26.764544 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:26.765544 master-0 kubenswrapper[7776]: I0219 03:16:26.764563 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:26.865442 master-0 kubenswrapper[7776]: I0219 03:16:26.865305 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bbwkg"] Feb 19 03:16:26.865824 master-0 kubenswrapper[7776]: E0219 03:16:26.865772 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e08a5432-b9f1-4b15-84c4-df9d6276a414" containerName="collect-profiles" Feb 19 03:16:26.865824 master-0 kubenswrapper[7776]: I0219 03:16:26.865800 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e08a5432-b9f1-4b15-84c4-df9d6276a414" containerName="collect-profiles" Feb 19 03:16:26.865996 master-0 kubenswrapper[7776]: I0219 03:16:26.865941 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e08a5432-b9f1-4b15-84c4-df9d6276a414" containerName="collect-profiles" Feb 19 03:16:26.866654 master-0 kubenswrapper[7776]: I0219 03:16:26.866612 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:26.868830 master-0 kubenswrapper[7776]: I0219 03:16:26.868763 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-xq85v" Feb 19 03:16:26.868830 master-0 kubenswrapper[7776]: I0219 03:16:26.868810 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 03:16:26.871015 master-0 kubenswrapper[7776]: I0219 03:16:26.870543 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 03:16:26.871015 master-0 kubenswrapper[7776]: I0219 03:16:26.870935 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 03:16:26.879579 master-0 kubenswrapper[7776]: I0219 03:16:26.879492 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bbwkg"] Feb 19 03:16:26.984601 master-0 kubenswrapper[7776]: I0219 03:16:26.984541 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9zww\" (UniqueName: \"kubernetes.io/projected/a676c43c-4e0a-4826-86c1-288260611b09-kube-api-access-p9zww\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:26.985294 master-0 kubenswrapper[7776]: I0219 03:16:26.984695 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:27.086593 master-0 kubenswrapper[7776]: I0219 03:16:27.086483 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9zww\" (UniqueName: \"kubernetes.io/projected/a676c43c-4e0a-4826-86c1-288260611b09-kube-api-access-p9zww\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:27.086958 master-0 kubenswrapper[7776]: I0219 03:16:27.086897 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:27.087084 master-0 kubenswrapper[7776]: E0219 03:16:27.087063 7776 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 19 03:16:27.087139 master-0 kubenswrapper[7776]: E0219 03:16:27.087129 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert podName:a676c43c-4e0a-4826-86c1-288260611b09 nodeName:}" failed. No retries permitted until 2026-02-19 03:16:27.587111471 +0000 UTC m=+693.926795989 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert") pod "ingress-canary-bbwkg" (UID: "a676c43c-4e0a-4826-86c1-288260611b09") : secret "canary-serving-cert" not found Feb 19 03:16:27.116488 master-0 kubenswrapper[7776]: I0219 03:16:27.116171 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9zww\" (UniqueName: \"kubernetes.io/projected/a676c43c-4e0a-4826-86c1-288260611b09-kube-api-access-p9zww\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:27.594618 master-0 kubenswrapper[7776]: I0219 03:16:27.594516 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:27.594933 master-0 kubenswrapper[7776]: E0219 03:16:27.594694 7776 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 19 03:16:27.594933 master-0 kubenswrapper[7776]: E0219 03:16:27.594794 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert podName:a676c43c-4e0a-4826-86c1-288260611b09 nodeName:}" failed. No retries permitted until 2026-02-19 03:16:28.594767958 +0000 UTC m=+694.934452516 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert") pod "ingress-canary-bbwkg" (UID: "a676c43c-4e0a-4826-86c1-288260611b09") : secret "canary-serving-cert" not found Feb 19 03:16:27.763614 master-0 kubenswrapper[7776]: I0219 03:16:27.763554 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:27.763614 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:27.763614 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:27.763614 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:27.763614 master-0 kubenswrapper[7776]: I0219 03:16:27.763612 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:28.613166 master-0 kubenswrapper[7776]: I0219 03:16:28.613024 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:28.614193 master-0 kubenswrapper[7776]: E0219 03:16:28.613401 7776 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 19 03:16:28.614193 master-0 kubenswrapper[7776]: E0219 03:16:28.613551 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert 
podName:a676c43c-4e0a-4826-86c1-288260611b09 nodeName:}" failed. No retries permitted until 2026-02-19 03:16:30.613511964 +0000 UTC m=+696.953196522 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert") pod "ingress-canary-bbwkg" (UID: "a676c43c-4e0a-4826-86c1-288260611b09") : secret "canary-serving-cert" not found Feb 19 03:16:28.764411 master-0 kubenswrapper[7776]: I0219 03:16:28.764334 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:28.764411 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:28.764411 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:28.764411 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:28.764825 master-0 kubenswrapper[7776]: I0219 03:16:28.764434 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:29.764344 master-0 kubenswrapper[7776]: I0219 03:16:29.764234 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:29.764344 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:29.764344 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:29.764344 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:29.764344 master-0 kubenswrapper[7776]: I0219 03:16:29.764331 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:30.648164 master-0 kubenswrapper[7776]: I0219 03:16:30.648066 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:30.648488 master-0 kubenswrapper[7776]: E0219 03:16:30.648319 7776 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 19 03:16:30.648488 master-0 kubenswrapper[7776]: E0219 03:16:30.648446 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert podName:a676c43c-4e0a-4826-86c1-288260611b09 nodeName:}" failed. No retries permitted until 2026-02-19 03:16:34.64841491 +0000 UTC m=+700.988099488 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert") pod "ingress-canary-bbwkg" (UID: "a676c43c-4e0a-4826-86c1-288260611b09") : secret "canary-serving-cert" not found Feb 19 03:16:30.763906 master-0 kubenswrapper[7776]: I0219 03:16:30.763802 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:30.763906 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:30.763906 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:30.763906 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:30.765053 master-0 kubenswrapper[7776]: I0219 03:16:30.763909 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:31.764018 master-0 kubenswrapper[7776]: I0219 03:16:31.763933 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:31.764018 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:31.764018 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:31.764018 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:31.765087 master-0 kubenswrapper[7776]: I0219 03:16:31.764032 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:32.764652 master-0 kubenswrapper[7776]: I0219 03:16:32.764538 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:32.764652 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:32.764652 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:32.764652 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:32.765740 master-0 kubenswrapper[7776]: I0219 03:16:32.764675 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:33.763499 master-0 kubenswrapper[7776]: I0219 03:16:33.763416 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:33.763499 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:33.763499 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:33.763499 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:33.763499 master-0 kubenswrapper[7776]: I0219 03:16:33.763491 
7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:33.850086 master-0 kubenswrapper[7776]: I0219 03:16:33.849984 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:16:33.851041 master-0 kubenswrapper[7776]: E0219 03:16:33.850340 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:16:34.713428 master-0 kubenswrapper[7776]: I0219 03:16:34.713336 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:34.713746 master-0 kubenswrapper[7776]: E0219 03:16:34.713502 7776 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 19 03:16:34.713746 master-0 kubenswrapper[7776]: E0219 03:16:34.713557 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert podName:a676c43c-4e0a-4826-86c1-288260611b09 nodeName:}" failed. No retries permitted until 2026-02-19 03:16:42.713542737 +0000 UTC m=+709.053227255 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert") pod "ingress-canary-bbwkg" (UID: "a676c43c-4e0a-4826-86c1-288260611b09") : secret "canary-serving-cert" not found Feb 19 03:16:34.764007 master-0 kubenswrapper[7776]: I0219 03:16:34.763921 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:34.764007 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:34.764007 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:34.764007 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:34.764314 master-0 kubenswrapper[7776]: I0219 03:16:34.764020 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:35.764908 master-0 kubenswrapper[7776]: I0219 03:16:35.764778 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:35.764908 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:35.764908 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:35.764908 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:35.764908 master-0 kubenswrapper[7776]: I0219 03:16:35.764893 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:36.764109 master-0 kubenswrapper[7776]: I0219 03:16:36.764008 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:36.764109 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:36.764109 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:36.764109 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:36.764596 master-0 kubenswrapper[7776]: I0219 03:16:36.764110 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:37.763336 master-0 kubenswrapper[7776]: I0219 03:16:37.763288 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:37.763336 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:37.763336 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:37.763336 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:37.764064 master-0 kubenswrapper[7776]: I0219 03:16:37.764034 
7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:38.764509 master-0 kubenswrapper[7776]: I0219 03:16:38.764365 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:38.764509 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:38.764509 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:38.764509 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:38.765816 master-0 kubenswrapper[7776]: I0219 03:16:38.764506 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:39.763581 master-0 kubenswrapper[7776]: I0219 03:16:39.763519 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:39.763581 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:39.763581 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:39.763581 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:39.763895 master-0 kubenswrapper[7776]: I0219 03:16:39.763582 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:40.764476 master-0 kubenswrapper[7776]: I0219 03:16:40.764336 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:40.764476 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:40.764476 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:40.764476 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:40.765655 master-0 kubenswrapper[7776]: I0219 03:16:40.764495 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:41.117106 master-0 kubenswrapper[7776]: I0219 03:16:41.116927 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j"] Feb 19 03:16:41.117407 master-0 kubenswrapper[7776]: I0219 03:16:41.117247 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" podUID="92b9ea7b-01b1-48f8-a392-12200f55502e" containerName="controller-manager" containerID="cri-o://476fc086e4c133ead58fc958b5e8c61b6a7e9e1ccc96dcde9038878f8f7dbc2a" gracePeriod=30 Feb 19 03:16:41.160570 
master-0 kubenswrapper[7776]: I0219 03:16:41.160510 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp"] Feb 19 03:16:41.160846 master-0 kubenswrapper[7776]: I0219 03:16:41.160780 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" podUID="ac7a5635-30b4-4076-babb-db1abd26da88" containerName="route-controller-manager" containerID="cri-o://30c30ae58bac1ba564b708437a7988f71fa6bcce49d387d7985db2d5834df1d5" gracePeriod=30 Feb 19 03:16:41.516589 master-0 kubenswrapper[7776]: I0219 03:16:41.516543 7776 generic.go:334] "Generic (PLEG): container finished" podID="92b9ea7b-01b1-48f8-a392-12200f55502e" containerID="476fc086e4c133ead58fc958b5e8c61b6a7e9e1ccc96dcde9038878f8f7dbc2a" exitCode=0 Feb 19 03:16:41.516825 master-0 kubenswrapper[7776]: I0219 03:16:41.516630 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" event={"ID":"92b9ea7b-01b1-48f8-a392-12200f55502e","Type":"ContainerDied","Data":"476fc086e4c133ead58fc958b5e8c61b6a7e9e1ccc96dcde9038878f8f7dbc2a"} Feb 19 03:16:41.518689 master-0 kubenswrapper[7776]: I0219 03:16:41.518654 7776 generic.go:334] "Generic (PLEG): container finished" podID="ac7a5635-30b4-4076-babb-db1abd26da88" containerID="30c30ae58bac1ba564b708437a7988f71fa6bcce49d387d7985db2d5834df1d5" exitCode=0 Feb 19 03:16:41.518780 master-0 kubenswrapper[7776]: I0219 03:16:41.518704 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" event={"ID":"ac7a5635-30b4-4076-babb-db1abd26da88","Type":"ContainerDied","Data":"30c30ae58bac1ba564b708437a7988f71fa6bcce49d387d7985db2d5834df1d5"} Feb 19 03:16:41.518780 master-0 kubenswrapper[7776]: I0219 03:16:41.518763 7776 scope.go:117] "RemoveContainer" containerID="28e9a6d187a12869ec261835ca18a693541d1e5178c38a94171dac51f3ea3706" Feb 19 03:16:41.572928 master-0 kubenswrapper[7776]: I0219 03:16:41.572809 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:16:41.580677 master-0 kubenswrapper[7776]: E0219 03:16:41.580079 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e9a6d187a12869ec261835ca18a693541d1e5178c38a94171dac51f3ea3706\": container with ID starting with 28e9a6d187a12869ec261835ca18a693541d1e5178c38a94171dac51f3ea3706 not found: ID does not exist" containerID="28e9a6d187a12869ec261835ca18a693541d1e5178c38a94171dac51f3ea3706" Feb 19 03:16:41.580677 master-0 kubenswrapper[7776]: I0219 03:16:41.580209 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:16:41.723955 master-0 kubenswrapper[7776]: I0219 03:16:41.723794 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-config\") pod \"92b9ea7b-01b1-48f8-a392-12200f55502e\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " Feb 19 03:16:41.723955 master-0 kubenswrapper[7776]: I0219 03:16:41.723910 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-client-ca\") pod \"ac7a5635-30b4-4076-babb-db1abd26da88\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " Feb 19 03:16:41.723955 master-0 kubenswrapper[7776]: I0219 03:16:41.723931 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92b9ea7b-01b1-48f8-a392-12200f55502e-serving-cert\") pod \"92b9ea7b-01b1-48f8-a392-12200f55502e\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " Feb 19 03:16:41.723955 master-0 kubenswrapper[7776]: I0219 03:16:41.723949 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-proxy-ca-bundles\") pod \"92b9ea7b-01b1-48f8-a392-12200f55502e\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " Feb 19 03:16:41.724333 master-0 kubenswrapper[7776]: I0219 03:16:41.723999 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac7a5635-30b4-4076-babb-db1abd26da88-serving-cert\") pod \"ac7a5635-30b4-4076-babb-db1abd26da88\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " Feb 19 03:16:41.724333 master-0 kubenswrapper[7776]: I0219 03:16:41.724020 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz4j7\" (UniqueName: \"kubernetes.io/projected/ac7a5635-30b4-4076-babb-db1abd26da88-kube-api-access-pz4j7\") pod \"ac7a5635-30b4-4076-babb-db1abd26da88\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " Feb 19 03:16:41.724333 master-0 kubenswrapper[7776]: I0219 03:16:41.724046 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzzzx\" (UniqueName: \"kubernetes.io/projected/92b9ea7b-01b1-48f8-a392-12200f55502e-kube-api-access-qzzzx\") pod \"92b9ea7b-01b1-48f8-a392-12200f55502e\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " Feb 19 03:16:41.724333 master-0 kubenswrapper[7776]: I0219 03:16:41.724065 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-client-ca\") pod \"92b9ea7b-01b1-48f8-a392-12200f55502e\" (UID: \"92b9ea7b-01b1-48f8-a392-12200f55502e\") " Feb 19 03:16:41.725151 master-0 kubenswrapper[7776]: I0219 03:16:41.724515 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-client-ca" (OuterVolumeSpecName: "client-ca") pod "92b9ea7b-01b1-48f8-a392-12200f55502e" (UID: "92b9ea7b-01b1-48f8-a392-12200f55502e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:16:41.725151 master-0 kubenswrapper[7776]: I0219 03:16:41.724601 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-client-ca" (OuterVolumeSpecName: "client-ca") pod "ac7a5635-30b4-4076-babb-db1abd26da88" (UID: "ac7a5635-30b4-4076-babb-db1abd26da88"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:16:41.725151 master-0 kubenswrapper[7776]: I0219 03:16:41.724932 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-config\") pod \"ac7a5635-30b4-4076-babb-db1abd26da88\" (UID: \"ac7a5635-30b4-4076-babb-db1abd26da88\") " Feb 19 03:16:41.725151 master-0 kubenswrapper[7776]: I0219 03:16:41.725076 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "92b9ea7b-01b1-48f8-a392-12200f55502e" (UID: "92b9ea7b-01b1-48f8-a392-12200f55502e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:16:41.725570 master-0 kubenswrapper[7776]: I0219 03:16:41.725515 7776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:16:41.725570 master-0 kubenswrapper[7776]: I0219 03:16:41.725543 7776 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 19 03:16:41.725570 master-0 kubenswrapper[7776]: I0219 03:16:41.725555 7776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:16:41.726184 master-0 kubenswrapper[7776]: I0219 03:16:41.725836 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-config" (OuterVolumeSpecName: "config") pod "ac7a5635-30b4-4076-babb-db1abd26da88" (UID: "ac7a5635-30b4-4076-babb-db1abd26da88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:16:41.726341 master-0 kubenswrapper[7776]: I0219 03:16:41.726320 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-config" (OuterVolumeSpecName: "config") pod "92b9ea7b-01b1-48f8-a392-12200f55502e" (UID: "92b9ea7b-01b1-48f8-a392-12200f55502e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:16:41.727287 master-0 kubenswrapper[7776]: I0219 03:16:41.727175 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b9ea7b-01b1-48f8-a392-12200f55502e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "92b9ea7b-01b1-48f8-a392-12200f55502e" (UID: "92b9ea7b-01b1-48f8-a392-12200f55502e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:16:41.727788 master-0 kubenswrapper[7776]: I0219 03:16:41.727743 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b9ea7b-01b1-48f8-a392-12200f55502e-kube-api-access-qzzzx" (OuterVolumeSpecName: "kube-api-access-qzzzx") pod "92b9ea7b-01b1-48f8-a392-12200f55502e" (UID: "92b9ea7b-01b1-48f8-a392-12200f55502e"). InnerVolumeSpecName "kube-api-access-qzzzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:16:41.729352 master-0 kubenswrapper[7776]: I0219 03:16:41.728303 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7a5635-30b4-4076-babb-db1abd26da88-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ac7a5635-30b4-4076-babb-db1abd26da88" (UID: "ac7a5635-30b4-4076-babb-db1abd26da88"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:16:41.729705 master-0 kubenswrapper[7776]: I0219 03:16:41.729680 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac7a5635-30b4-4076-babb-db1abd26da88-kube-api-access-pz4j7" (OuterVolumeSpecName: "kube-api-access-pz4j7") pod "ac7a5635-30b4-4076-babb-db1abd26da88" (UID: "ac7a5635-30b4-4076-babb-db1abd26da88"). InnerVolumeSpecName "kube-api-access-pz4j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:16:41.763577 master-0 kubenswrapper[7776]: I0219 03:16:41.763513 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:41.763577 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:41.763577 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:41.763577 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:41.763835 master-0 kubenswrapper[7776]: I0219 03:16:41.763595 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:41.826762 master-0 kubenswrapper[7776]: I0219 03:16:41.826654 7776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b9ea7b-01b1-48f8-a392-12200f55502e-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:16:41.826762 master-0 kubenswrapper[7776]: I0219 03:16:41.826717 7776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92b9ea7b-01b1-48f8-a392-12200f55502e-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:16:41.826762 master-0 kubenswrapper[7776]: I0219 03:16:41.826729 7776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac7a5635-30b4-4076-babb-db1abd26da88-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:16:41.826762 master-0 kubenswrapper[7776]: I0219 03:16:41.826739 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz4j7\" (UniqueName: \"kubernetes.io/projected/ac7a5635-30b4-4076-babb-db1abd26da88-kube-api-access-pz4j7\") on node \"master-0\" DevicePath \"\"" Feb 19 03:16:41.826762 master-0 kubenswrapper[7776]: I0219 03:16:41.826748 7776 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzzzx\" (UniqueName: \"kubernetes.io/projected/92b9ea7b-01b1-48f8-a392-12200f55502e-kube-api-access-qzzzx\") on node \"master-0\" DevicePath \"\"" Feb 19 03:16:41.826762 master-0 kubenswrapper[7776]: I0219 03:16:41.826770 7776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac7a5635-30b4-4076-babb-db1abd26da88-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:16:42.527909 master-0 kubenswrapper[7776]: I0219 03:16:42.527712 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" event={"ID":"92b9ea7b-01b1-48f8-a392-12200f55502e","Type":"ContainerDied","Data":"da9326e28b041d7dc63f371ad8d216b0ae776b310880756403a8af27c882da99"} Feb 19 03:16:42.527909 master-0 kubenswrapper[7776]: I0219 03:16:42.527770 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j" Feb 19 03:16:42.527909 master-0 kubenswrapper[7776]: I0219 03:16:42.527778 7776 scope.go:117] "RemoveContainer" containerID="476fc086e4c133ead58fc958b5e8c61b6a7e9e1ccc96dcde9038878f8f7dbc2a" Feb 19 03:16:42.530908 master-0 kubenswrapper[7776]: I0219 03:16:42.530239 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" event={"ID":"ac7a5635-30b4-4076-babb-db1abd26da88","Type":"ContainerDied","Data":"9aab81f8fffe16923e36dcbe72b0019b49222f1dac9a784d86a86eaf9cc57c9d"} Feb 19 03:16:42.530908 master-0 kubenswrapper[7776]: I0219 03:16:42.530483 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp" Feb 19 03:16:42.544839 master-0 kubenswrapper[7776]: I0219 03:16:42.542985 7776 scope.go:117] "RemoveContainer" containerID="30c30ae58bac1ba564b708437a7988f71fa6bcce49d387d7985db2d5834df1d5" Feb 19 03:16:42.556568 master-0 kubenswrapper[7776]: I0219 03:16:42.556057 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j"] Feb 19 03:16:42.562339 master-0 kubenswrapper[7776]: I0219 03:16:42.562283 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j"] Feb 19 03:16:42.575834 master-0 kubenswrapper[7776]: I0219 03:16:42.575704 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp"] Feb 19 03:16:42.586301 master-0 kubenswrapper[7776]: I0219 03:16:42.586231 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp"] Feb 19 03:16:42.738733 master-0 kubenswrapper[7776]: I0219 03:16:42.738649 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:42.738964 master-0 kubenswrapper[7776]: E0219 03:16:42.738861 7776 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 19 03:16:42.738964 master-0 kubenswrapper[7776]: E0219 03:16:42.738936 7776 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert podName:a676c43c-4e0a-4826-86c1-288260611b09 nodeName:}" failed. No retries permitted until 2026-02-19 03:16:58.738917573 +0000 UTC m=+725.078602101 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert") pod "ingress-canary-bbwkg" (UID: "a676c43c-4e0a-4826-86c1-288260611b09") : secret "canary-serving-cert" not found Feb 19 03:16:42.764497 master-0 kubenswrapper[7776]: I0219 03:16:42.764434 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:42.764497 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:42.764497 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:42.764497 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:42.764851 master-0 kubenswrapper[7776]: I0219 03:16:42.764515 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:42.776903 master-0 kubenswrapper[7776]: I0219 03:16:42.776840 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk"] Feb 19 03:16:42.777122 master-0 kubenswrapper[7776]: E0219 03:16:42.777075 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b9ea7b-01b1-48f8-a392-12200f55502e" containerName="controller-manager" Feb 19 03:16:42.777122 master-0 kubenswrapper[7776]: I0219 03:16:42.777087 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b9ea7b-01b1-48f8-a392-12200f55502e" containerName="controller-manager" Feb 19 03:16:42.777122 master-0 kubenswrapper[7776]: E0219 03:16:42.777097 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7a5635-30b4-4076-babb-db1abd26da88" containerName="route-controller-manager" Feb 19 03:16:42.777122 master-0 kubenswrapper[7776]: I0219 03:16:42.777104 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7a5635-30b4-4076-babb-db1abd26da88" containerName="route-controller-manager" Feb 19 03:16:42.777122 master-0 kubenswrapper[7776]: E0219 03:16:42.777115 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7a5635-30b4-4076-babb-db1abd26da88" containerName="route-controller-manager" Feb 19 03:16:42.777122 master-0 kubenswrapper[7776]: I0219 03:16:42.777124 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7a5635-30b4-4076-babb-db1abd26da88" containerName="route-controller-manager" Feb 19 03:16:42.778951 master-0 kubenswrapper[7776]: I0219 03:16:42.777227 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac7a5635-30b4-4076-babb-db1abd26da88" containerName="route-controller-manager" Feb 19 03:16:42.778951 master-0 kubenswrapper[7776]: I0219 03:16:42.777250 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b9ea7b-01b1-48f8-a392-12200f55502e" containerName="controller-manager" Feb 19 03:16:42.778951 master-0 kubenswrapper[7776]: I0219 03:16:42.778098 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:42.784462 master-0 kubenswrapper[7776]: I0219 03:16:42.784203 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 03:16:42.785296 master-0 kubenswrapper[7776]: I0219 03:16:42.784778 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-mfb9m" Feb 19 03:16:42.785296 master-0 kubenswrapper[7776]: I0219 03:16:42.785013 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 03:16:42.785296 master-0 kubenswrapper[7776]: I0219 03:16:42.785248 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 03:16:42.787154 master-0 kubenswrapper[7776]: I0219 03:16:42.785583 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 03:16:42.787154 master-0 kubenswrapper[7776]: I0219 03:16:42.785729 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx"] Feb 19 03:16:42.787154 master-0 kubenswrapper[7776]: I0219 03:16:42.785941 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 03:16:42.787154 master-0 kubenswrapper[7776]: I0219 03:16:42.786070 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac7a5635-30b4-4076-babb-db1abd26da88" containerName="route-controller-manager" Feb 19 03:16:42.787154 master-0 kubenswrapper[7776]: I0219 03:16:42.786471 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:42.789351 master-0 kubenswrapper[7776]: I0219 03:16:42.789071 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 03:16:42.789351 master-0 kubenswrapper[7776]: I0219 03:16:42.789101 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 03:16:42.789351 master-0 kubenswrapper[7776]: I0219 03:16:42.789170 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-26rv4" Feb 19 03:16:42.793665 master-0 kubenswrapper[7776]: I0219 03:16:42.793613 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 03:16:42.793665 master-0 kubenswrapper[7776]: I0219 03:16:42.793619 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 03:16:42.794236 master-0 kubenswrapper[7776]: I0219 03:16:42.794188 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 03:16:42.798275 master-0 kubenswrapper[7776]: I0219 03:16:42.797961 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk"] Feb 19 03:16:42.799188 master-0 kubenswrapper[7776]: I0219 03:16:42.799037 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 03:16:42.807034 master-0 kubenswrapper[7776]: I0219 03:16:42.806959 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx"] Feb 19 03:16:42.941563 master-0 kubenswrapper[7776]: I0219 03:16:42.941469 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:42.942654 master-0 kubenswrapper[7776]: I0219 03:16:42.941583 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnq2j\" (UniqueName: \"kubernetes.io/projected/06898300-c6e2-4d64-9ebf-d20f4338cccc-kube-api-access-rnq2j\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:42.942654 master-0 kubenswrapper[7776]: I0219 03:16:42.941710 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:42.942654 master-0 kubenswrapper[7776]: I0219 03:16:42.941749 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlhnq\" (UniqueName: 
\"kubernetes.io/projected/6acd115e-71e1-4a50-8892-fc6ea2927fec-kube-api-access-dlhnq\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:42.942654 master-0 kubenswrapper[7776]: I0219 03:16:42.941815 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:42.942654 master-0 kubenswrapper[7776]: I0219 03:16:42.942060 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:42.942654 master-0 kubenswrapper[7776]: I0219 03:16:42.942454 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:42.942654 master-0 kubenswrapper[7776]: I0219 03:16:42.942583 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:42.943416 master-0 kubenswrapper[7776]: I0219 03:16:42.942801 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.044099 master-0 kubenswrapper[7776]: I0219 03:16:43.043968 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.044099 master-0 kubenswrapper[7776]: I0219 03:16:43.044051 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.044320 master-0 kubenswrapper[7776]: I0219 03:16:43.044142 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.044320 master-0 kubenswrapper[7776]: I0219 03:16:43.044167 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnq2j\" (UniqueName: \"kubernetes.io/projected/06898300-c6e2-4d64-9ebf-d20f4338cccc-kube-api-access-rnq2j\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.044660 master-0 kubenswrapper[7776]: I0219 03:16:43.044618 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:43.045077 master-0 kubenswrapper[7776]: I0219 03:16:43.044681 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlhnq\" (UniqueName: \"kubernetes.io/projected/6acd115e-71e1-4a50-8892-fc6ea2927fec-kube-api-access-dlhnq\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:43.045406 master-0 kubenswrapper[7776]: I0219 03:16:43.045001 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.045490 master-0 kubenswrapper[7776]: I0219 03:16:43.045444 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:43.045848 master-0 kubenswrapper[7776]: I0219 03:16:43.045803 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.046346 master-0 kubenswrapper[7776]: I0219 03:16:43.046288 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.046465 master-0 kubenswrapper[7776]: I0219 03:16:43.046341 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles\") 
pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.046559 master-0 kubenswrapper[7776]: I0219 03:16:43.046456 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:43.046625 master-0 kubenswrapper[7776]: I0219 03:16:43.046609 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:43.047285 master-0 kubenswrapper[7776]: I0219 03:16:43.047198 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.048136 master-0 kubenswrapper[7776]: I0219 03:16:43.048073 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:43.050538 master-0 kubenswrapper[7776]: I0219 03:16:43.050498 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:43.067448 master-0 kubenswrapper[7776]: I0219 03:16:43.067375 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnq2j\" (UniqueName: \"kubernetes.io/projected/06898300-c6e2-4d64-9ebf-d20f4338cccc-kube-api-access-rnq2j\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.068021 master-0 kubenswrapper[7776]: I0219 03:16:43.067982 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlhnq\" (UniqueName: \"kubernetes.io/projected/6acd115e-71e1-4a50-8892-fc6ea2927fec-kube-api-access-dlhnq\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:43.116502 master-0 kubenswrapper[7776]: I0219 03:16:43.116449 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:43.142739 master-0 kubenswrapper[7776]: I0219 03:16:43.142669 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:43.502359 master-0 kubenswrapper[7776]: I0219 03:16:43.502291 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk"] Feb 19 03:16:43.506583 master-0 kubenswrapper[7776]: W0219 03:16:43.506164 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6acd115e_71e1_4a50_8892_fc6ea2927fec.slice/crio-75ebc0148d076f2cc0fe06e466687642989770890443a44d9864ba7cf21ec2cd WatchSource:0}: Error finding container 75ebc0148d076f2cc0fe06e466687642989770890443a44d9864ba7cf21ec2cd: Status 404 returned error can't find the container with id 75ebc0148d076f2cc0fe06e466687642989770890443a44d9864ba7cf21ec2cd Feb 19 03:16:43.543015 master-0 kubenswrapper[7776]: I0219 03:16:43.542928 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" event={"ID":"6acd115e-71e1-4a50-8892-fc6ea2927fec","Type":"ContainerStarted","Data":"75ebc0148d076f2cc0fe06e466687642989770890443a44d9864ba7cf21ec2cd"} Feb 19 03:16:43.559198 master-0 kubenswrapper[7776]: I0219 03:16:43.558974 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx"] Feb 19 03:16:43.572095 master-0 kubenswrapper[7776]: W0219 03:16:43.572048 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06898300_c6e2_4d64_9ebf_d20f4338cccc.slice/crio-9f34b77802d18424b8b09571a545a52e9fcc1be93f02c10a74325b38bef31cc8 WatchSource:0}: Error finding container 9f34b77802d18424b8b09571a545a52e9fcc1be93f02c10a74325b38bef31cc8: Status 404 returned error can't find the container with id 9f34b77802d18424b8b09571a545a52e9fcc1be93f02c10a74325b38bef31cc8 Feb 19 03:16:43.762730 master-0 kubenswrapper[7776]: I0219 03:16:43.762599 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:43.762730 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:43.762730 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:43.762730 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:43.762730 master-0 kubenswrapper[7776]: I0219 03:16:43.762663 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:43.850385 master-0 kubenswrapper[7776]: I0219 03:16:43.850313 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b9ea7b-01b1-48f8-a392-12200f55502e" path="/var/lib/kubelet/pods/92b9ea7b-01b1-48f8-a392-12200f55502e/volumes" Feb 19 03:16:43.851288 master-0 kubenswrapper[7776]: I0219 03:16:43.851221 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac7a5635-30b4-4076-babb-db1abd26da88" 
path="/var/lib/kubelet/pods/ac7a5635-30b4-4076-babb-db1abd26da88/volumes" Feb 19 03:16:44.553158 master-0 kubenswrapper[7776]: I0219 03:16:44.553011 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" event={"ID":"06898300-c6e2-4d64-9ebf-d20f4338cccc","Type":"ContainerStarted","Data":"8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668"} Feb 19 03:16:44.553158 master-0 kubenswrapper[7776]: I0219 03:16:44.553073 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" event={"ID":"06898300-c6e2-4d64-9ebf-d20f4338cccc","Type":"ContainerStarted","Data":"9f34b77802d18424b8b09571a545a52e9fcc1be93f02c10a74325b38bef31cc8"} Feb 19 03:16:44.553663 master-0 kubenswrapper[7776]: I0219 03:16:44.553377 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:44.554716 master-0 kubenswrapper[7776]: I0219 03:16:44.554672 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" event={"ID":"6acd115e-71e1-4a50-8892-fc6ea2927fec","Type":"ContainerStarted","Data":"a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76"} Feb 19 03:16:44.554878 master-0 kubenswrapper[7776]: I0219 03:16:44.554850 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:44.557921 master-0 kubenswrapper[7776]: I0219 03:16:44.557880 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:16:44.558878 master-0 kubenswrapper[7776]: I0219 03:16:44.558847 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:16:44.577045 master-0 kubenswrapper[7776]: I0219 03:16:44.576946 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" podStartSLOduration=3.576927819 podStartE2EDuration="3.576927819s" podCreationTimestamp="2026-02-19 03:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:16:44.574833169 +0000 UTC m=+710.914517687" watchObservedRunningTime="2026-02-19 03:16:44.576927819 +0000 UTC m=+710.916612357" Feb 19 03:16:44.610121 master-0 kubenswrapper[7776]: I0219 03:16:44.610044 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" podStartSLOduration=3.610023875 podStartE2EDuration="3.610023875s" podCreationTimestamp="2026-02-19 03:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:16:44.609003216 +0000 UTC m=+710.948687754" watchObservedRunningTime="2026-02-19 03:16:44.610023875 +0000 UTC m=+710.949708403" Feb 19 03:16:44.763689 master-0 kubenswrapper[7776]: I0219 03:16:44.763635 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:44.763689 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:44.763689 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:44.763689 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:44.763959 master-0 kubenswrapper[7776]: I0219 03:16:44.763695 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:45.763849 master-0 kubenswrapper[7776]: I0219 03:16:45.763794 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:45.763849 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:45.763849 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:45.763849 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:45.764415 master-0 kubenswrapper[7776]: I0219 03:16:45.763864 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:45.843147 master-0 kubenswrapper[7776]: I0219 03:16:45.843070 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:16:45.843435 master-0 kubenswrapper[7776]: E0219 03:16:45.843398 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:16:46.763932 master-0 kubenswrapper[7776]: I0219 03:16:46.763866 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:46.763932 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:46.763932 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:46.763932 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:46.764672 master-0 kubenswrapper[7776]: I0219 03:16:46.764482 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:47.711917 master-0 kubenswrapper[7776]: E0219 03:16:47.711864 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" podUID="e2e81865-21fa-4e35-a870-738c13ac5b70" Feb 19 
03:16:47.770313 master-0 kubenswrapper[7776]: I0219 03:16:47.770135 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:47.770313 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:47.770313 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:47.770313 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:47.770313 master-0 kubenswrapper[7776]: I0219 03:16:47.770195 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:47.927767 master-0 kubenswrapper[7776]: I0219 03:16:47.927706 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9bq57"] Feb 19 03:16:47.928528 master-0 kubenswrapper[7776]: I0219 03:16:47.928491 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:47.930499 master-0 kubenswrapper[7776]: I0219 03:16:47.930467 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 19 03:16:47.930938 master-0 kubenswrapper[7776]: I0219 03:16:47.930916 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2bdg8" Feb 19 03:16:48.025225 master-0 kubenswrapper[7776]: I0219 03:16:48.025088 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5cdccda9-48ed-4823-a717-99dd1716383a-ready\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.025225 master-0 kubenswrapper[7776]: I0219 03:16:48.025190 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5cdccda9-48ed-4823-a717-99dd1716383a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.025487 master-0 kubenswrapper[7776]: I0219 03:16:48.025317 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5cdccda9-48ed-4823-a717-99dd1716383a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.025487 master-0 kubenswrapper[7776]: I0219 03:16:48.025378 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkddk\" (UniqueName: \"kubernetes.io/projected/5cdccda9-48ed-4823-a717-99dd1716383a-kube-api-access-fkddk\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.126581 master-0 kubenswrapper[7776]: I0219 03:16:48.126512 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/5cdccda9-48ed-4823-a717-99dd1716383a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.126803 master-0 kubenswrapper[7776]: I0219 03:16:48.126595 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkddk\" (UniqueName: \"kubernetes.io/projected/5cdccda9-48ed-4823-a717-99dd1716383a-kube-api-access-fkddk\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.126803 master-0 kubenswrapper[7776]: I0219 03:16:48.126722 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5cdccda9-48ed-4823-a717-99dd1716383a-ready\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.126803 master-0 kubenswrapper[7776]: I0219 03:16:48.126732 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5cdccda9-48ed-4823-a717-99dd1716383a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.126803 master-0 kubenswrapper[7776]: I0219 03:16:48.126753 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5cdccda9-48ed-4823-a717-99dd1716383a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.127193 master-0 kubenswrapper[7776]: I0219 03:16:48.127158 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5cdccda9-48ed-4823-a717-99dd1716383a-ready\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.127665 master-0 kubenswrapper[7776]: I0219 03:16:48.127625 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5cdccda9-48ed-4823-a717-99dd1716383a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.146783 master-0 kubenswrapper[7776]: I0219 03:16:48.146731 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkddk\" (UniqueName: \"kubernetes.io/projected/5cdccda9-48ed-4823-a717-99dd1716383a-kube-api-access-fkddk\") pod \"cni-sysctl-allowlist-ds-9bq57\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.246845 master-0 kubenswrapper[7776]: I0219 03:16:48.246792 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.263481 master-0 kubenswrapper[7776]: W0219 03:16:48.263417 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cdccda9_48ed_4823_a717_99dd1716383a.slice/crio-1d9b2d562ca318ca7aa1397a7e55c515f0bc118aea8c40c8a869a1845dea2184 WatchSource:0}: Error finding container 1d9b2d562ca318ca7aa1397a7e55c515f0bc118aea8c40c8a869a1845dea2184: Status 404 returned error can't find the container with id 1d9b2d562ca318ca7aa1397a7e55c515f0bc118aea8c40c8a869a1845dea2184 Feb 19 03:16:48.575470 master-0 kubenswrapper[7776]: I0219 03:16:48.575359 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:16:48.575798 master-0 kubenswrapper[7776]: I0219 03:16:48.575350 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" event={"ID":"5cdccda9-48ed-4823-a717-99dd1716383a","Type":"ContainerStarted","Data":"bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8"} Feb 19 03:16:48.575864 master-0 kubenswrapper[7776]: I0219 03:16:48.575805 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" event={"ID":"5cdccda9-48ed-4823-a717-99dd1716383a","Type":"ContainerStarted","Data":"1d9b2d562ca318ca7aa1397a7e55c515f0bc118aea8c40c8a869a1845dea2184"} Feb 19 03:16:48.576049 master-0 kubenswrapper[7776]: I0219 03:16:48.576002 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:48.592951 master-0 kubenswrapper[7776]: I0219 03:16:48.592874 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" podStartSLOduration=1.59285459 podStartE2EDuration="1.59285459s" podCreationTimestamp="2026-02-19 03:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:16:48.589853824 +0000 UTC m=+714.929538352" watchObservedRunningTime="2026-02-19 03:16:48.59285459 +0000 UTC m=+714.932539118" Feb 19 03:16:48.764178 master-0 kubenswrapper[7776]: I0219 03:16:48.764106 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:48.764178 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:48.764178 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:48.764178 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:48.764577 master-0 kubenswrapper[7776]: I0219 03:16:48.764177 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:49.598830 master-0 kubenswrapper[7776]: I0219 03:16:49.598777 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:16:49.663662 master-0 kubenswrapper[7776]: I0219 03:16:49.663594 7776 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"] Feb 19 03:16:49.664715 master-0 kubenswrapper[7776]: I0219 03:16:49.664679 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:16:49.668380 master-0 kubenswrapper[7776]: I0219 03:16:49.668338 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dcb4l" Feb 19 03:16:49.669593 master-0 kubenswrapper[7776]: I0219 03:16:49.669568 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 03:16:49.681793 master-0 kubenswrapper[7776]: I0219 03:16:49.681736 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"] Feb 19 03:16:49.749401 master-0 kubenswrapper[7776]: I0219 03:16:49.749349 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:16:49.749762 master-0 kubenswrapper[7776]: I0219 03:16:49.749739 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:16:49.749922 master-0 kubenswrapper[7776]: I0219 03:16:49.749903 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:16:49.762898 master-0 kubenswrapper[7776]: I0219 03:16:49.762847 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:49.762898 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:49.762898 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:49.762898 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:49.763248 master-0 kubenswrapper[7776]: I0219 03:16:49.762923 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:49.852238 master-0 kubenswrapper[7776]: I0219 03:16:49.852082 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:16:49.852465 
master-0 kubenswrapper[7776]: I0219 03:16:49.852304 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:16:49.852465 master-0 kubenswrapper[7776]: I0219 03:16:49.852428 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:16:49.852610 master-0 kubenswrapper[7776]: I0219 03:16:49.852535 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:16:49.852771 master-0 kubenswrapper[7776]: I0219 03:16:49.852686 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:16:49.870932 master-0 kubenswrapper[7776]: I0219 03:16:49.870616 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:16:49.928174 master-0 kubenswrapper[7776]: I0219 03:16:49.928091 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9bq57"] Feb 19 03:16:49.992620 master-0 kubenswrapper[7776]: I0219 03:16:49.992533 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:16:50.411460 master-0 kubenswrapper[7776]: I0219 03:16:50.411411 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"] Feb 19 03:16:50.416008 master-0 kubenswrapper[7776]: W0219 03:16:50.415956 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf2d9bbbb_77bd_4978_9f37_d3c54b780fbf.slice/crio-43a446ea9c6c338c0be1b08a79588f504347b99fd5d06b7e02469e7d9756ac6f WatchSource:0}: Error finding container 43a446ea9c6c338c0be1b08a79588f504347b99fd5d06b7e02469e7d9756ac6f: Status 404 returned error can't find the container with id 43a446ea9c6c338c0be1b08a79588f504347b99fd5d06b7e02469e7d9756ac6f Feb 19 03:16:50.596596 master-0 kubenswrapper[7776]: I0219 03:16:50.595529 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf","Type":"ContainerStarted","Data":"43a446ea9c6c338c0be1b08a79588f504347b99fd5d06b7e02469e7d9756ac6f"} Feb 19 03:16:50.763487 master-0 kubenswrapper[7776]: I0219 03:16:50.763396 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:50.763487 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:50.763487 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:50.763487 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:50.764028 master-0 kubenswrapper[7776]: I0219 03:16:50.763489 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:51.482323 master-0 kubenswrapper[7776]: I0219 03:16:51.482232 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:16:51.485606 master-0 kubenswrapper[7776]: I0219 03:16:51.485563 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:16:51.578225 master-0 kubenswrapper[7776]: I0219 03:16:51.578156 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-6bg2z" Feb 19 03:16:51.586344 master-0 kubenswrapper[7776]: I0219 03:16:51.586285 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:16:51.603533 master-0 kubenswrapper[7776]: I0219 03:16:51.603476 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf","Type":"ContainerStarted","Data":"13f1d80c6e6d45699a9dea951ab1e9a8aa64be91ab5359ccb9eae52f989fd916"} Feb 19 03:16:51.603711 master-0 kubenswrapper[7776]: I0219 03:16:51.603639 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" podUID="5cdccda9-48ed-4823-a717-99dd1716383a" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" gracePeriod=30 Feb 19 03:16:51.639312 master-0 kubenswrapper[7776]: I0219 03:16:51.638914 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" podStartSLOduration=2.638898433 podStartE2EDuration="2.638898433s" podCreationTimestamp="2026-02-19 03:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:16:51.637105782 +0000 UTC m=+717.976790320" watchObservedRunningTime="2026-02-19 03:16:51.638898433 +0000 UTC m=+717.978582951" Feb 19 03:16:51.791581 master-0 kubenswrapper[7776]: I0219 03:16:51.784053 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:51.791581 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:51.791581 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:51.791581 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:51.791581 master-0 kubenswrapper[7776]: I0219 03:16:51.784154 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:51.973546 master-0 kubenswrapper[7776]: I0219 03:16:51.973507 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-tkbxr"] Feb 19 03:16:51.975523 master-0 kubenswrapper[7776]: I0219 03:16:51.975493 7776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 03:16:52.612883 master-0 kubenswrapper[7776]: I0219 03:16:52.612794 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" event={"ID":"e2e81865-21fa-4e35-a870-738c13ac5b70","Type":"ContainerStarted","Data":"7113d80392d29ba3714ca17e946cc57862288af6721d6bbfe7532c4452680bbe"} Feb 19 03:16:52.764237 master-0 kubenswrapper[7776]: I0219 03:16:52.764170 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:52.764237 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:52.764237 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 
03:16:52.764237 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:52.764919 master-0 kubenswrapper[7776]: I0219 03:16:52.764865 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:53.621140 master-0 kubenswrapper[7776]: I0219 03:16:53.621062 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" event={"ID":"e2e81865-21fa-4e35-a870-738c13ac5b70","Type":"ContainerStarted","Data":"8d976459ec4ea42f9768390cfa7af2c61949cd7da21f839968476bb8770520b8"} Feb 19 03:16:53.765121 master-0 kubenswrapper[7776]: I0219 03:16:53.765059 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:53.765121 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:53.765121 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:53.765121 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:53.765449 master-0 kubenswrapper[7776]: I0219 03:16:53.765144 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:54.632670 master-0 kubenswrapper[7776]: I0219 03:16:54.632616 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" event={"ID":"e2e81865-21fa-4e35-a870-738c13ac5b70","Type":"ContainerStarted","Data":"9ac466c1f5efca72b337ad9908b3b37e23152814065474e735a72c3d9c8c35c6"} Feb 19 03:16:54.655671 master-0 kubenswrapper[7776]: I0219 03:16:54.655570 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" podStartSLOduration=252.201698878 podStartE2EDuration="4m13.655546846s" podCreationTimestamp="2026-02-19 03:12:41 +0000 UTC" firstStartedPulling="2026-02-19 03:16:51.975440418 +0000 UTC m=+718.315124926" lastFinishedPulling="2026-02-19 03:16:53.429288376 +0000 UTC m=+719.768972894" observedRunningTime="2026-02-19 03:16:54.654170136 +0000 UTC m=+720.993854684" watchObservedRunningTime="2026-02-19 03:16:54.655546846 +0000 UTC m=+720.995231374" Feb 19 03:16:54.739085 master-0 kubenswrapper[7776]: I0219 03:16:54.739037 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:16:54.741981 master-0 kubenswrapper[7776]: I0219 03:16:54.741935 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:16:54.764186 master-0 
kubenswrapper[7776]: I0219 03:16:54.764135 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:54.764186 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:54.764186 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:54.764186 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:54.764340 master-0 kubenswrapper[7776]: I0219 03:16:54.764200 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:54.945202 master-0 kubenswrapper[7776]: I0219 03:16:54.945080 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-7wq8f" Feb 19 03:16:54.953742 master-0 kubenswrapper[7776]: I0219 03:16:54.953654 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:16:55.642008 master-0 kubenswrapper[7776]: I0219 03:16:55.641952 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" event={"ID":"92804daf-1fd0-4008-afff-4f9bc362990b","Type":"ContainerStarted","Data":"71f11883f7e9702227e5d3c496609c5bed8a84a13f75eaf3076fb1f33e489052"} Feb 19 03:16:55.642714 master-0 kubenswrapper[7776]: I0219 03:16:55.642687 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" event={"ID":"92804daf-1fd0-4008-afff-4f9bc362990b","Type":"ContainerStarted","Data":"4a9aeacf90564eae1348bcdc7f41abed1c44fe0cbc7faf0930e743893a5e4611"} Feb 19 03:16:55.763478 master-0 kubenswrapper[7776]: I0219 03:16:55.763422 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:55.763478 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:55.763478 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:55.763478 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:55.763478 master-0 kubenswrapper[7776]: I0219 03:16:55.763497 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:56.741163 master-0 kubenswrapper[7776]: I0219 03:16:56.741102 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj"] Feb 19 03:16:56.742200 master-0 kubenswrapper[7776]: I0219 03:16:56.742170 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.745827 master-0 kubenswrapper[7776]: I0219 03:16:56.745746 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 19 03:16:56.745974 master-0 kubenswrapper[7776]: I0219 03:16:56.745920 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-b5db9" Feb 19 03:16:56.746103 master-0 kubenswrapper[7776]: I0219 03:16:56.746075 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 19 03:16:56.756904 master-0 kubenswrapper[7776]: I0219 03:16:56.756863 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-m7mdb"] Feb 19 03:16:56.759284 master-0 kubenswrapper[7776]: I0219 03:16:56.758575 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.760641 master-0 kubenswrapper[7776]: I0219 03:16:56.760613 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-5w4jw" Feb 19 03:16:56.761021 master-0 kubenswrapper[7776]: I0219 03:16:56.761003 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 19 03:16:56.761280 master-0 kubenswrapper[7776]: I0219 03:16:56.761248 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 19 03:16:56.761540 master-0 kubenswrapper[7776]: I0219 03:16:56.761523 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 19 03:16:56.773705 master-0 kubenswrapper[7776]: I0219 03:16:56.768479 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:56.773705 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:56.773705 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:56.773705 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:56.773705 master-0 kubenswrapper[7776]: I0219 03:16:56.768584 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:56.773705 master-0 kubenswrapper[7776]: I0219 03:16:56.770610 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj"] Feb 19 03:16:56.773705 master-0 kubenswrapper[7776]: I0219 03:16:56.773580 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-m7mdb"] Feb 19 03:16:56.776421 master-0 kubenswrapper[7776]: I0219 03:16:56.776164 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/43560ec3-3526-40e1-aeb7-e3137a99171d-metrics-client-ca\") pod 
\"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.776421 master-0 kubenswrapper[7776]: I0219 03:16:56.776289 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.776421 master-0 kubenswrapper[7776]: I0219 03:16:56.776333 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.776421 master-0 kubenswrapper[7776]: I0219 03:16:56.776361 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.776421 master-0 kubenswrapper[7776]: I0219 03:16:56.776384 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.776608 master-0 kubenswrapper[7776]: I0219 03:16:56.776540 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4z8t\" (UniqueName: \"kubernetes.io/projected/43560ec3-3526-40e1-aeb7-e3137a99171d-kube-api-access-j4z8t\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.776608 master-0 kubenswrapper[7776]: I0219 03:16:56.776593 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/ec677f3d-06c4-4cf4-9f24-69894b9a9118-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.776673 master-0 kubenswrapper[7776]: I0219 03:16:56.776638 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " 
pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.777047 master-0 kubenswrapper[7776]: I0219 03:16:56.776715 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh4lz\" (UniqueName: \"kubernetes.io/projected/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-api-access-vh4lz\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.777047 master-0 kubenswrapper[7776]: I0219 03:16:56.776765 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.846743 master-0 kubenswrapper[7776]: I0219 03:16:56.846068 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:16:56.846743 master-0 kubenswrapper[7776]: E0219 03:16:56.846295 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:16:56.860311 master-0 kubenswrapper[7776]: I0219 03:16:56.860228 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-8g26m"] Feb 19 03:16:56.861657 master-0 kubenswrapper[7776]: I0219 03:16:56.861625 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:56.864566 master-0 kubenswrapper[7776]: I0219 03:16:56.863466 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 19 03:16:56.864566 master-0 kubenswrapper[7776]: I0219 03:16:56.863755 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-jmtfb" Feb 19 03:16:56.864566 master-0 kubenswrapper[7776]: I0219 03:16:56.864072 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 19 03:16:56.878070 master-0 kubenswrapper[7776]: I0219 03:16:56.878008 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh4lz\" (UniqueName: \"kubernetes.io/projected/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-api-access-vh4lz\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.879866 master-0 kubenswrapper[7776]: I0219 03:16:56.879840 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.880020 master-0 kubenswrapper[7776]: I0219 03:16:56.879999 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/43560ec3-3526-40e1-aeb7-e3137a99171d-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.882031 master-0 kubenswrapper[7776]: I0219 03:16:56.881983 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/43560ec3-3526-40e1-aeb7-e3137a99171d-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.882129 master-0 kubenswrapper[7776]: I0219 03:16:56.881242 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.882417 master-0 kubenswrapper[7776]: I0219 03:16:56.882356 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.883074 master-0 kubenswrapper[7776]: I0219 03:16:56.883033 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.887051 master-0 kubenswrapper[7776]: I0219 03:16:56.883342 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.887051 master-0 kubenswrapper[7776]: I0219 03:16:56.883660 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.887051 master-0 kubenswrapper[7776]: I0219 03:16:56.883784 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4z8t\" (UniqueName: \"kubernetes.io/projected/43560ec3-3526-40e1-aeb7-e3137a99171d-kube-api-access-j4z8t\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.887051 master-0 kubenswrapper[7776]: I0219 03:16:56.883874 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/ec677f3d-06c4-4cf4-9f24-69894b9a9118-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.887051 master-0 kubenswrapper[7776]: I0219 03:16:56.883961 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.887051 master-0 kubenswrapper[7776]: I0219 03:16:56.884246 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.887051 master-0 kubenswrapper[7776]: I0219 03:16:56.886873 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/ec677f3d-06c4-4cf4-9f24-69894b9a9118-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.887470 master-0 kubenswrapper[7776]: I0219 03:16:56.887111 7776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.890031 master-0 kubenswrapper[7776]: I0219 03:16:56.889207 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.890031 master-0 kubenswrapper[7776]: I0219 03:16:56.889982 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.895394 master-0 kubenswrapper[7776]: I0219 03:16:56.895205 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.902401 master-0 kubenswrapper[7776]: I0219 03:16:56.900766 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh4lz\" (UniqueName: \"kubernetes.io/projected/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-api-access-vh4lz\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:56.906463 master-0 kubenswrapper[7776]: I0219 03:16:56.906432 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4z8t\" (UniqueName: \"kubernetes.io/projected/43560ec3-3526-40e1-aeb7-e3137a99171d-kube-api-access-j4z8t\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:56.986039 master-0 kubenswrapper[7776]: I0219 03:16:56.985979 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-textfile\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:56.986272 master-0 kubenswrapper[7776]: I0219 03:16:56.986049 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-metrics-client-ca\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:56.986272 master-0 kubenswrapper[7776]: I0219 
03:16:56.986107 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-root\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:56.986272 master-0 kubenswrapper[7776]: I0219 03:16:56.986133 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-sys\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:56.986272 master-0 kubenswrapper[7776]: I0219 03:16:56.986158 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzxmv\" (UniqueName: \"kubernetes.io/projected/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-kube-api-access-jzxmv\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:56.986402 master-0 kubenswrapper[7776]: I0219 03:16:56.986319 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-wtmp\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:56.986402 master-0 kubenswrapper[7776]: I0219 03:16:56.986386 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:56.986463 master-0 kubenswrapper[7776]: I0219 03:16:56.986420 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.082121 master-0 kubenswrapper[7776]: I0219 03:16:57.078959 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.088245 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-textfile\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.088484 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-metrics-client-ca\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.088549 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-root\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.089460 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-metrics-client-ca\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.089513 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-sys\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.089538 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzxmv\" (UniqueName: \"kubernetes.io/projected/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-kube-api-access-jzxmv\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.089625 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-wtmp\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.089677 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.089705 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.089828 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-sys\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: E0219 03:16:57.090396 7776 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: E0219 03:16:57.090485 7776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls podName:8ec16b3a-5d5c-46fe-87f0-89f93a2775ed nodeName:}" failed. No retries permitted until 2026-02-19 03:16:57.590462901 +0000 UTC m=+723.930147419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls") pod "node-exporter-8g26m" (UID: "8ec16b3a-5d5c-46fe-87f0-89f93a2775ed") : secret "node-exporter-tls" not found Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.090480 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-textfile\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.090638 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-wtmp\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.091822 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-root\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.096284 master-0 kubenswrapper[7776]: I0219 03:16:57.095277 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.107416 master-0 kubenswrapper[7776]: I0219 03:16:57.100444 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:16:57.113363 master-0 kubenswrapper[7776]: I0219 03:16:57.111327 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzxmv\" (UniqueName: \"kubernetes.io/projected/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-kube-api-access-jzxmv\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.471883 master-0 kubenswrapper[7776]: I0219 03:16:57.471850 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj"] Feb 19 03:16:57.544013 master-0 kubenswrapper[7776]: I0219 03:16:57.543953 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-m7mdb"] Feb 19 03:16:57.595509 master-0 kubenswrapper[7776]: I0219 03:16:57.595454 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.598831 master-0 kubenswrapper[7776]: I0219 03:16:57.598796 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.630604 master-0 kubenswrapper[7776]: I0219 03:16:57.630538 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h"] Feb 19 03:16:57.631549 master-0 kubenswrapper[7776]: I0219 03:16:57.631520 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:16:57.633486 master-0 kubenswrapper[7776]: I0219 03:16:57.633445 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-p55fn" Feb 19 03:16:57.641642 master-0 kubenswrapper[7776]: I0219 03:16:57.641581 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h"] Feb 19 03:16:57.654385 master-0 kubenswrapper[7776]: I0219 03:16:57.654341 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" event={"ID":"ec677f3d-06c4-4cf4-9f24-69894b9a9118","Type":"ContainerStarted","Data":"4a4075ac7bf30cf0807cbb607815178772dc5e91f6a2b4d72d3b7f7d98bacf78"} Feb 19 03:16:57.655820 master-0 kubenswrapper[7776]: I0219 03:16:57.655444 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" event={"ID":"92804daf-1fd0-4008-afff-4f9bc362990b","Type":"ContainerStarted","Data":"75ea874391f33c0fa200e27a6fbad18b4a8573ebe40f901e494bc7cfe2905ed3"} Feb 19 03:16:57.657032 master-0 kubenswrapper[7776]: I0219 03:16:57.656833 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" event={"ID":"43560ec3-3526-40e1-aeb7-e3137a99171d","Type":"ContainerStarted","Data":"443ba370c5c253b8146f1c1da7d0528aed97ce5e409c9560e64322c478438305"} Feb 19 03:16:57.657032 master-0 kubenswrapper[7776]: I0219 03:16:57.656856 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" event={"ID":"43560ec3-3526-40e1-aeb7-e3137a99171d","Type":"ContainerStarted","Data":"48d4606b470a81b62815d5eff7b40ce10241cd1db0d833c19e9920f2538a3f32"} Feb 19 03:16:57.657851 master-0 kubenswrapper[7776]: I0219 03:16:57.657689 7776 generic.go:334] "Generic (PLEG): container finished" podID="5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4" containerID="1d99ca0c8f2a8b57be62e387dd79396f9f9921074e539cfaf44cf000be2aa849" exitCode=0 Feb 19 03:16:57.657851 master-0 kubenswrapper[7776]: I0219 03:16:57.657713 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" event={"ID":"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4","Type":"ContainerDied","Data":"1d99ca0c8f2a8b57be62e387dd79396f9f9921074e539cfaf44cf000be2aa849"} Feb 19 03:16:57.657980 master-0 kubenswrapper[7776]: I0219 03:16:57.657913 7776 scope.go:117] "RemoveContainer" containerID="1d99ca0c8f2a8b57be62e387dd79396f9f9921074e539cfaf44cf000be2aa849" Feb 19 03:16:57.677191 master-0 kubenswrapper[7776]: I0219 03:16:57.677082 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" podStartSLOduration=251.988188508 podStartE2EDuration="4m13.677057567s" podCreationTimestamp="2026-02-19 03:12:44 +0000 UTC" firstStartedPulling="2026-02-19 03:16:55.25884261 +0000 UTC m=+721.598527128" lastFinishedPulling="2026-02-19 03:16:56.947711669 +0000 UTC m=+723.287396187" observedRunningTime="2026-02-19 03:16:57.670420138 +0000 UTC m=+724.010104666" watchObservedRunningTime="2026-02-19 03:16:57.677057567 +0000 UTC m=+724.016742095" Feb 19 03:16:57.696935 master-0 kubenswrapper[7776]: I0219 03:16:57.696876 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk722\" 
(UniqueName: \"kubernetes.io/projected/7be6f9b5-fe27-4df5-b933-63bbb12f680c-kube-api-access-mk722\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:16:57.697438 master-0 kubenswrapper[7776]: I0219 03:16:57.697395 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:16:57.763512 master-0 kubenswrapper[7776]: I0219 03:16:57.763438 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:57.763512 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:57.763512 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:57.763512 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:57.764084 master-0 kubenswrapper[7776]: I0219 03:16:57.763526 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:57.788612 master-0 kubenswrapper[7776]: I0219 03:16:57.786225 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:16:57.801776 master-0 kubenswrapper[7776]: I0219 03:16:57.801694 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk722\" (UniqueName: \"kubernetes.io/projected/7be6f9b5-fe27-4df5-b933-63bbb12f680c-kube-api-access-mk722\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:16:57.801961 master-0 kubenswrapper[7776]: I0219 03:16:57.801899 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:16:57.807013 master-0 kubenswrapper[7776]: I0219 03:16:57.806983 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:16:57.817581 master-0 kubenswrapper[7776]: I0219 03:16:57.817544 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk722\" (UniqueName: \"kubernetes.io/projected/7be6f9b5-fe27-4df5-b933-63bbb12f680c-kube-api-access-mk722\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " 
pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:16:58.001743 master-0 kubenswrapper[7776]: I0219 03:16:58.001684 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:16:58.249539 master-0 kubenswrapper[7776]: E0219 03:16:58.249286 7776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 03:16:58.251036 master-0 kubenswrapper[7776]: E0219 03:16:58.251001 7776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 03:16:58.252743 master-0 kubenswrapper[7776]: E0219 03:16:58.252711 7776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 03:16:58.252797 master-0 kubenswrapper[7776]: E0219 03:16:58.252743 7776 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" podUID="5cdccda9-48ed-4823-a717-99dd1716383a" containerName="kube-multus-additional-cni-plugins" Feb 19 03:16:58.414894 master-0 kubenswrapper[7776]: I0219 03:16:58.414828 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h"] Feb 19 03:16:58.671618 master-0 kubenswrapper[7776]: I0219 03:16:58.671430 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8g26m" event={"ID":"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed","Type":"ContainerStarted","Data":"2e6d01c66ad4ba09830602801e48d0eb21df8043e491a9222312021d0c71dccd"} Feb 19 03:16:58.673626 master-0 kubenswrapper[7776]: I0219 03:16:58.673583 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" event={"ID":"43560ec3-3526-40e1-aeb7-e3137a99171d","Type":"ContainerStarted","Data":"18e8293887c315d043b114addcbe548bb44868b01fd18e0f803f65bf0a00d49f"} Feb 19 03:16:58.675096 master-0 kubenswrapper[7776]: I0219 03:16:58.675060 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" event={"ID":"7be6f9b5-fe27-4df5-b933-63bbb12f680c","Type":"ContainerStarted","Data":"65ad415188511f2d4ecfebaf8ebe20e79da869721e3a60479b8b9d077a3ca314"} Feb 19 03:16:58.675096 master-0 kubenswrapper[7776]: I0219 03:16:58.675088 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" event={"ID":"7be6f9b5-fe27-4df5-b933-63bbb12f680c","Type":"ContainerStarted","Data":"63a61882dcf77787697d30aeb41db64cf3a3a5917a3f53104880927ba62c1424"} Feb 19 03:16:58.677237 master-0 kubenswrapper[7776]: I0219 
03:16:58.677162 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" event={"ID":"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4","Type":"ContainerStarted","Data":"d5c73f2ddafe10fce03270b2aebd85160ac086bdb8c3653dc1795d235063c350"} Feb 19 03:16:58.762566 master-0 kubenswrapper[7776]: I0219 03:16:58.762516 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:58.762566 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:58.762566 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:16:58.762566 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:58.762914 master-0 kubenswrapper[7776]: I0219 03:16:58.762579 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:58.823407 master-0 kubenswrapper[7776]: I0219 03:16:58.819828 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:58.823407 master-0 kubenswrapper[7776]: I0219 03:16:58.822987 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:58.997671 master-0 kubenswrapper[7776]: I0219 03:16:58.997563 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-xq85v" Feb 19 03:16:59.006767 master-0 kubenswrapper[7776]: I0219 03:16:59.006658 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:16:59.688151 master-0 kubenswrapper[7776]: I0219 03:16:59.688065 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" event={"ID":"7be6f9b5-fe27-4df5-b933-63bbb12f680c","Type":"ContainerStarted","Data":"d6dee62d67dcf444f9abcea0875716ca3b612d1228b7b107680e75b104a598a1"} Feb 19 03:16:59.692212 master-0 kubenswrapper[7776]: I0219 03:16:59.691506 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" event={"ID":"ec677f3d-06c4-4cf4-9f24-69894b9a9118","Type":"ContainerStarted","Data":"cbf5c1d99d3cbbf5b9abf381194c2aac882ddf92716241b3a751c662c21bc34b"} Feb 19 03:16:59.699041 master-0 kubenswrapper[7776]: I0219 03:16:59.698975 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8g26m" event={"ID":"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed","Type":"ContainerStarted","Data":"03aa8ad313bda1a2e83a4655bc8e8999ba5eab74fc27bc9c150cae062a8e7328"} Feb 19 03:16:59.708638 master-0 kubenswrapper[7776]: I0219 03:16:59.707913 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" podStartSLOduration=2.707893647 podStartE2EDuration="2.707893647s" podCreationTimestamp="2026-02-19 03:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:16:59.707194847 +0000 UTC m=+726.046879365" watchObservedRunningTime="2026-02-19 03:16:59.707893647 +0000 UTC m=+726.047578175" Feb 19 03:16:59.710107 master-0 kubenswrapper[7776]: I0219 03:16:59.710045 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" event={"ID":"43560ec3-3526-40e1-aeb7-e3137a99171d","Type":"ContainerStarted","Data":"a3fbbf79f62ffec254c86c41fb7d9cc9a8a9dd7b012180199fcc7f8dd2579583"} Feb 19 03:16:59.758322 master-0 kubenswrapper[7776]: I0219 03:16:59.758266 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv"] Feb 19 03:16:59.759130 master-0 kubenswrapper[7776]: I0219 03:16:59.759099 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" podUID="947faa21-7f67-4c7e-abb0-443432f38961" containerName="multus-admission-controller" containerID="cri-o://7779bc9360a96d18f167a4e3e0b6db49a68f34d021af87222f6e2c102a74d376" gracePeriod=30 Feb 19 03:16:59.759408 master-0 kubenswrapper[7776]: I0219 03:16:59.759165 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" podUID="947faa21-7f67-4c7e-abb0-443432f38961" containerName="kube-rbac-proxy" containerID="cri-o://e9143bad584a01b8037b50bf9ae64c2f6ebd210d85d1e8c74f1189744a7dd59c" gracePeriod=30 Feb 19 03:16:59.766484 master-0 kubenswrapper[7776]: I0219 03:16:59.765931 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:16:59.766484 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:16:59.766484 master-0 kubenswrapper[7776]: [+]process-running ok 
Feb 19 03:16:59.766484 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:16:59.766484 master-0 kubenswrapper[7776]: I0219 03:16:59.765990 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:16:59.788468 master-0 kubenswrapper[7776]: I0219 03:16:59.784908 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" podStartSLOduration=2.140218124 podStartE2EDuration="3.784886229s" podCreationTimestamp="2026-02-19 03:16:56 +0000 UTC" firstStartedPulling="2026-02-19 03:16:57.811032879 +0000 UTC m=+724.150717387" lastFinishedPulling="2026-02-19 03:16:59.455700974 +0000 UTC m=+725.795385492" observedRunningTime="2026-02-19 03:16:59.783469478 +0000 UTC m=+726.123154006" watchObservedRunningTime="2026-02-19 03:16:59.784886229 +0000 UTC m=+726.124570747" Feb 19 03:16:59.839755 master-0 kubenswrapper[7776]: I0219 03:16:59.839706 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bbwkg"] Feb 19 03:17:00.720968 master-0 kubenswrapper[7776]: I0219 03:17:00.720847 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" event={"ID":"ec677f3d-06c4-4cf4-9f24-69894b9a9118","Type":"ContainerStarted","Data":"1adba697b868a8d8656ab3e3b7eee2ae25fc68d36ceb7328ad76c65564689dae"} Feb 19 03:17:00.720968 master-0 kubenswrapper[7776]: I0219 03:17:00.720942 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" event={"ID":"ec677f3d-06c4-4cf4-9f24-69894b9a9118","Type":"ContainerStarted","Data":"463dde529bac4be2b695cefff5d9fb148836d959f691183673675b2fe90acb4e"} Feb 19 03:17:00.722575 master-0 kubenswrapper[7776]: I0219 03:17:00.722513 7776 generic.go:334] "Generic (PLEG): container finished" podID="8ec16b3a-5d5c-46fe-87f0-89f93a2775ed" containerID="03aa8ad313bda1a2e83a4655bc8e8999ba5eab74fc27bc9c150cae062a8e7328" exitCode=0 Feb 19 03:17:00.722690 master-0 kubenswrapper[7776]: I0219 03:17:00.722580 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8g26m" event={"ID":"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed","Type":"ContainerDied","Data":"03aa8ad313bda1a2e83a4655bc8e8999ba5eab74fc27bc9c150cae062a8e7328"} Feb 19 03:17:00.724370 master-0 kubenswrapper[7776]: I0219 03:17:00.724168 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bbwkg" event={"ID":"a676c43c-4e0a-4826-86c1-288260611b09","Type":"ContainerStarted","Data":"7e7a779e6971f82c6e40b1101545253d3d2415b6025cb5b0294ddf6967fda76d"} Feb 19 03:17:00.724370 master-0 kubenswrapper[7776]: I0219 03:17:00.724211 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bbwkg" event={"ID":"a676c43c-4e0a-4826-86c1-288260611b09","Type":"ContainerStarted","Data":"1be6fbce0be2d2a600566ad7a089efc0d76906ae49f8bc93720c22ae930e1161"} Feb 19 03:17:00.726742 master-0 kubenswrapper[7776]: I0219 03:17:00.726680 7776 generic.go:334] "Generic (PLEG): container finished" podID="947faa21-7f67-4c7e-abb0-443432f38961" containerID="e9143bad584a01b8037b50bf9ae64c2f6ebd210d85d1e8c74f1189744a7dd59c" exitCode=0 Feb 19 03:17:00.726859 master-0 kubenswrapper[7776]: I0219 03:17:00.726745 7776 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" event={"ID":"947faa21-7f67-4c7e-abb0-443432f38961","Type":"ContainerDied","Data":"e9143bad584a01b8037b50bf9ae64c2f6ebd210d85d1e8c74f1189744a7dd59c"} Feb 19 03:17:00.752683 master-0 kubenswrapper[7776]: I0219 03:17:00.752554 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" podStartSLOduration=2.8602523570000002 podStartE2EDuration="4.752527103s" podCreationTimestamp="2026-02-19 03:16:56 +0000 UTC" firstStartedPulling="2026-02-19 03:16:57.554442881 +0000 UTC m=+723.894127399" lastFinishedPulling="2026-02-19 03:16:59.446717627 +0000 UTC m=+725.786402145" observedRunningTime="2026-02-19 03:17:00.748001803 +0000 UTC m=+727.087686331" watchObservedRunningTime="2026-02-19 03:17:00.752527103 +0000 UTC m=+727.092211631" Feb 19 03:17:00.762765 master-0 kubenswrapper[7776]: I0219 03:17:00.762710 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:00.762765 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:00.762765 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:00.762765 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:00.763052 master-0 kubenswrapper[7776]: I0219 03:17:00.762809 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:00.800869 master-0 kubenswrapper[7776]: I0219 03:17:00.799910 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bbwkg" podStartSLOduration=34.799880907 podStartE2EDuration="34.799880907s" podCreationTimestamp="2026-02-19 03:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:17:00.796741847 +0000 UTC m=+727.136426375" watchObservedRunningTime="2026-02-19 03:17:00.799880907 +0000 UTC m=+727.139565455" Feb 19 03:17:01.016089 master-0 kubenswrapper[7776]: I0219 03:17:01.016028 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 19 03:17:01.017297 master-0 kubenswrapper[7776]: I0219 03:17:01.017244 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:01.019687 master-0 kubenswrapper[7776]: I0219 03:17:01.019512 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-rqfgf" Feb 19 03:17:01.019687 master-0 kubenswrapper[7776]: I0219 03:17:01.019520 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 19 03:17:01.028363 master-0 kubenswrapper[7776]: I0219 03:17:01.028303 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 19 03:17:01.069694 master-0 kubenswrapper[7776]: I0219 03:17:01.069657 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:01.069893 master-0 kubenswrapper[7776]: I0219 03:17:01.069732 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/402778fb-ac93-4d3a-bc4e-7416c49a4061-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:01.069893 master-0 kubenswrapper[7776]: I0219 03:17:01.069823 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:01.171938 master-0 kubenswrapper[7776]: I0219 03:17:01.171890 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:01.172143 master-0 kubenswrapper[7776]: I0219 03:17:01.171973 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/402778fb-ac93-4d3a-bc4e-7416c49a4061-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:01.172143 master-0 kubenswrapper[7776]: I0219 03:17:01.171997 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:01.172143 master-0 kubenswrapper[7776]: I0219 03:17:01.172125 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " 
pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:01.172233 master-0 kubenswrapper[7776]: I0219 03:17:01.172167 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:01.188451 master-0 kubenswrapper[7776]: I0219 03:17:01.188379 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/402778fb-ac93-4d3a-bc4e-7416c49a4061-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:01.392413 master-0 kubenswrapper[7776]: I0219 03:17:01.391921 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:01.738393 master-0 kubenswrapper[7776]: I0219 03:17:01.738245 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8g26m" event={"ID":"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed","Type":"ContainerStarted","Data":"f47f5c2617d7ec11a7618f3301d492b41e2d0e4bec16d61ac756a37525ae7a7a"} Feb 19 03:17:01.738393 master-0 kubenswrapper[7776]: I0219 03:17:01.738308 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8g26m" event={"ID":"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed","Type":"ContainerStarted","Data":"01c510bb4e294e5e3c099c0476d39126b60a961834b000d3ddc7d25fc6cead51"} Feb 19 03:17:01.763972 master-0 kubenswrapper[7776]: I0219 03:17:01.763917 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:01.763972 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:01.763972 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:01.763972 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:01.764630 master-0 kubenswrapper[7776]: I0219 03:17:01.763989 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:01.772892 master-0 kubenswrapper[7776]: I0219 03:17:01.772594 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-8g26m" podStartSLOduration=4.109946597 podStartE2EDuration="5.772555265s" podCreationTimestamp="2026-02-19 03:16:56 +0000 UTC" firstStartedPulling="2026-02-19 03:16:57.828339414 +0000 UTC m=+724.168023922" lastFinishedPulling="2026-02-19 03:16:59.490948072 +0000 UTC m=+725.830632590" observedRunningTime="2026-02-19 03:17:01.763717082 +0000 UTC m=+728.103401670" watchObservedRunningTime="2026-02-19 03:17:01.772555265 +0000 UTC m=+728.112239823" Feb 19 03:17:01.819473 master-0 kubenswrapper[7776]: I0219 03:17:01.819402 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 19 03:17:02.168805 master-0 kubenswrapper[7776]: I0219 03:17:02.168431 7776 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-68d9f4c46b-mh59n"] Feb 19 03:17:02.170724 master-0 kubenswrapper[7776]: I0219 03:17:02.170600 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.172562 master-0 kubenswrapper[7776]: I0219 03:17:02.172510 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-kjppx" Feb 19 03:17:02.174246 master-0 kubenswrapper[7776]: I0219 03:17:02.174190 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-b5da4s4ugo88o" Feb 19 03:17:02.174395 master-0 kubenswrapper[7776]: I0219 03:17:02.174366 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 19 03:17:02.174688 master-0 kubenswrapper[7776]: I0219 03:17:02.174649 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 19 03:17:02.175315 master-0 kubenswrapper[7776]: I0219 03:17:02.175288 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 19 03:17:02.175661 master-0 kubenswrapper[7776]: I0219 03:17:02.175635 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 19 03:17:02.190781 master-0 kubenswrapper[7776]: I0219 03:17:02.190706 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-68d9f4c46b-mh59n"] Feb 19 03:17:02.292971 master-0 kubenswrapper[7776]: I0219 03:17:02.292816 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn4dg\" (UniqueName: \"kubernetes.io/projected/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-kube-api-access-pn4dg\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.292971 master-0 kubenswrapper[7776]: I0219 03:17:02.292873 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.293285 master-0 kubenswrapper[7776]: I0219 03:17:02.293071 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.293285 master-0 kubenswrapper[7776]: I0219 03:17:02.293108 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-audit-log\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.293285 master-0 kubenswrapper[7776]: I0219 03:17:02.293138 7776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.293285 master-0 kubenswrapper[7776]: I0219 03:17:02.293169 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.293285 master-0 kubenswrapper[7776]: I0219 03:17:02.293201 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.394539 master-0 kubenswrapper[7776]: I0219 03:17:02.394489 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.394539 master-0 kubenswrapper[7776]: I0219 03:17:02.394537 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-audit-log\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.394744 master-0 kubenswrapper[7776]: I0219 03:17:02.394602 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.394744 master-0 kubenswrapper[7776]: I0219 03:17:02.394632 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.394744 master-0 kubenswrapper[7776]: I0219 03:17:02.394664 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.394744 master-0 kubenswrapper[7776]: I0219 03:17:02.394711 7776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn4dg\" (UniqueName: \"kubernetes.io/projected/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-kube-api-access-pn4dg\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.394744 master-0 kubenswrapper[7776]: I0219 03:17:02.394729 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.396001 master-0 kubenswrapper[7776]: I0219 03:17:02.395928 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-audit-log\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.396918 master-0 kubenswrapper[7776]: I0219 03:17:02.396867 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.397917 master-0 kubenswrapper[7776]: I0219 03:17:02.397852 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.399384 master-0 kubenswrapper[7776]: I0219 03:17:02.399352 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.399465 master-0 kubenswrapper[7776]: I0219 03:17:02.399414 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.399949 master-0 kubenswrapper[7776]: I0219 03:17:02.399902 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.421070 master-0 kubenswrapper[7776]: I0219 03:17:02.421014 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn4dg\" (UniqueName: 
\"kubernetes.io/projected/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-kube-api-access-pn4dg\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.535546 master-0 kubenswrapper[7776]: I0219 03:17:02.535494 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:02.753859 master-0 kubenswrapper[7776]: I0219 03:17:02.753078 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"402778fb-ac93-4d3a-bc4e-7416c49a4061","Type":"ContainerStarted","Data":"e1a07313a2933802cf62d384385baaaecb3c372bcb5aabbcc186bb282740e81b"} Feb 19 03:17:02.753859 master-0 kubenswrapper[7776]: I0219 03:17:02.753186 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"402778fb-ac93-4d3a-bc4e-7416c49a4061","Type":"ContainerStarted","Data":"d86702a952f96c82b209454f5a8421f9f15531387895bfc549a591987747f66a"} Feb 19 03:17:02.764774 master-0 kubenswrapper[7776]: I0219 03:17:02.764331 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:02.764774 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:02.764774 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:02.764774 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:02.764774 master-0 kubenswrapper[7776]: I0219 03:17:02.764432 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:02.781140 master-0 kubenswrapper[7776]: I0219 03:17:02.781048 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" podStartSLOduration=1.781028106 podStartE2EDuration="1.781028106s" podCreationTimestamp="2026-02-19 03:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:17:02.779344658 +0000 UTC m=+729.119029186" watchObservedRunningTime="2026-02-19 03:17:02.781028106 +0000 UTC m=+729.120712634" Feb 19 03:17:03.004990 master-0 kubenswrapper[7776]: I0219 03:17:03.004935 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-68d9f4c46b-mh59n"] Feb 19 03:17:03.763226 master-0 kubenswrapper[7776]: I0219 03:17:03.763165 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:03.763226 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:03.763226 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:03.763226 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:03.763805 master-0 kubenswrapper[7776]: I0219 03:17:03.763244 7776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:03.765701 master-0 kubenswrapper[7776]: I0219 03:17:03.765659 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" event={"ID":"22370ccf-c383-4c1e-96f2-b5c61bb0cebe","Type":"ContainerStarted","Data":"383b491b9f27144fe9b7a96c0308977fdc414552864afb1ce6b22fbacc40b8ac"} Feb 19 03:17:04.762447 master-0 kubenswrapper[7776]: I0219 03:17:04.762380 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:04.762447 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:04.762447 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:04.762447 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:04.762716 master-0 kubenswrapper[7776]: I0219 03:17:04.762460 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:05.764670 master-0 kubenswrapper[7776]: I0219 03:17:05.764628 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:05.764670 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:05.764670 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:05.764670 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:05.765386 master-0 kubenswrapper[7776]: I0219 03:17:05.765361 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:05.779617 master-0 kubenswrapper[7776]: I0219 03:17:05.779555 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" event={"ID":"22370ccf-c383-4c1e-96f2-b5c61bb0cebe","Type":"ContainerStarted","Data":"0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c"} Feb 19 03:17:05.801325 master-0 kubenswrapper[7776]: I0219 03:17:05.801222 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" podStartSLOduration=2.080413273 podStartE2EDuration="3.801201586s" podCreationTimestamp="2026-02-19 03:17:02 +0000 UTC" firstStartedPulling="2026-02-19 03:17:03.016186181 +0000 UTC m=+729.355870699" lastFinishedPulling="2026-02-19 03:17:04.736974504 +0000 UTC m=+731.076659012" observedRunningTime="2026-02-19 03:17:05.800392683 +0000 UTC m=+732.140077201" watchObservedRunningTime="2026-02-19 03:17:05.801201586 +0000 UTC m=+732.140886114" Feb 19 03:17:06.763516 master-0 kubenswrapper[7776]: I0219 03:17:06.763461 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:06.763516 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:06.763516 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:06.763516 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:06.763516 master-0 kubenswrapper[7776]: I0219 03:17:06.763519 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:07.640315 master-0 kubenswrapper[7776]: I0219 03:17:07.640166 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-cjz9l_b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/authentication-operator/0.log" Feb 19 03:17:07.764161 master-0 kubenswrapper[7776]: I0219 03:17:07.764046 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:07.764161 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:07.764161 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:07.764161 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:07.764625 master-0 kubenswrapper[7776]: I0219 03:17:07.764159 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:07.840714 master-0 kubenswrapper[7776]: I0219 03:17:07.840622 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-cjz9l_b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/authentication-operator/1.log" Feb 19 03:17:07.843076 master-0 kubenswrapper[7776]: I0219 03:17:07.842998 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:17:07.843455 master-0 kubenswrapper[7776]: E0219 03:17:07.843364 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:17:08.040065 master-0 kubenswrapper[7776]: I0219 03:17:08.039883 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7b65dc9fcb-t6jnq_76470062-ab83-47ed-a669-deeb71996548/router/0.log" Feb 19 03:17:08.240442 master-0 kubenswrapper[7776]: I0219 03:17:08.240350 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7b65dc9fcb-t6jnq_76470062-ab83-47ed-a669-deeb71996548/router/1.log" Feb 19 03:17:08.249337 master-0 kubenswrapper[7776]: E0219 03:17:08.249248 7776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 03:17:08.251379 master-0 kubenswrapper[7776]: E0219 03:17:08.250805 7776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 03:17:08.264060 master-0 kubenswrapper[7776]: E0219 03:17:08.263459 7776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 03:17:08.264430 master-0 kubenswrapper[7776]: E0219 03:17:08.264078 7776 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" podUID="5cdccda9-48ed-4823-a717-99dd1716383a" containerName="kube-multus-additional-cni-plugins" Feb 19 03:17:08.435706 master-0 kubenswrapper[7776]: I0219 03:17:08.435112 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-85f97c6ffb-qfcnk_ace60ebd-e405-4fd2-96fe-7b16a9e11a07/fix-audit-permissions/0.log" Feb 19 03:17:08.645099 master-0 kubenswrapper[7776]: I0219 03:17:08.645007 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-85f97c6ffb-qfcnk_ace60ebd-e405-4fd2-96fe-7b16a9e11a07/oauth-apiserver/0.log" Feb 19 03:17:08.765615 master-0 kubenswrapper[7776]: I0219 03:17:08.765455 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:08.765615 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:08.765615 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:08.765615 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:08.765615 master-0 kubenswrapper[7776]: I0219 03:17:08.765531 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:08.832521 master-0 kubenswrapper[7776]: I0219 03:17:08.832440 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/3.log" Feb 19 03:17:09.034218 master-0 kubenswrapper[7776]: I0219 03:17:09.034044 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/4.log" Feb 19 03:17:09.231220 master-0 kubenswrapper[7776]: I0219 03:17:09.231126 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/setup/0.log" Feb 19 03:17:09.430724 master-0 
kubenswrapper[7776]: I0219 03:17:09.430629 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-ensure-env-vars/0.log" Feb 19 03:17:09.630735 master-0 kubenswrapper[7776]: I0219 03:17:09.630651 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-resources-copy/0.log" Feb 19 03:17:09.764641 master-0 kubenswrapper[7776]: I0219 03:17:09.764454 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:09.764641 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:09.764641 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:09.764641 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:09.764641 master-0 kubenswrapper[7776]: I0219 03:17:09.764568 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:09.831865 master-0 kubenswrapper[7776]: I0219 03:17:09.831806 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 19 03:17:10.035541 master-0 kubenswrapper[7776]: I0219 03:17:10.035377 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log" Feb 19 03:17:10.233089 master-0 kubenswrapper[7776]: I0219 03:17:10.233038 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 19 03:17:10.430874 master-0 kubenswrapper[7776]: I0219 03:17:10.430835 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-readyz/0.log" Feb 19 03:17:10.631749 master-0 kubenswrapper[7776]: I0219 03:17:10.631688 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 19 03:17:10.764015 master-0 kubenswrapper[7776]: I0219 03:17:10.763819 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:10.764015 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:10.764015 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:10.764015 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:10.764015 master-0 kubenswrapper[7776]: I0219 03:17:10.763880 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:10.837669 master-0 kubenswrapper[7776]: I0219 03:17:10.837627 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_2561caa0-5f79-496e-8fa7-a9692dca20be/installer/0.log" Feb 19 03:17:11.031242 master-0 
kubenswrapper[7776]: I0219 03:17:11.031095 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/2.log" Feb 19 03:17:11.233755 master-0 kubenswrapper[7776]: I0219 03:17:11.233664 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/3.log" Feb 19 03:17:11.432406 master-0 kubenswrapper[7776]: I0219 03:17:11.432322 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/setup/0.log" Feb 19 03:17:11.640868 master-0 kubenswrapper[7776]: I0219 03:17:11.640701 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver/0.log" Feb 19 03:17:11.763764 master-0 kubenswrapper[7776]: I0219 03:17:11.763625 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:11.763764 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:11.763764 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:11.763764 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:11.763764 master-0 kubenswrapper[7776]: I0219 03:17:11.763708 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:11.830978 master-0 kubenswrapper[7776]: I0219 03:17:11.830907 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver-insecure-readyz/0.log" Feb 19 03:17:12.036068 master-0 kubenswrapper[7776]: I0219 03:17:12.035964 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1bddb3a1-41bd-4314-bfb0-3c72ca14200f/installer/0.log" Feb 19 03:17:12.236707 master-0 kubenswrapper[7776]: I0219 03:17:12.236663 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4/installer/0.log" Feb 19 03:17:12.435340 master-0 kubenswrapper[7776]: I0219 03:17:12.435289 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-retry-1-master-0_f2d9bbbb-77bd-4978-9f37-d3c54b780fbf/installer/0.log" Feb 19 03:17:12.632386 master-0 kubenswrapper[7776]: I0219 03:17:12.632297 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/2.log" Feb 19 03:17:12.764823 master-0 kubenswrapper[7776]: I0219 03:17:12.764661 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 19 03:17:12.764823 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:12.764823 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:12.764823 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:12.764823 master-0 kubenswrapper[7776]: I0219 03:17:12.764756 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:12.839525 master-0 kubenswrapper[7776]: I0219 03:17:12.839453 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/3.log" Feb 19 03:17:13.037178 master-0 kubenswrapper[7776]: I0219 03:17:13.037056 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/3.log" Feb 19 03:17:13.237496 master-0 kubenswrapper[7776]: I0219 03:17:13.237433 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/cluster-policy-controller/0.log" Feb 19 03:17:13.434783 master-0 kubenswrapper[7776]: I0219 03:17:13.434741 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/4.log" Feb 19 03:17:13.635508 master-0 kubenswrapper[7776]: I0219 03:17:13.635441 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/cluster-policy-controller/1.log" Feb 19 03:17:13.764490 master-0 kubenswrapper[7776]: I0219 03:17:13.764336 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:13.764490 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:13.764490 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:13.764490 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:13.764490 master-0 kubenswrapper[7776]: I0219 03:17:13.764428 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:13.837467 master-0 kubenswrapper[7776]: I0219 03:17:13.837410 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/0.log" Feb 19 03:17:14.040208 master-0 kubenswrapper[7776]: I0219 03:17:14.040071 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/1.log" Feb 19 03:17:14.233622 master-0 kubenswrapper[7776]: I0219 03:17:14.233514 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_66b05aeb-22a8-4008-a582-072f63cc46bf/installer/0.log" Feb 19 03:17:14.436181 
master-0 kubenswrapper[7776]: I0219 03:17:14.436136 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-retry-1-master-0_402778fb-ac93-4d3a-bc4e-7416c49a4061/installer/0.log" Feb 19 03:17:14.630689 master-0 kubenswrapper[7776]: I0219 03:17:14.630626 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/2.log" Feb 19 03:17:14.763275 master-0 kubenswrapper[7776]: I0219 03:17:14.763170 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:14.763275 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:14.763275 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:14.763275 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:14.763571 master-0 kubenswrapper[7776]: I0219 03:17:14.763545 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:14.841636 master-0 kubenswrapper[7776]: I0219 03:17:14.841570 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/3.log" Feb 19 03:17:15.030600 master-0 kubenswrapper[7776]: I0219 03:17:15.030492 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-mcz8l_fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/openshift-apiserver-operator/1.log" Feb 19 03:17:15.232397 master-0 kubenswrapper[7776]: I0219 03:17:15.232322 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-mcz8l_fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/openshift-apiserver-operator/2.log" Feb 19 03:17:15.431277 master-0 kubenswrapper[7776]: I0219 03:17:15.431198 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-957b9456f-f5s8c_c569676a-51dd-418c-87a5-719c18fe4c95/fix-audit-permissions/0.log" Feb 19 03:17:15.639209 master-0 kubenswrapper[7776]: I0219 03:17:15.639125 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-957b9456f-f5s8c_c569676a-51dd-418c-87a5-719c18fe4c95/openshift-apiserver/0.log" Feb 19 03:17:15.764232 master-0 kubenswrapper[7776]: I0219 03:17:15.764065 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:15.764232 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:15.764232 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:15.764232 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:15.764232 master-0 kubenswrapper[7776]: I0219 03:17:15.764175 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" 
podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:15.836732 master-0 kubenswrapper[7776]: I0219 03:17:15.836668 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-957b9456f-f5s8c_c569676a-51dd-418c-87a5-719c18fe4c95/openshift-apiserver-check-endpoints/0.log" Feb 19 03:17:16.033114 master-0 kubenswrapper[7776]: I0219 03:17:16.032896 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/3.log" Feb 19 03:17:16.241321 master-0 kubenswrapper[7776]: I0219 03:17:16.241212 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/4.log" Feb 19 03:17:16.433149 master-0 kubenswrapper[7776]: I0219 03:17:16.433075 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/3.log" Feb 19 03:17:16.635879 master-0 kubenswrapper[7776]: I0219 03:17:16.635831 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/4.log" Feb 19 03:17:16.764846 master-0 kubenswrapper[7776]: I0219 03:17:16.764685 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:16.764846 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:16.764846 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:16.764846 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:16.764846 master-0 kubenswrapper[7776]: I0219 03:17:16.764789 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:16.843853 master-0 kubenswrapper[7776]: I0219 03:17:16.843763 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-7b74b5f84f-v8ldx_06898300-c6e2-4d64-9ebf-d20f4338cccc/controller-manager/0.log" Feb 19 03:17:17.038212 master-0 kubenswrapper[7776]: I0219 03:17:17.038065 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-895bf76d5-65vdk_6acd115e-71e1-4a50-8892-fc6ea2927fec/route-controller-manager/0.log" Feb 19 03:17:17.239011 master-0 kubenswrapper[7776]: I0219 03:17:17.238962 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-sbzsk_c50a2aec-7ed0-4114-8b25-19579fe931cb/catalog-operator/0.log" Feb 19 03:17:17.431761 master-0 kubenswrapper[7776]: I0219 03:17:17.431699 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29524515-txbbt_e08a5432-b9f1-4b15-84c4-df9d6276a414/collect-profiles/0.log" Feb 19 03:17:17.643203 master-0 
kubenswrapper[7776]: I0219 03:17:17.643111 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5499d7f7bb-kk77t_b283bd8e-3339-4701-ae3c-f009e498b7d4/olm-operator/0.log" Feb 19 03:17:17.764423 master-0 kubenswrapper[7776]: I0219 03:17:17.764249 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:17.764423 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:17.764423 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:17.764423 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:17.764423 master-0 kubenswrapper[7776]: I0219 03:17:17.764339 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:18.039126 master-0 kubenswrapper[7776]: I0219 03:17:18.039018 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/0.log" Feb 19 03:17:18.233100 master-0 kubenswrapper[7776]: I0219 03:17:18.233008 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/kube-rbac-proxy/0.log" Feb 19 03:17:18.249404 master-0 kubenswrapper[7776]: E0219 03:17:18.249331 7776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 03:17:18.251209 master-0 kubenswrapper[7776]: E0219 03:17:18.251120 7776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 03:17:18.253963 master-0 kubenswrapper[7776]: E0219 03:17:18.253890 7776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 03:17:18.253963 master-0 kubenswrapper[7776]: E0219 03:17:18.253940 7776 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" podUID="5cdccda9-48ed-4823-a717-99dd1716383a" containerName="kube-multus-additional-cni-plugins" Feb 19 03:17:18.432936 master-0 kubenswrapper[7776]: I0219 03:17:18.432853 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/1.log" Feb 19 03:17:18.637514 master-0 kubenswrapper[7776]: I0219 03:17:18.637434 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-7d77f88776-s4jxm_2576028c-40d8-4ef4-ba41-a5aff01f2ed3/packageserver/0.log" Feb 19 03:17:18.764532 master-0 kubenswrapper[7776]: I0219 03:17:18.764365 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:18.764532 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:18.764532 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:18.764532 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:18.764532 master-0 kubenswrapper[7776]: I0219 03:17:18.764441 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:19.762851 master-0 kubenswrapper[7776]: I0219 03:17:19.762797 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:19.762851 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:19.762851 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:19.762851 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:19.763466 master-0 kubenswrapper[7776]: I0219 03:17:19.762859 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:20.763422 master-0 kubenswrapper[7776]: I0219 03:17:20.763351 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:20.763422 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:20.763422 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:20.763422 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:20.764374 master-0 kubenswrapper[7776]: I0219 03:17:20.763446 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:20.843382 master-0 kubenswrapper[7776]: I0219 03:17:20.843329 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:17:20.843604 master-0 kubenswrapper[7776]: E0219 03:17:20.843577 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:17:21.658780 master-0 kubenswrapper[7776]: W0219 03:17:21.658708 7776 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ec16b3a_5d5c_46fe_87f0_89f93a2775ed.slice/crio-conmon-03aa8ad313bda1a2e83a4655bc8e8999ba5eab74fc27bc9c150cae062a8e7328.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ec16b3a_5d5c_46fe_87f0_89f93a2775ed.slice/crio-conmon-03aa8ad313bda1a2e83a4655bc8e8999ba5eab74fc27bc9c150cae062a8e7328.scope: no such file or directory Feb 19 03:17:21.662363 master-0 kubenswrapper[7776]: W0219 03:17:21.661599 7776 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ec16b3a_5d5c_46fe_87f0_89f93a2775ed.slice/crio-03aa8ad313bda1a2e83a4655bc8e8999ba5eab74fc27bc9c150cae062a8e7328.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ec16b3a_5d5c_46fe_87f0_89f93a2775ed.slice/crio-03aa8ad313bda1a2e83a4655bc8e8999ba5eab74fc27bc9c150cae062a8e7328.scope: no such file or directory Feb 19 03:17:21.710914 master-0 kubenswrapper[7776]: W0219 03:17:21.710115 7776 watcher.go:93] Error while processing event ("/sys/fs/cgroup/system.slice/systemd-tmpfiles-clean.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/system.slice/systemd-tmpfiles-clean.service: no such file or directory Feb 19 03:17:21.727576 master-0 kubenswrapper[7776]: E0219 03:17:21.715883 7776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e75f0c1_7a52_4ad6_9b0d_b34ca87c3aa4.slice/crio-1d99ca0c8f2a8b57be62e387dd79396f9f9921074e539cfaf44cf000be2aa849.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e75f0c1_7a52_4ad6_9b0d_b34ca87c3aa4.slice/crio-conmon-1d99ca0c8f2a8b57be62e387dd79396f9f9921074e539cfaf44cf000be2aa849.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947faa21_7f67_4c7e_abb0_443432f38961.slice/crio-conmon-e9143bad584a01b8037b50bf9ae64c2f6ebd210d85d1e8c74f1189744a7dd59c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947faa21_7f67_4c7e_abb0_443432f38961.slice/crio-e9143bad584a01b8037b50bf9ae64c2f6ebd210d85d1e8c74f1189744a7dd59c.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:17:21.727576 master-0 kubenswrapper[7776]: E0219 03:17:21.716296 7776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e75f0c1_7a52_4ad6_9b0d_b34ca87c3aa4.slice/crio-1d99ca0c8f2a8b57be62e387dd79396f9f9921074e539cfaf44cf000be2aa849.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e75f0c1_7a52_4ad6_9b0d_b34ca87c3aa4.slice/crio-conmon-1d99ca0c8f2a8b57be62e387dd79396f9f9921074e539cfaf44cf000be2aa849.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947faa21_7f67_4c7e_abb0_443432f38961.slice/crio-e9143bad584a01b8037b50bf9ae64c2f6ebd210d85d1e8c74f1189744a7dd59c.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:17:21.727576 master-0 kubenswrapper[7776]: E0219 03:17:21.718391 7776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947faa21_7f67_4c7e_abb0_443432f38961.slice/crio-e9143bad584a01b8037b50bf9ae64c2f6ebd210d85d1e8c74f1189744a7dd59c.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:17:21.739675 master-0 kubenswrapper[7776]: I0219 03:17:21.739630 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9bq57_5cdccda9-48ed-4823-a717-99dd1716383a/kube-multus-additional-cni-plugins/0.log" Feb 19 03:17:21.740146 master-0 kubenswrapper[7776]: I0219 03:17:21.739711 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:17:21.779871 master-0 kubenswrapper[7776]: I0219 03:17:21.779782 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:21.779871 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:21.779871 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:21.779871 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:21.780449 master-0 kubenswrapper[7776]: I0219 03:17:21.779877 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:21.801577 master-0 kubenswrapper[7776]: I0219 03:17:21.801536 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5cdccda9-48ed-4823-a717-99dd1716383a-tuning-conf-dir\") pod \"5cdccda9-48ed-4823-a717-99dd1716383a\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " Feb 19 03:17:21.801809 master-0 kubenswrapper[7776]: I0219 03:17:21.801626 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5cdccda9-48ed-4823-a717-99dd1716383a-ready\") pod \"5cdccda9-48ed-4823-a717-99dd1716383a\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " Feb 19 03:17:21.801809 master-0 kubenswrapper[7776]: I0219 03:17:21.801647 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5cdccda9-48ed-4823-a717-99dd1716383a-cni-sysctl-allowlist\") pod \"5cdccda9-48ed-4823-a717-99dd1716383a\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " Feb 19 03:17:21.801809 master-0 kubenswrapper[7776]: I0219 03:17:21.801682 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkddk\" 
(UniqueName: \"kubernetes.io/projected/5cdccda9-48ed-4823-a717-99dd1716383a-kube-api-access-fkddk\") pod \"5cdccda9-48ed-4823-a717-99dd1716383a\" (UID: \"5cdccda9-48ed-4823-a717-99dd1716383a\") " Feb 19 03:17:21.801809 master-0 kubenswrapper[7776]: I0219 03:17:21.801687 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdccda9-48ed-4823-a717-99dd1716383a-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "5cdccda9-48ed-4823-a717-99dd1716383a" (UID: "5cdccda9-48ed-4823-a717-99dd1716383a"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:21.802025 master-0 kubenswrapper[7776]: I0219 03:17:21.801996 7776 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5cdccda9-48ed-4823-a717-99dd1716383a-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:21.802135 master-0 kubenswrapper[7776]: I0219 03:17:21.802108 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cdccda9-48ed-4823-a717-99dd1716383a-ready" (OuterVolumeSpecName: "ready") pod "5cdccda9-48ed-4823-a717-99dd1716383a" (UID: "5cdccda9-48ed-4823-a717-99dd1716383a"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:17:21.802213 master-0 kubenswrapper[7776]: I0219 03:17:21.802180 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cdccda9-48ed-4823-a717-99dd1716383a-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "5cdccda9-48ed-4823-a717-99dd1716383a" (UID: "5cdccda9-48ed-4823-a717-99dd1716383a"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:17:21.804787 master-0 kubenswrapper[7776]: I0219 03:17:21.804752 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cdccda9-48ed-4823-a717-99dd1716383a-kube-api-access-fkddk" (OuterVolumeSpecName: "kube-api-access-fkddk") pod "5cdccda9-48ed-4823-a717-99dd1716383a" (UID: "5cdccda9-48ed-4823-a717-99dd1716383a"). InnerVolumeSpecName "kube-api-access-fkddk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:17:21.889813 master-0 kubenswrapper[7776]: I0219 03:17:21.889774 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9bq57_5cdccda9-48ed-4823-a717-99dd1716383a/kube-multus-additional-cni-plugins/0.log" Feb 19 03:17:21.890038 master-0 kubenswrapper[7776]: I0219 03:17:21.889840 7776 generic.go:334] "Generic (PLEG): container finished" podID="5cdccda9-48ed-4823-a717-99dd1716383a" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" exitCode=137 Feb 19 03:17:21.890038 master-0 kubenswrapper[7776]: I0219 03:17:21.889878 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" event={"ID":"5cdccda9-48ed-4823-a717-99dd1716383a","Type":"ContainerDied","Data":"bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8"} Feb 19 03:17:21.890038 master-0 kubenswrapper[7776]: I0219 03:17:21.889924 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" event={"ID":"5cdccda9-48ed-4823-a717-99dd1716383a","Type":"ContainerDied","Data":"1d9b2d562ca318ca7aa1397a7e55c515f0bc118aea8c40c8a869a1845dea2184"} Feb 19 03:17:21.890038 master-0 kubenswrapper[7776]: I0219 03:17:21.889932 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9bq57" Feb 19 03:17:21.890211 master-0 kubenswrapper[7776]: I0219 03:17:21.889948 7776 scope.go:117] "RemoveContainer" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" Feb 19 03:17:21.905413 master-0 kubenswrapper[7776]: I0219 03:17:21.905362 7776 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5cdccda9-48ed-4823-a717-99dd1716383a-ready\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:21.905413 master-0 kubenswrapper[7776]: I0219 03:17:21.905408 7776 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5cdccda9-48ed-4823-a717-99dd1716383a-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:21.905413 master-0 kubenswrapper[7776]: I0219 03:17:21.905424 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkddk\" (UniqueName: \"kubernetes.io/projected/5cdccda9-48ed-4823-a717-99dd1716383a-kube-api-access-fkddk\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:21.914143 master-0 kubenswrapper[7776]: I0219 03:17:21.913200 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9bq57"] Feb 19 03:17:21.914143 master-0 kubenswrapper[7776]: I0219 03:17:21.913397 7776 scope.go:117] "RemoveContainer" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" Feb 19 03:17:21.914374 master-0 kubenswrapper[7776]: E0219 03:17:21.914318 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8\": container with ID starting with bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8 not found: ID does not exist" containerID="bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8" Feb 19 03:17:21.914427 master-0 kubenswrapper[7776]: I0219 03:17:21.914386 7776 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8"} err="failed to get container status \"bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8\": rpc error: code = NotFound desc = could not find container \"bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8\": container with ID starting with bb55801890264f71dedb97eba444ab48b410bfd4f5e74fe62541fc621cf24ff8 not found: ID does not exist" Feb 19 03:17:21.919683 master-0 kubenswrapper[7776]: I0219 03:17:21.919641 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9bq57"] Feb 19 03:17:22.536105 master-0 kubenswrapper[7776]: I0219 03:17:22.536006 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:22.536105 master-0 kubenswrapper[7776]: I0219 03:17:22.536107 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:22.763850 master-0 kubenswrapper[7776]: I0219 03:17:22.763766 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:22.763850 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:22.763850 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:22.763850 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:22.764197 master-0 kubenswrapper[7776]: I0219 03:17:22.763888 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:23.501534 master-0 kubenswrapper[7776]: I0219 03:17:23.501449 7776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 19 03:17:23.502120 master-0 kubenswrapper[7776]: I0219 03:17:23.501857 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" containerID="cri-o://7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc" gracePeriod=30 Feb 19 03:17:23.502120 master-0 kubenswrapper[7776]: I0219 03:17:23.502010 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2" gracePeriod=30 Feb 19 03:17:23.503939 master-0 kubenswrapper[7776]: I0219 03:17:23.503872 7776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:17:23.504450 master-0 kubenswrapper[7776]: E0219 03:17:23.504398 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 19 03:17:23.504450 master-0 kubenswrapper[7776]: I0219 03:17:23.504443 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" 
containerName="cluster-policy-controller" Feb 19 03:17:23.504611 master-0 kubenswrapper[7776]: E0219 03:17:23.504488 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.504611 master-0 kubenswrapper[7776]: I0219 03:17:23.504507 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.504611 master-0 kubenswrapper[7776]: E0219 03:17:23.504531 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.504611 master-0 kubenswrapper[7776]: I0219 03:17:23.504549 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.504611 master-0 kubenswrapper[7776]: E0219 03:17:23.504570 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.504611 master-0 kubenswrapper[7776]: I0219 03:17:23.504587 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.504936 master-0 kubenswrapper[7776]: E0219 03:17:23.504624 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.504936 master-0 kubenswrapper[7776]: I0219 03:17:23.504642 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.504936 master-0 kubenswrapper[7776]: E0219 03:17:23.504673 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.504936 master-0 kubenswrapper[7776]: I0219 03:17:23.504690 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.504936 master-0 kubenswrapper[7776]: E0219 03:17:23.504730 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cdccda9-48ed-4823-a717-99dd1716383a" containerName="kube-multus-additional-cni-plugins" Feb 19 03:17:23.504936 master-0 kubenswrapper[7776]: I0219 03:17:23.504748 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cdccda9-48ed-4823-a717-99dd1716383a" containerName="kube-multus-additional-cni-plugins" Feb 19 03:17:23.505291 master-0 kubenswrapper[7776]: I0219 03:17:23.505021 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.505291 master-0 kubenswrapper[7776]: I0219 03:17:23.505045 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 19 03:17:23.505291 master-0 kubenswrapper[7776]: I0219 03:17:23.505062 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.505291 master-0 kubenswrapper[7776]: I0219 03:17:23.505079 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cdccda9-48ed-4823-a717-99dd1716383a" containerName="kube-multus-additional-cni-plugins" Feb 19 
03:17:23.505291 master-0 kubenswrapper[7776]: I0219 03:17:23.505105 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.505557 master-0 kubenswrapper[7776]: E0219 03:17:23.505351 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 19 03:17:23.505557 master-0 kubenswrapper[7776]: I0219 03:17:23.505369 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 19 03:17:23.505788 master-0 kubenswrapper[7776]: I0219 03:17:23.505740 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 19 03:17:23.505788 master-0 kubenswrapper[7776]: I0219 03:17:23.505783 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.505913 master-0 kubenswrapper[7776]: I0219 03:17:23.505818 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 19 03:17:23.507836 master-0 kubenswrapper[7776]: I0219 03:17:23.507765 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:23.531727 master-0 kubenswrapper[7776]: I0219 03:17:23.531633 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:23.531958 master-0 kubenswrapper[7776]: I0219 03:17:23.531929 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:23.633740 master-0 kubenswrapper[7776]: I0219 03:17:23.633685 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:23.633864 master-0 kubenswrapper[7776]: I0219 03:17:23.633824 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:23.633922 master-0 kubenswrapper[7776]: I0219 03:17:23.633896 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: 
\"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:23.634064 master-0 kubenswrapper[7776]: I0219 03:17:23.634032 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:23.675868 master-0 kubenswrapper[7776]: I0219 03:17:23.675798 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:17:23.688129 master-0 kubenswrapper[7776]: I0219 03:17:23.687897 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:17:23.688347 master-0 kubenswrapper[7776]: I0219 03:17:23.688196 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:23.710023 master-0 kubenswrapper[7776]: I0219 03:17:23.709968 7776 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="161820cf-ccaf-499f-bbb7-a4a0c3f0f809" Feb 19 03:17:23.716752 master-0 kubenswrapper[7776]: W0219 03:17:23.716687 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50eac3d8c63234f2a49e98044c0d4f67.slice/crio-5506ac36fbaf2416aa135b7e1945e22b7c62738888b7f9b117791bba76b3408f WatchSource:0}: Error finding container 5506ac36fbaf2416aa135b7e1945e22b7c62738888b7f9b117791bba76b3408f: Status 404 returned error can't find the container with id 5506ac36fbaf2416aa135b7e1945e22b7c62738888b7f9b117791bba76b3408f Feb 19 03:17:23.735246 master-0 kubenswrapper[7776]: I0219 03:17:23.735200 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 19 03:17:23.735391 master-0 kubenswrapper[7776]: I0219 03:17:23.735295 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 19 03:17:23.735391 master-0 kubenswrapper[7776]: I0219 03:17:23.735343 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 19 03:17:23.735460 master-0 kubenswrapper[7776]: I0219 03:17:23.735333 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:23.735492 master-0 kubenswrapper[7776]: I0219 03:17:23.735362 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets" (OuterVolumeSpecName: "secrets") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:23.735492 master-0 kubenswrapper[7776]: I0219 03:17:23.735465 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs" (OuterVolumeSpecName: "logs") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:23.735492 master-0 kubenswrapper[7776]: I0219 03:17:23.735441 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 19 03:17:23.735575 master-0 kubenswrapper[7776]: I0219 03:17:23.735422 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config" (OuterVolumeSpecName: "config") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:23.735575 master-0 kubenswrapper[7776]: I0219 03:17:23.735528 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 19 03:17:23.735683 master-0 kubenswrapper[7776]: I0219 03:17:23.735650 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:23.735817 master-0 kubenswrapper[7776]: I0219 03:17:23.735791 7776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:23.735817 master-0 kubenswrapper[7776]: I0219 03:17:23.735811 7776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:23.735880 master-0 kubenswrapper[7776]: I0219 03:17:23.735820 7776 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:23.735880 master-0 kubenswrapper[7776]: I0219 03:17:23.735829 7776 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:23.735880 master-0 kubenswrapper[7776]: I0219 03:17:23.735837 7776 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:23.762972 master-0 kubenswrapper[7776]: I0219 03:17:23.762860 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:23.762972 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:23.762972 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:23.762972 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:23.762972 master-0 kubenswrapper[7776]: I0219 03:17:23.762919 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:23.854027 master-0 kubenswrapper[7776]: I0219 03:17:23.853917 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cdccda9-48ed-4823-a717-99dd1716383a" path="/var/lib/kubelet/pods/5cdccda9-48ed-4823-a717-99dd1716383a/volumes" Feb 19 03:17:23.854373 master-0 kubenswrapper[7776]: I0219 03:17:23.854336 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ad9373c007a4fcd25e70622bdc8deb" path="/var/lib/kubelet/pods/c9ad9373c007a4fcd25e70622bdc8deb/volumes" Feb 19 03:17:23.854760 master-0 kubenswrapper[7776]: I0219 03:17:23.854722 7776 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Feb 19 03:17:23.894137 master-0 kubenswrapper[7776]: I0219 03:17:23.889144 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 19 03:17:23.894137 master-0 kubenswrapper[7776]: I0219 03:17:23.889191 7776 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="161820cf-ccaf-499f-bbb7-a4a0c3f0f809" Feb 19 03:17:23.900638 master-0 kubenswrapper[7776]: I0219 03:17:23.900590 
7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 19 03:17:23.900766 master-0 kubenswrapper[7776]: I0219 03:17:23.900636 7776 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="161820cf-ccaf-499f-bbb7-a4a0c3f0f809" Feb 19 03:17:23.905424 master-0 kubenswrapper[7776]: I0219 03:17:23.904637 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"5506ac36fbaf2416aa135b7e1945e22b7c62738888b7f9b117791bba76b3408f"} Feb 19 03:17:23.909031 master-0 kubenswrapper[7776]: I0219 03:17:23.909000 7776 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2" exitCode=0 Feb 19 03:17:23.909110 master-0 kubenswrapper[7776]: I0219 03:17:23.909033 7776 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc" exitCode=0 Feb 19 03:17:23.909110 master-0 kubenswrapper[7776]: I0219 03:17:23.909094 7776 scope.go:117] "RemoveContainer" containerID="f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2" Feb 19 03:17:23.909242 master-0 kubenswrapper[7776]: I0219 03:17:23.909229 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 19 03:17:23.918520 master-0 kubenswrapper[7776]: I0219 03:17:23.914083 7776 generic.go:334] "Generic (PLEG): container finished" podID="f2d9bbbb-77bd-4978-9f37-d3c54b780fbf" containerID="13f1d80c6e6d45699a9dea951ab1e9a8aa64be91ab5359ccb9eae52f989fd916" exitCode=0 Feb 19 03:17:23.918520 master-0 kubenswrapper[7776]: I0219 03:17:23.914143 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf","Type":"ContainerDied","Data":"13f1d80c6e6d45699a9dea951ab1e9a8aa64be91ab5359ccb9eae52f989fd916"} Feb 19 03:17:23.936488 master-0 kubenswrapper[7776]: I0219 03:17:23.936445 7776 scope.go:117] "RemoveContainer" containerID="7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc" Feb 19 03:17:23.956846 master-0 kubenswrapper[7776]: I0219 03:17:23.955508 7776 scope.go:117] "RemoveContainer" containerID="17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9" Feb 19 03:17:23.981945 master-0 kubenswrapper[7776]: I0219 03:17:23.981890 7776 scope.go:117] "RemoveContainer" containerID="6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a" Feb 19 03:17:24.003056 master-0 kubenswrapper[7776]: I0219 03:17:24.003016 7776 scope.go:117] "RemoveContainer" containerID="f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2" Feb 19 03:17:24.003638 master-0 kubenswrapper[7776]: E0219 03:17:24.003597 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2\": container with ID starting with f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2 not found: ID does not exist" containerID="f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2" Feb 19 03:17:24.003725 
master-0 kubenswrapper[7776]: I0219 03:17:24.003661 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2"} err="failed to get container status \"f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2\": rpc error: code = NotFound desc = could not find container \"f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2\": container with ID starting with f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2 not found: ID does not exist" Feb 19 03:17:24.003725 master-0 kubenswrapper[7776]: I0219 03:17:24.003695 7776 scope.go:117] "RemoveContainer" containerID="7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc" Feb 19 03:17:24.004142 master-0 kubenswrapper[7776]: E0219 03:17:24.004098 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc\": container with ID starting with 7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc not found: ID does not exist" containerID="7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc" Feb 19 03:17:24.004142 master-0 kubenswrapper[7776]: I0219 03:17:24.004133 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc"} err="failed to get container status \"7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc\": rpc error: code = NotFound desc = could not find container \"7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc\": container with ID starting with 7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc not found: ID does not exist" Feb 19 03:17:24.004142 master-0 kubenswrapper[7776]: I0219 03:17:24.004147 7776 scope.go:117] "RemoveContainer" containerID="17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9" Feb 19 03:17:24.004672 master-0 kubenswrapper[7776]: E0219 03:17:24.004637 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9\": container with ID starting with 17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9 not found: ID does not exist" containerID="17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9" Feb 19 03:17:24.004752 master-0 kubenswrapper[7776]: I0219 03:17:24.004683 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9"} err="failed to get container status \"17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9\": rpc error: code = NotFound desc = could not find container \"17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9\": container with ID starting with 17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9 not found: ID does not exist" Feb 19 03:17:24.004752 master-0 kubenswrapper[7776]: I0219 03:17:24.004700 7776 scope.go:117] "RemoveContainer" containerID="6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a" Feb 19 03:17:24.005411 master-0 kubenswrapper[7776]: E0219 03:17:24.005369 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a\": container with ID starting with 6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a not found: ID does not exist" containerID="6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a" Feb 19 03:17:24.005499 master-0 kubenswrapper[7776]: I0219 03:17:24.005416 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a"} err="failed to get container status \"6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a\": rpc error: code = NotFound desc = could not find container \"6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a\": container with ID starting with 6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a not found: ID does not exist" Feb 19 03:17:24.005499 master-0 kubenswrapper[7776]: I0219 03:17:24.005445 7776 scope.go:117] "RemoveContainer" containerID="f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2" Feb 19 03:17:24.006564 master-0 kubenswrapper[7776]: I0219 03:17:24.006506 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2"} err="failed to get container status \"f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2\": rpc error: code = NotFound desc = could not find container \"f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2\": container with ID starting with f3aff87ddc1ce78f463b9bf997824b4b96ab5281b112df2f1f52b49c8a7196c2 not found: ID does not exist" Feb 19 03:17:24.006673 master-0 kubenswrapper[7776]: I0219 03:17:24.006568 7776 scope.go:117] "RemoveContainer" containerID="7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc" Feb 19 03:17:24.007578 master-0 kubenswrapper[7776]: I0219 03:17:24.007529 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc"} err="failed to get container status \"7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc\": rpc error: code = NotFound desc = could not find container \"7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc\": container with ID starting with 7288925e11aeb25bbb335f078c9e31792f6d4f0af89b33f7a655b0486d5cedcc not found: ID does not exist" Feb 19 03:17:24.007578 master-0 kubenswrapper[7776]: I0219 03:17:24.007575 7776 scope.go:117] "RemoveContainer" containerID="17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9" Feb 19 03:17:24.007943 master-0 kubenswrapper[7776]: I0219 03:17:24.007913 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9"} err="failed to get container status \"17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9\": rpc error: code = NotFound desc = could not find container \"17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9\": container with ID starting with 17f060b64624663306aaff9ca8780e9e0cf400ba4cb7bb72c95042efa65abad9 not found: ID does not exist" Feb 19 03:17:24.008017 master-0 kubenswrapper[7776]: I0219 03:17:24.007953 7776 scope.go:117] "RemoveContainer" containerID="6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a" Feb 19 03:17:24.008514 master-0 kubenswrapper[7776]: I0219 03:17:24.008488 7776 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a"} err="failed to get container status \"6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a\": rpc error: code = NotFound desc = could not find container \"6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a\": container with ID starting with 6e12c2dd8c433a964ba0631c8a75130e2fb3658b0e63e79670e5c1aab6fbdf1a not found: ID does not exist" Feb 19 03:17:24.763338 master-0 kubenswrapper[7776]: I0219 03:17:24.763287 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:24.763338 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:24.763338 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:24.763338 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:24.763837 master-0 kubenswrapper[7776]: I0219 03:17:24.763358 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:24.925784 master-0 kubenswrapper[7776]: I0219 03:17:24.925704 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b"} Feb 19 03:17:24.925784 master-0 kubenswrapper[7776]: I0219 03:17:24.925758 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706"} Feb 19 03:17:24.925784 master-0 kubenswrapper[7776]: I0219 03:17:24.925776 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"52f129c7009e6597cab7613e274a5e92bff18227b925d3ec2d217acbeb4c8d74"} Feb 19 03:17:24.925784 master-0 kubenswrapper[7776]: I0219 03:17:24.925788 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"6e39b4ae8e2c1020e55e9a8991002fceb2451697ce51c87e07c50c9ac50db7bc"} Feb 19 03:17:24.950496 master-0 kubenswrapper[7776]: I0219 03:17:24.950338 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=1.9503176450000002 podStartE2EDuration="1.950317645s" podCreationTimestamp="2026-02-19 03:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:17:24.949859222 +0000 UTC m=+751.289543750" watchObservedRunningTime="2026-02-19 03:17:24.950317645 +0000 UTC m=+751.290002163" Feb 19 03:17:25.266548 master-0 kubenswrapper[7776]: I0219 03:17:25.266505 7776 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:17:25.363548 master-0 kubenswrapper[7776]: I0219 03:17:25.362936 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kube-api-access\") pod \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " Feb 19 03:17:25.363548 master-0 kubenswrapper[7776]: I0219 03:17:25.363115 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-var-lock\") pod \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " Feb 19 03:17:25.363548 master-0 kubenswrapper[7776]: I0219 03:17:25.363173 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kubelet-dir\") pod \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\" (UID: \"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf\") " Feb 19 03:17:25.363548 master-0 kubenswrapper[7776]: I0219 03:17:25.363513 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-var-lock" (OuterVolumeSpecName: "var-lock") pod "f2d9bbbb-77bd-4978-9f37-d3c54b780fbf" (UID: "f2d9bbbb-77bd-4978-9f37-d3c54b780fbf"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:25.364027 master-0 kubenswrapper[7776]: I0219 03:17:25.363606 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f2d9bbbb-77bd-4978-9f37-d3c54b780fbf" (UID: "f2d9bbbb-77bd-4978-9f37-d3c54b780fbf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:25.364027 master-0 kubenswrapper[7776]: I0219 03:17:25.363879 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:25.364027 master-0 kubenswrapper[7776]: I0219 03:17:25.363904 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:25.366791 master-0 kubenswrapper[7776]: I0219 03:17:25.366716 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f2d9bbbb-77bd-4978-9f37-d3c54b780fbf" (UID: "f2d9bbbb-77bd-4978-9f37-d3c54b780fbf"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:17:25.465932 master-0 kubenswrapper[7776]: I0219 03:17:25.465836 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2d9bbbb-77bd-4978-9f37-d3c54b780fbf-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:25.763583 master-0 kubenswrapper[7776]: I0219 03:17:25.763535 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:25.763583 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:25.763583 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:25.763583 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:25.764275 master-0 kubenswrapper[7776]: I0219 03:17:25.763658 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:25.938518 master-0 kubenswrapper[7776]: I0219 03:17:25.938438 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"f2d9bbbb-77bd-4978-9f37-d3c54b780fbf","Type":"ContainerDied","Data":"43a446ea9c6c338c0be1b08a79588f504347b99fd5d06b7e02469e7d9756ac6f"} Feb 19 03:17:25.938518 master-0 kubenswrapper[7776]: I0219 03:17:25.938500 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a446ea9c6c338c0be1b08a79588f504347b99fd5d06b7e02469e7d9756ac6f" Feb 19 03:17:25.938518 master-0 kubenswrapper[7776]: I0219 03:17:25.938457 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:17:26.764072 master-0 kubenswrapper[7776]: I0219 03:17:26.763984 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:26.764072 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:26.764072 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:26.764072 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:26.765039 master-0 kubenswrapper[7776]: I0219 03:17:26.764080 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:27.764303 master-0 kubenswrapper[7776]: I0219 03:17:27.764205 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:17:27.764303 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:17:27.764303 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:17:27.764303 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:17:27.764303 master-0 kubenswrapper[7776]: I0219 03:17:27.764282 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:17:27.765469 master-0 kubenswrapper[7776]: I0219 03:17:27.764339 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:17:27.765469 master-0 kubenswrapper[7776]: I0219 03:17:27.764940 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"a9877e6164fd70e4cefb580b5faf9495b5d88f56b0eabc9be1b0d949563be3bd"} pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" containerMessage="Container router failed startup probe, will be restarted" Feb 19 03:17:27.765469 master-0 kubenswrapper[7776]: I0219 03:17:27.764984 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" containerID="cri-o://a9877e6164fd70e4cefb580b5faf9495b5d88f56b0eabc9be1b0d949563be3bd" gracePeriod=3600 Feb 19 03:17:29.967808 master-0 kubenswrapper[7776]: I0219 03:17:29.967739 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-q8pfv_947faa21-7f67-4c7e-abb0-443432f38961/multus-admission-controller/0.log" Feb 19 03:17:29.967808 master-0 kubenswrapper[7776]: I0219 03:17:29.967793 7776 generic.go:334] "Generic (PLEG): container finished" podID="947faa21-7f67-4c7e-abb0-443432f38961" containerID="7779bc9360a96d18f167a4e3e0b6db49a68f34d021af87222f6e2c102a74d376" exitCode=137 Feb 19 03:17:29.967808 master-0 kubenswrapper[7776]: I0219 03:17:29.967822 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" event={"ID":"947faa21-7f67-4c7e-abb0-443432f38961","Type":"ContainerDied","Data":"7779bc9360a96d18f167a4e3e0b6db49a68f34d021af87222f6e2c102a74d376"} Feb 19 03:17:30.672183 master-0 kubenswrapper[7776]: I0219 03:17:30.672141 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-q8pfv_947faa21-7f67-4c7e-abb0-443432f38961/multus-admission-controller/0.log" Feb 19 03:17:30.672390 master-0 kubenswrapper[7776]: I0219 03:17:30.672224 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:17:30.754571 master-0 kubenswrapper[7776]: I0219 03:17:30.754476 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl7k7\" (UniqueName: \"kubernetes.io/projected/947faa21-7f67-4c7e-abb0-443432f38961-kube-api-access-jl7k7\") pod \"947faa21-7f67-4c7e-abb0-443432f38961\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " Feb 19 03:17:30.754814 master-0 kubenswrapper[7776]: I0219 03:17:30.754625 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") pod \"947faa21-7f67-4c7e-abb0-443432f38961\" (UID: \"947faa21-7f67-4c7e-abb0-443432f38961\") " Feb 19 03:17:30.758344 master-0 kubenswrapper[7776]: I0219 03:17:30.758268 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947faa21-7f67-4c7e-abb0-443432f38961-kube-api-access-jl7k7" (OuterVolumeSpecName: "kube-api-access-jl7k7") pod "947faa21-7f67-4c7e-abb0-443432f38961" (UID: "947faa21-7f67-4c7e-abb0-443432f38961"). InnerVolumeSpecName "kube-api-access-jl7k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:17:30.760377 master-0 kubenswrapper[7776]: I0219 03:17:30.760306 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "947faa21-7f67-4c7e-abb0-443432f38961" (UID: "947faa21-7f67-4c7e-abb0-443432f38961"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:17:30.858354 master-0 kubenswrapper[7776]: I0219 03:17:30.857566 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl7k7\" (UniqueName: \"kubernetes.io/projected/947faa21-7f67-4c7e-abb0-443432f38961-kube-api-access-jl7k7\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:30.858354 master-0 kubenswrapper[7776]: I0219 03:17:30.857622 7776 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/947faa21-7f67-4c7e-abb0-443432f38961-webhook-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:30.977547 master-0 kubenswrapper[7776]: I0219 03:17:30.977441 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-q8pfv_947faa21-7f67-4c7e-abb0-443432f38961/multus-admission-controller/0.log" Feb 19 03:17:30.978143 master-0 kubenswrapper[7776]: I0219 03:17:30.978115 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" event={"ID":"947faa21-7f67-4c7e-abb0-443432f38961","Type":"ContainerDied","Data":"92da4e2c41faed23ae9536b6cf450fa8714135f86f0f23ad77b009821e031601"} Feb 19 03:17:30.978250 master-0 kubenswrapper[7776]: I0219 03:17:30.978207 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv" Feb 19 03:17:30.978365 master-0 kubenswrapper[7776]: I0219 03:17:30.978346 7776 scope.go:117] "RemoveContainer" containerID="e9143bad584a01b8037b50bf9ae64c2f6ebd210d85d1e8c74f1189744a7dd59c" Feb 19 03:17:31.000970 master-0 kubenswrapper[7776]: I0219 03:17:31.000505 7776 scope.go:117] "RemoveContainer" containerID="7779bc9360a96d18f167a4e3e0b6db49a68f34d021af87222f6e2c102a74d376" Feb 19 03:17:31.050078 master-0 kubenswrapper[7776]: I0219 03:17:31.049999 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv"] Feb 19 03:17:31.057634 master-0 kubenswrapper[7776]: I0219 03:17:31.057570 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv"] Feb 19 03:17:31.566419 master-0 kubenswrapper[7776]: I0219 03:17:31.566360 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 19 03:17:31.566696 master-0 kubenswrapper[7776]: E0219 03:17:31.566668 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d9bbbb-77bd-4978-9f37-d3c54b780fbf" containerName="installer" Feb 19 03:17:31.566696 master-0 kubenswrapper[7776]: I0219 03:17:31.566691 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d9bbbb-77bd-4978-9f37-d3c54b780fbf" containerName="installer" Feb 19 03:17:31.566774 master-0 kubenswrapper[7776]: E0219 03:17:31.566709 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947faa21-7f67-4c7e-abb0-443432f38961" containerName="multus-admission-controller" Feb 19 03:17:31.566774 master-0 kubenswrapper[7776]: I0219 03:17:31.566718 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="947faa21-7f67-4c7e-abb0-443432f38961" containerName="multus-admission-controller" Feb 19 03:17:31.566774 master-0 kubenswrapper[7776]: E0219 03:17:31.566737 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947faa21-7f67-4c7e-abb0-443432f38961" containerName="kube-rbac-proxy" Feb 19 03:17:31.566774 master-0 kubenswrapper[7776]: 
I0219 03:17:31.566745 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="947faa21-7f67-4c7e-abb0-443432f38961" containerName="kube-rbac-proxy" Feb 19 03:17:31.566888 master-0 kubenswrapper[7776]: I0219 03:17:31.566871 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="947faa21-7f67-4c7e-abb0-443432f38961" containerName="multus-admission-controller" Feb 19 03:17:31.566920 master-0 kubenswrapper[7776]: I0219 03:17:31.566889 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d9bbbb-77bd-4978-9f37-d3c54b780fbf" containerName="installer" Feb 19 03:17:31.566920 master-0 kubenswrapper[7776]: I0219 03:17:31.566900 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="947faa21-7f67-4c7e-abb0-443432f38961" containerName="kube-rbac-proxy" Feb 19 03:17:31.567463 master-0 kubenswrapper[7776]: I0219 03:17:31.567429 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 19 03:17:31.569829 master-0 kubenswrapper[7776]: I0219 03:17:31.569798 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 19 03:17:31.569903 master-0 kubenswrapper[7776]: I0219 03:17:31.569877 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-mbcxl" Feb 19 03:17:31.582815 master-0 kubenswrapper[7776]: I0219 03:17:31.582747 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 19 03:17:31.669479 master-0 kubenswrapper[7776]: I0219 03:17:31.669410 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " pod="openshift-etcd/installer-2-master-0" Feb 19 03:17:31.669479 master-0 kubenswrapper[7776]: I0219 03:17:31.669473 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-var-lock\") pod \"installer-2-master-0\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " pod="openshift-etcd/installer-2-master-0" Feb 19 03:17:31.669732 master-0 kubenswrapper[7776]: I0219 03:17:31.669507 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kube-api-access\") pod \"installer-2-master-0\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " pod="openshift-etcd/installer-2-master-0" Feb 19 03:17:31.770487 master-0 kubenswrapper[7776]: I0219 03:17:31.770407 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " pod="openshift-etcd/installer-2-master-0" Feb 19 03:17:31.770487 master-0 kubenswrapper[7776]: I0219 03:17:31.770482 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-var-lock\") pod \"installer-2-master-0\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " pod="openshift-etcd/installer-2-master-0" Feb 19 03:17:31.771364 master-0 
kubenswrapper[7776]: I0219 03:17:31.770625 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " pod="openshift-etcd/installer-2-master-0" Feb 19 03:17:31.771364 master-0 kubenswrapper[7776]: I0219 03:17:31.770697 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-var-lock\") pod \"installer-2-master-0\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " pod="openshift-etcd/installer-2-master-0" Feb 19 03:17:31.771364 master-0 kubenswrapper[7776]: I0219 03:17:31.770777 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kube-api-access\") pod \"installer-2-master-0\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " pod="openshift-etcd/installer-2-master-0" Feb 19 03:17:31.786497 master-0 kubenswrapper[7776]: I0219 03:17:31.786390 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kube-api-access\") pod \"installer-2-master-0\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " pod="openshift-etcd/installer-2-master-0" Feb 19 03:17:31.842540 master-0 kubenswrapper[7776]: I0219 03:17:31.842361 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:17:31.842905 master-0 kubenswrapper[7776]: E0219 03:17:31.842637 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:17:31.855907 master-0 kubenswrapper[7776]: I0219 03:17:31.855850 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="947faa21-7f67-4c7e-abb0-443432f38961" path="/var/lib/kubelet/pods/947faa21-7f67-4c7e-abb0-443432f38961/volumes" Feb 19 03:17:31.889269 master-0 kubenswrapper[7776]: I0219 03:17:31.889179 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 19 03:17:32.284270 master-0 kubenswrapper[7776]: I0219 03:17:32.284191 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 19 03:17:32.295696 master-0 kubenswrapper[7776]: W0219 03:17:32.295623 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod60ce7e75_5190_49a1_b1b7_b3adf0bdf2e3.slice/crio-951494debcdd0ff7db2f410b57e8c2c9ed7b3f2e54fda90b5fd97c799ae6ccba WatchSource:0}: Error finding container 951494debcdd0ff7db2f410b57e8c2c9ed7b3f2e54fda90b5fd97c799ae6ccba: Status 404 returned error can't find the container with id 951494debcdd0ff7db2f410b57e8c2c9ed7b3f2e54fda90b5fd97c799ae6ccba Feb 19 03:17:32.923684 master-0 kubenswrapper[7776]: I0219 03:17:32.923505 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 19 03:17:32.924475 master-0 kubenswrapper[7776]: I0219 03:17:32.924435 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:17:32.929135 master-0 kubenswrapper[7776]: I0219 03:17:32.928856 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 03:17:32.932639 master-0 kubenswrapper[7776]: I0219 03:17:32.932590 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-l5ps6" Feb 19 03:17:32.944303 master-0 kubenswrapper[7776]: I0219 03:17:32.944206 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 19 03:17:32.988358 master-0 kubenswrapper[7776]: I0219 03:17:32.988296 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-var-lock\") pod \"installer-2-master-0\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:17:32.988577 master-0 kubenswrapper[7776]: I0219 03:17:32.988428 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aef097d-bea5-404d-b26b-aed9142ddf14-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:17:32.988577 master-0 kubenswrapper[7776]: I0219 03:17:32.988485 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:17:32.994715 master-0 kubenswrapper[7776]: I0219 03:17:32.994649 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3","Type":"ContainerStarted","Data":"21e26a22b1efe279782f76fa7cfe3a983a36a3e7247df0cc7bcc0fa254258e19"} Feb 19 03:17:32.994715 master-0 kubenswrapper[7776]: I0219 03:17:32.994712 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" 
event={"ID":"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3","Type":"ContainerStarted","Data":"951494debcdd0ff7db2f410b57e8c2c9ed7b3f2e54fda90b5fd97c799ae6ccba"} Feb 19 03:17:33.012374 master-0 kubenswrapper[7776]: I0219 03:17:33.012310 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.012292515 podStartE2EDuration="2.012292515s" podCreationTimestamp="2026-02-19 03:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:17:33.010745001 +0000 UTC m=+759.350429539" watchObservedRunningTime="2026-02-19 03:17:33.012292515 +0000 UTC m=+759.351977033" Feb 19 03:17:33.089818 master-0 kubenswrapper[7776]: I0219 03:17:33.089736 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-var-lock\") pod \"installer-2-master-0\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:17:33.090041 master-0 kubenswrapper[7776]: I0219 03:17:33.089902 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-var-lock\") pod \"installer-2-master-0\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:17:33.090094 master-0 kubenswrapper[7776]: I0219 03:17:33.090072 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aef097d-bea5-404d-b26b-aed9142ddf14-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:17:33.090216 master-0 kubenswrapper[7776]: I0219 03:17:33.090195 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:17:33.090579 master-0 kubenswrapper[7776]: I0219 03:17:33.090458 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:17:33.115347 master-0 kubenswrapper[7776]: I0219 03:17:33.115321 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aef097d-bea5-404d-b26b-aed9142ddf14-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:17:33.263000 master-0 kubenswrapper[7776]: I0219 03:17:33.262884 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:17:33.411444 master-0 kubenswrapper[7776]: I0219 03:17:33.411370 7776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 19 03:17:33.411444 master-0 kubenswrapper[7776]: I0219 03:17:33.411439 7776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 19 03:17:33.411878 master-0 kubenswrapper[7776]: E0219 03:17:33.411771 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 19 03:17:33.411878 master-0 kubenswrapper[7776]: I0219 03:17:33.411791 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 19 03:17:33.411878 master-0 kubenswrapper[7776]: E0219 03:17:33.411809 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 19 03:17:33.411878 master-0 kubenswrapper[7776]: I0219 03:17:33.411817 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 19 03:17:33.412042 master-0 kubenswrapper[7776]: I0219 03:17:33.412010 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 19 03:17:33.412042 master-0 kubenswrapper[7776]: I0219 03:17:33.412035 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 19 03:17:33.412757 master-0 kubenswrapper[7776]: I0219 03:17:33.412376 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" containerID="cri-o://66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd" gracePeriod=30 Feb 19 03:17:33.417749 master-0 kubenswrapper[7776]: I0219 03:17:33.413500 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:17:33.496720 master-0 kubenswrapper[7776]: I0219 03:17:33.496615 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:17:33.496720 master-0 kubenswrapper[7776]: I0219 03:17:33.496669 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:17:33.539842 master-0 kubenswrapper[7776]: I0219 03:17:33.539772 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 19 03:17:33.588245 master-0 kubenswrapper[7776]: I0219 03:17:33.588204 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:17:33.598523 master-0 kubenswrapper[7776]: I0219 03:17:33.598457 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:17:33.598523 master-0 kubenswrapper[7776]: I0219 03:17:33.598511 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:17:33.598848 master-0 kubenswrapper[7776]: I0219 03:17:33.598664 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:17:33.598848 master-0 kubenswrapper[7776]: I0219 03:17:33.598736 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:17:33.606443 master-0 kubenswrapper[7776]: I0219 03:17:33.606381 7776 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="eed0456d-28e0-4892-a243-78c0d5dd0610" Feb 19 03:17:33.689423 master-0 kubenswrapper[7776]: I0219 03:17:33.689364 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:33.689423 master-0 kubenswrapper[7776]: I0219 03:17:33.689424 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:33.689958 master-0 kubenswrapper[7776]: I0219 03:17:33.689931 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:33.690118 master-0 kubenswrapper[7776]: I0219 03:17:33.690102 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:33.694005 master-0 kubenswrapper[7776]: I0219 03:17:33.693782 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:33.694354 master-0 kubenswrapper[7776]: I0219 03:17:33.694333 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:33.699313 master-0 kubenswrapper[7776]: I0219 03:17:33.699250 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod 
\"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " Feb 19 03:17:33.699381 master-0 kubenswrapper[7776]: I0219 03:17:33.699362 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " Feb 19 03:17:33.699791 master-0 kubenswrapper[7776]: I0219 03:17:33.699768 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets" (OuterVolumeSpecName: "secrets") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:33.704335 master-0 kubenswrapper[7776]: I0219 03:17:33.699906 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs" (OuterVolumeSpecName: "logs") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:33.704335 master-0 kubenswrapper[7776]: I0219 03:17:33.701166 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 19 03:17:33.708286 master-0 kubenswrapper[7776]: W0219 03:17:33.708225 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4aef097d_bea5_404d_b26b_aed9142ddf14.slice/crio-e99d11f1b7f7b440e7693112746ef8c230d71c911d07941ae7bb0938acb8a034 WatchSource:0}: Error finding container e99d11f1b7f7b440e7693112746ef8c230d71c911d07941ae7bb0938acb8a034: Status 404 returned error can't find the container with id e99d11f1b7f7b440e7693112746ef8c230d71c911d07941ae7bb0938acb8a034 Feb 19 03:17:33.807333 master-0 kubenswrapper[7776]: I0219 03:17:33.807052 7776 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:33.807333 master-0 kubenswrapper[7776]: I0219 03:17:33.807106 7776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:33.838656 master-0 kubenswrapper[7776]: I0219 03:17:33.831604 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:17:33.865168 master-0 kubenswrapper[7776]: I0219 03:17:33.865094 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c3cb71c9851003c8de7e7c5db4b87e" path="/var/lib/kubelet/pods/56c3cb71c9851003c8de7e7c5db4b87e/volumes" Feb 19 03:17:33.865604 master-0 kubenswrapper[7776]: I0219 03:17:33.865454 7776 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 19 03:17:33.896856 master-0 kubenswrapper[7776]: I0219 03:17:33.892692 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 19 03:17:33.896856 master-0 kubenswrapper[7776]: I0219 03:17:33.892731 7776 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="eed0456d-28e0-4892-a243-78c0d5dd0610" Feb 19 03:17:33.896856 master-0 kubenswrapper[7776]: I0219 03:17:33.895937 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 19 03:17:33.896856 master-0 kubenswrapper[7776]: I0219 03:17:33.895991 7776 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="eed0456d-28e0-4892-a243-78c0d5dd0610" Feb 19 03:17:34.004247 master-0 kubenswrapper[7776]: I0219 03:17:34.004173 7776 generic.go:334] "Generic (PLEG): container finished" podID="402778fb-ac93-4d3a-bc4e-7416c49a4061" containerID="e1a07313a2933802cf62d384385baaaecb3c372bcb5aabbcc186bb282740e81b" exitCode=0 Feb 19 03:17:34.004482 master-0 kubenswrapper[7776]: I0219 03:17:34.004289 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"402778fb-ac93-4d3a-bc4e-7416c49a4061","Type":"ContainerDied","Data":"e1a07313a2933802cf62d384385baaaecb3c372bcb5aabbcc186bb282740e81b"} Feb 19 03:17:34.005837 master-0 kubenswrapper[7776]: I0219 03:17:34.005774 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"4ff0199536e5f54a5bdaa7868fb5ea7e61ffa31ff819b0546dd411cddd134f43"} Feb 19 03:17:34.007069 master-0 kubenswrapper[7776]: I0219 03:17:34.007035 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"4aef097d-bea5-404d-b26b-aed9142ddf14","Type":"ContainerStarted","Data":"e99d11f1b7f7b440e7693112746ef8c230d71c911d07941ae7bb0938acb8a034"} Feb 19 03:17:34.009701 master-0 kubenswrapper[7776]: I0219 03:17:34.009649 7776 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd" exitCode=0 Feb 19 03:17:34.010150 master-0 kubenswrapper[7776]: I0219 03:17:34.010111 7776 scope.go:117] "RemoveContainer" containerID="66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd" Feb 19 03:17:34.010243 master-0 kubenswrapper[7776]: I0219 03:17:34.010231 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 19 03:17:34.014229 master-0 kubenswrapper[7776]: I0219 03:17:34.014183 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:34.016819 master-0 kubenswrapper[7776]: I0219 03:17:34.016490 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:17:34.048719 master-0 kubenswrapper[7776]: I0219 03:17:34.048667 7776 scope.go:117] "RemoveContainer" containerID="c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff" Feb 19 03:17:34.078146 master-0 kubenswrapper[7776]: I0219 03:17:34.077629 7776 scope.go:117] "RemoveContainer" containerID="66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd" Feb 19 03:17:34.078248 master-0 kubenswrapper[7776]: E0219 03:17:34.078185 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd\": container with ID starting with 66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd not found: ID does not exist" containerID="66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd" Feb 19 03:17:34.078248 master-0 kubenswrapper[7776]: I0219 03:17:34.078220 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd"} err="failed to get container status \"66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd\": rpc error: code = NotFound desc = could not find container \"66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd\": container with ID starting with 66f97a9bf9e141e23feeedb30ab447633b69256badde89c081df2f08c950dbfd not found: ID does not exist" Feb 19 03:17:34.078248 master-0 kubenswrapper[7776]: I0219 03:17:34.078241 7776 scope.go:117] "RemoveContainer" containerID="c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff" Feb 19 03:17:34.078643 master-0 kubenswrapper[7776]: E0219 03:17:34.078590 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff\": container with ID starting with c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff not found: ID does not exist" containerID="c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff" Feb 19 03:17:34.078707 master-0 kubenswrapper[7776]: I0219 03:17:34.078642 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff"} err="failed to get container status \"c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff\": rpc error: code = NotFound desc = could not find container \"c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff\": container with ID starting with c5c3d1fa02b48421156b365d74d212ad0520e6543ce74c7cab7039f773a737ff not found: ID does not exist" Feb 19 03:17:35.017675 master-0 kubenswrapper[7776]: I0219 03:17:35.017599 7776 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="d4ec4e49d4dd98a02afe5ae82b828a0c598d3a1b8c49a3c9012f434a6bee2385" exitCode=0 Feb 19 03:17:35.018316 master-0 
kubenswrapper[7776]: I0219 03:17:35.018290 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerDied","Data":"d4ec4e49d4dd98a02afe5ae82b828a0c598d3a1b8c49a3c9012f434a6bee2385"} Feb 19 03:17:35.020561 master-0 kubenswrapper[7776]: I0219 03:17:35.020495 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"4aef097d-bea5-404d-b26b-aed9142ddf14","Type":"ContainerStarted","Data":"aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37"} Feb 19 03:17:35.072427 master-0 kubenswrapper[7776]: I0219 03:17:35.071185 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=3.071168154 podStartE2EDuration="3.071168154s" podCreationTimestamp="2026-02-19 03:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:17:35.067370275 +0000 UTC m=+761.407054793" watchObservedRunningTime="2026-02-19 03:17:35.071168154 +0000 UTC m=+761.410852672" Feb 19 03:17:35.338021 master-0 kubenswrapper[7776]: I0219 03:17:35.337964 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:35.430669 master-0 kubenswrapper[7776]: I0219 03:17:35.430616 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-var-lock\") pod \"402778fb-ac93-4d3a-bc4e-7416c49a4061\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " Feb 19 03:17:35.430839 master-0 kubenswrapper[7776]: I0219 03:17:35.430731 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-kubelet-dir\") pod \"402778fb-ac93-4d3a-bc4e-7416c49a4061\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " Feb 19 03:17:35.430839 master-0 kubenswrapper[7776]: I0219 03:17:35.430727 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-var-lock" (OuterVolumeSpecName: "var-lock") pod "402778fb-ac93-4d3a-bc4e-7416c49a4061" (UID: "402778fb-ac93-4d3a-bc4e-7416c49a4061"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:35.430839 master-0 kubenswrapper[7776]: I0219 03:17:35.430776 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/402778fb-ac93-4d3a-bc4e-7416c49a4061-kube-api-access\") pod \"402778fb-ac93-4d3a-bc4e-7416c49a4061\" (UID: \"402778fb-ac93-4d3a-bc4e-7416c49a4061\") " Feb 19 03:17:35.430966 master-0 kubenswrapper[7776]: I0219 03:17:35.430810 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "402778fb-ac93-4d3a-bc4e-7416c49a4061" (UID: "402778fb-ac93-4d3a-bc4e-7416c49a4061"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:17:35.431164 master-0 kubenswrapper[7776]: I0219 03:17:35.431135 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:35.431164 master-0 kubenswrapper[7776]: I0219 03:17:35.431155 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/402778fb-ac93-4d3a-bc4e-7416c49a4061-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:35.433888 master-0 kubenswrapper[7776]: I0219 03:17:35.433829 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402778fb-ac93-4d3a-bc4e-7416c49a4061-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "402778fb-ac93-4d3a-bc4e-7416c49a4061" (UID: "402778fb-ac93-4d3a-bc4e-7416c49a4061"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:17:35.533657 master-0 kubenswrapper[7776]: I0219 03:17:35.532897 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/402778fb-ac93-4d3a-bc4e-7416c49a4061-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:17:36.030471 master-0 kubenswrapper[7776]: I0219 03:17:36.030409 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:17:36.031030 master-0 kubenswrapper[7776]: I0219 03:17:36.030406 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"402778fb-ac93-4d3a-bc4e-7416c49a4061","Type":"ContainerDied","Data":"d86702a952f96c82b209454f5a8421f9f15531387895bfc549a591987747f66a"} Feb 19 03:17:36.031030 master-0 kubenswrapper[7776]: I0219 03:17:36.030559 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d86702a952f96c82b209454f5a8421f9f15531387895bfc549a591987747f66a" Feb 19 03:17:36.034771 master-0 kubenswrapper[7776]: I0219 03:17:36.034689 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"0cf7d392da6a301b93f30bcc03748c612e502b9e965838935f8e427396fbdf21"} Feb 19 03:17:36.034771 master-0 kubenswrapper[7776]: I0219 03:17:36.034764 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"d0fbcab1791c1fa93d0b8382e393526b12e53a1efcdb373eae2fce501c101408"} Feb 19 03:17:36.034936 master-0 kubenswrapper[7776]: I0219 03:17:36.034787 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:17:36.034936 master-0 kubenswrapper[7776]: I0219 03:17:36.034802 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"ebeab0f2e4292264d96a63c87d2d2fdbec7d9f9a916fb23b3f013edea6328327"} Feb 19 03:17:36.057710 master-0 kubenswrapper[7776]: I0219 03:17:36.057603 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=3.057586484 podStartE2EDuration="3.057586484s" podCreationTimestamp="2026-02-19 03:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:17:36.054020403 +0000 UTC m=+762.393704931" watchObservedRunningTime="2026-02-19 03:17:36.057586484 +0000 UTC m=+762.397271002" Feb 19 03:17:42.544960 master-0 kubenswrapper[7776]: I0219 03:17:42.544870 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:42.551959 master-0 kubenswrapper[7776]: I0219 03:17:42.551927 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:17:42.842967 master-0 kubenswrapper[7776]: I0219 03:17:42.842857 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:17:42.843167 master-0 kubenswrapper[7776]: E0219 03:17:42.843028 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:17:47.149824 master-0 kubenswrapper[7776]: E0219 03:17:47.149731 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" podUID="33bb562f-84e7-4fcb-b008-416c09a5ecf0" Feb 19 03:17:47.149824 master-0 kubenswrapper[7776]: E0219 03:17:47.149731 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[samples-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" podUID="59cea4cb-6374-49b6-97b3-d8a19cc1860f" Feb 19 03:17:47.150645 master-0 kubenswrapper[7776]: E0219 03:17:47.150175 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cloud-credential-operator-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" podUID="858a717b-a44e-4b8d-9974-7451a89cf104" Feb 19 03:17:47.620167 master-0 kubenswrapper[7776]: I0219 03:17:47.620043 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:17:47.620167 master-0 kubenswrapper[7776]: I0219 03:17:47.620132 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:17:47.620392 master-0 kubenswrapper[7776]: I0219 03:17:47.620054 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:17:49.146226 master-0 kubenswrapper[7776]: I0219 03:17:49.146157 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:17:49.150168 master-0 kubenswrapper[7776]: I0219 03:17:49.150112 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:17:49.161222 master-0 kubenswrapper[7776]: E0219 03:17:49.161137 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-api-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" podUID="255784ad-b52a-4c5c-ad15-278865ee2ccb" Feb 19 03:17:49.248112 master-0 kubenswrapper[7776]: I0219 03:17:49.248026 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:17:49.248374 master-0 kubenswrapper[7776]: I0219 03:17:49.248229 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:17:49.252084 master-0 kubenswrapper[7776]: I0219 03:17:49.252007 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:17:49.252322 master-0 kubenswrapper[7776]: I0219 03:17:49.252107 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:17:49.421691 master-0 kubenswrapper[7776]: I0219 03:17:49.421449 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:17:49.421691 master-0 kubenswrapper[7776]: I0219 03:17:49.421449 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:17:49.421691 master-0 kubenswrapper[7776]: I0219 03:17:49.421588 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:17:49.633657 master-0 kubenswrapper[7776]: I0219 03:17:49.632725 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:17:49.901451 master-0 kubenswrapper[7776]: I0219 03:17:49.901407 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn"] Feb 19 03:17:49.901717 master-0 kubenswrapper[7776]: W0219 03:17:49.901631 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod858a717b_a44e_4b8d_9974_7451a89cf104.slice/crio-e2878c5bde889c9b5090839b4189995b59bf2a7eaa7045a344bf1f8020b8727b WatchSource:0}: Error finding container e2878c5bde889c9b5090839b4189995b59bf2a7eaa7045a344bf1f8020b8727b: Status 404 returned error can't find the container with id e2878c5bde889c9b5090839b4189995b59bf2a7eaa7045a344bf1f8020b8727b Feb 19 03:17:49.944108 master-0 kubenswrapper[7776]: I0219 03:17:49.943451 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874"] Feb 19 03:17:49.949734 master-0 kubenswrapper[7776]: I0219 03:17:49.949667 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj"] Feb 19 03:17:50.537370 master-0 kubenswrapper[7776]: I0219 03:17:50.537310 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 19 03:17:50.538042 master-0 kubenswrapper[7776]: E0219 03:17:50.537643 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402778fb-ac93-4d3a-bc4e-7416c49a4061" containerName="installer" Feb 19 03:17:50.538042 master-0 kubenswrapper[7776]: I0219 03:17:50.537667 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="402778fb-ac93-4d3a-bc4e-7416c49a4061" containerName="installer" Feb 19 03:17:50.538042 master-0 kubenswrapper[7776]: I0219 03:17:50.537876 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="402778fb-ac93-4d3a-bc4e-7416c49a4061" containerName="installer" Feb 19 03:17:50.538615 master-0 kubenswrapper[7776]: I0219 03:17:50.538581 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:17:50.543547 master-0 kubenswrapper[7776]: I0219 03:17:50.542983 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-rqfgf" Feb 19 03:17:50.546580 master-0 kubenswrapper[7776]: I0219 03:17:50.546555 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 19 03:17:50.556203 master-0 kubenswrapper[7776]: I0219 03:17:50.556103 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 19 03:17:50.647051 master-0 kubenswrapper[7776]: I0219 03:17:50.646978 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" event={"ID":"33bb562f-84e7-4fcb-b008-416c09a5ecf0","Type":"ContainerStarted","Data":"1c7c8e53038635871c96086a40c0ca8629a74201fcad3eb9601bc05b429db386"} Feb 19 03:17:50.647051 master-0 kubenswrapper[7776]: I0219 03:17:50.647037 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" event={"ID":"33bb562f-84e7-4fcb-b008-416c09a5ecf0","Type":"ContainerStarted","Data":"2e210c3c8004e773a0bdb2dc099fdf8b85ea7ff84b49ad9f3a84bc8f3cd8ea30"} Feb 19 03:17:50.649100 master-0 kubenswrapper[7776]: I0219 03:17:50.649039 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" event={"ID":"858a717b-a44e-4b8d-9974-7451a89cf104","Type":"ContainerStarted","Data":"9c0cb9af22022a8f5bf46e2b4ffd1b456e2eae6774e1002da98f2485205bbd5f"} Feb 19 03:17:50.649239 master-0 kubenswrapper[7776]: I0219 03:17:50.649104 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" event={"ID":"858a717b-a44e-4b8d-9974-7451a89cf104","Type":"ContainerStarted","Data":"e2878c5bde889c9b5090839b4189995b59bf2a7eaa7045a344bf1f8020b8727b"} Feb 19 03:17:50.650080 master-0 kubenswrapper[7776]: I0219 03:17:50.650040 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" event={"ID":"59cea4cb-6374-49b6-97b3-d8a19cc1860f","Type":"ContainerStarted","Data":"b1ed6c4c3d12558a0c8f33c888f0552999de0d4f4d9c1efc8cc0619df634d5b4"} Feb 19 03:17:50.665957 master-0 kubenswrapper[7776]: I0219 03:17:50.665882 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:17:50.666139 master-0 kubenswrapper[7776]: I0219 03:17:50.666056 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-var-lock\") pod \"installer-5-master-0\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:17:50.666139 master-0 kubenswrapper[7776]: I0219 03:17:50.666102 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:17:50.767149 master-0 kubenswrapper[7776]: I0219 03:17:50.767021 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-var-lock\") pod \"installer-5-master-0\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:17:50.767149 master-0 kubenswrapper[7776]: I0219 03:17:50.767093 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:17:50.767149 master-0 kubenswrapper[7776]: I0219 03:17:50.767139 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:17:50.768738 master-0 kubenswrapper[7776]: I0219 03:17:50.768686 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:17:50.768833 master-0 kubenswrapper[7776]: I0219 03:17:50.768784 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-var-lock\") pod \"installer-5-master-0\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:17:50.789335 master-0 kubenswrapper[7776]: I0219 03:17:50.789205 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:17:50.868622 master-0 kubenswrapper[7776]: I0219 03:17:50.868549 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:17:50.871852 master-0 kubenswrapper[7776]: I0219 03:17:50.871808 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:17:50.875490 master-0 kubenswrapper[7776]: I0219 03:17:50.875461 7776 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:17:51.134529 master-0 kubenswrapper[7776]: I0219 03:17:51.134449 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:17:51.291770 master-0 kubenswrapper[7776]: I0219 03:17:51.291726 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 19 03:17:51.562054 master-0 kubenswrapper[7776]: I0219 03:17:51.561960 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7"] Feb 19 03:17:51.658163 master-0 kubenswrapper[7776]: I0219 03:17:51.658115 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5","Type":"ContainerStarted","Data":"2c5e253906f92c4bc553e34db5acf8d0406570aeec90b10b8f3c9cf4861917cb"} Feb 19 03:17:52.665145 master-0 kubenswrapper[7776]: I0219 03:17:52.665102 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" event={"ID":"255784ad-b52a-4c5c-ad15-278865ee2ccb","Type":"ContainerStarted","Data":"a998a368841f373282c4c48f7a0c3385bacc2f3f776a934e2fcfec35d45e83ad"} Feb 19 03:17:52.666646 master-0 kubenswrapper[7776]: I0219 03:17:52.666617 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" event={"ID":"59cea4cb-6374-49b6-97b3-d8a19cc1860f","Type":"ContainerStarted","Data":"cbd6aef92433753cb8bef0ccd59808f6ac42e2484e91fcc8b2fb170ccf109b5a"} Feb 19 03:17:53.676732 master-0 kubenswrapper[7776]: I0219 03:17:53.676063 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" event={"ID":"33bb562f-84e7-4fcb-b008-416c09a5ecf0","Type":"ContainerStarted","Data":"d82f1d62a4598e11443003ec5ef88e612c12ce42b49596bc6743a6bd63edae81"} Feb 19 03:17:53.677988 master-0 kubenswrapper[7776]: I0219 03:17:53.677918 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" event={"ID":"255784ad-b52a-4c5c-ad15-278865ee2ccb","Type":"ContainerStarted","Data":"2aec2e1174bdd332e67df9c58fa9fb5348acf711dda9571634c9b172daf64f91"} Feb 19 03:17:53.681339 master-0 kubenswrapper[7776]: I0219 03:17:53.680576 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5","Type":"ContainerStarted","Data":"c81c932fbf92f00371681dc495d0483abb59c68940881cbb310e3f5f398e1f87"} Feb 19 03:17:53.686584 master-0 kubenswrapper[7776]: I0219 03:17:53.684818 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" event={"ID":"59cea4cb-6374-49b6-97b3-d8a19cc1860f","Type":"ContainerStarted","Data":"ad0bfd92ff5f5bb264295ad1922ac14a760ffc13b1d2c1c4e73b18ebb635d51f"} Feb 19 03:17:53.695910 master-0 kubenswrapper[7776]: I0219 03:17:53.695857 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" podStartSLOduration=375.412423315 podStartE2EDuration="6m17.695842852s" podCreationTimestamp="2026-02-19 03:11:36 +0000 UTC" firstStartedPulling="2026-02-19 03:17:50.165737247 +0000 UTC 
m=+776.505421765" lastFinishedPulling="2026-02-19 03:17:52.449156784 +0000 UTC m=+778.788841302" observedRunningTime="2026-02-19 03:17:53.692293481 +0000 UTC m=+780.031978019" watchObservedRunningTime="2026-02-19 03:17:53.695842852 +0000 UTC m=+780.035527370" Feb 19 03:17:53.741646 master-0 kubenswrapper[7776]: I0219 03:17:53.740618 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" podStartSLOduration=374.335856699 podStartE2EDuration="6m16.740600869s" podCreationTimestamp="2026-02-19 03:11:37 +0000 UTC" firstStartedPulling="2026-02-19 03:17:50.045989999 +0000 UTC m=+776.385674547" lastFinishedPulling="2026-02-19 03:17:52.450734179 +0000 UTC m=+778.790418717" observedRunningTime="2026-02-19 03:17:53.719267221 +0000 UTC m=+780.058951739" watchObservedRunningTime="2026-02-19 03:17:53.740600869 +0000 UTC m=+780.080285387" Feb 19 03:17:53.742978 master-0 kubenswrapper[7776]: I0219 03:17:53.742932 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=3.742925606 podStartE2EDuration="3.742925606s" podCreationTimestamp="2026-02-19 03:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:17:53.739637762 +0000 UTC m=+780.079322280" watchObservedRunningTime="2026-02-19 03:17:53.742925606 +0000 UTC m=+780.082610114" Feb 19 03:17:56.842613 master-0 kubenswrapper[7776]: I0219 03:17:56.842564 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:17:56.843199 master-0 kubenswrapper[7776]: E0219 03:17:56.842772 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:17:57.720890 master-0 kubenswrapper[7776]: I0219 03:17:57.719578 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" event={"ID":"858a717b-a44e-4b8d-9974-7451a89cf104","Type":"ContainerStarted","Data":"9520e9c69cde3dd09d7ca84eed47ddae2eba7ab4d49b4dc72b7a58af4af350c3"} Feb 19 03:17:57.740426 master-0 kubenswrapper[7776]: I0219 03:17:57.740161 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" podStartSLOduration=373.509247901 podStartE2EDuration="6m20.740084021s" podCreationTimestamp="2026-02-19 03:11:37 +0000 UTC" firstStartedPulling="2026-02-19 03:17:50.035539191 +0000 UTC m=+776.375223699" lastFinishedPulling="2026-02-19 03:17:57.266375301 +0000 UTC m=+783.606059819" observedRunningTime="2026-02-19 03:17:57.738699671 +0000 UTC m=+784.078384179" watchObservedRunningTime="2026-02-19 03:17:57.740084021 +0000 UTC m=+784.079768539" Feb 19 03:17:58.293707 master-0 kubenswrapper[7776]: I0219 03:17:58.293411 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 19 03:17:58.294977 
master-0 kubenswrapper[7776]: I0219 03:17:58.294957 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:17:58.297535 master-0 kubenswrapper[7776]: I0219 03:17:58.297507 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 03:17:58.297535 master-0 kubenswrapper[7776]: I0219 03:17:58.297522 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dcb4l" Feb 19 03:17:58.307935 master-0 kubenswrapper[7776]: I0219 03:17:58.304434 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 19 03:17:58.396221 master-0 kubenswrapper[7776]: I0219 03:17:58.396148 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32f3b8a5-a045-4023-80f8-0d4d297102ab-kube-api-access\") pod \"installer-3-master-0\" (UID: \"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:17:58.396443 master-0 kubenswrapper[7776]: I0219 03:17:58.396247 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-var-lock\") pod \"installer-3-master-0\" (UID: \"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:17:58.396443 master-0 kubenswrapper[7776]: I0219 03:17:58.396321 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:17:58.498636 master-0 kubenswrapper[7776]: I0219 03:17:58.498542 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32f3b8a5-a045-4023-80f8-0d4d297102ab-kube-api-access\") pod \"installer-3-master-0\" (UID: \"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:17:58.498636 master-0 kubenswrapper[7776]: I0219 03:17:58.498628 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-var-lock\") pod \"installer-3-master-0\" (UID: \"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:17:58.498893 master-0 kubenswrapper[7776]: I0219 03:17:58.498666 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:17:58.498893 master-0 kubenswrapper[7776]: I0219 03:17:58.498785 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-var-lock\") pod \"installer-3-master-0\" (UID: 
\"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:17:58.498893 master-0 kubenswrapper[7776]: I0219 03:17:58.498801 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:17:58.514324 master-0 kubenswrapper[7776]: I0219 03:17:58.514281 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32f3b8a5-a045-4023-80f8-0d4d297102ab-kube-api-access\") pod \"installer-3-master-0\" (UID: \"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:17:58.618199 master-0 kubenswrapper[7776]: I0219 03:17:58.618067 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:18:00.146058 master-0 kubenswrapper[7776]: I0219 03:18:00.145999 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 19 03:18:00.146681 master-0 kubenswrapper[7776]: I0219 03:18:00.146237 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="4aef097d-bea5-404d-b26b-aed9142ddf14" containerName="installer" containerID="cri-o://aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37" gracePeriod=30 Feb 19 03:18:00.223133 master-0 kubenswrapper[7776]: I0219 03:18:00.223085 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 19 03:18:00.229595 master-0 kubenswrapper[7776]: W0219 03:18:00.229559 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod32f3b8a5_a045_4023_80f8_0d4d297102ab.slice/crio-1228d47520fd6381632379d9feaf41bd2b10ef0de8e7df209689151b5f65fdeb WatchSource:0}: Error finding container 1228d47520fd6381632379d9feaf41bd2b10ef0de8e7df209689151b5f65fdeb: Status 404 returned error can't find the container with id 1228d47520fd6381632379d9feaf41bd2b10ef0de8e7df209689151b5f65fdeb Feb 19 03:18:00.740913 master-0 kubenswrapper[7776]: I0219 03:18:00.740751 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"32f3b8a5-a045-4023-80f8-0d4d297102ab","Type":"ContainerStarted","Data":"f67292ebd7452aa7b8fd839fbcb1492de2f1ebff6a04b4076f1b2483b32bdd6d"} Feb 19 03:18:00.740913 master-0 kubenswrapper[7776]: I0219 03:18:00.740806 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"32f3b8a5-a045-4023-80f8-0d4d297102ab","Type":"ContainerStarted","Data":"1228d47520fd6381632379d9feaf41bd2b10ef0de8e7df209689151b5f65fdeb"} Feb 19 03:18:00.744984 master-0 kubenswrapper[7776]: I0219 03:18:00.744909 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" event={"ID":"255784ad-b52a-4c5c-ad15-278865ee2ccb","Type":"ContainerStarted","Data":"ddb1befe24c9016ca1498c88c06f532024f47f31bafaac084ad456b316b40e0e"} Feb 19 03:18:00.762420 master-0 kubenswrapper[7776]: I0219 03:18:00.761900 7776 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.761877719 podStartE2EDuration="2.761877719s" podCreationTimestamp="2026-02-19 03:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:18:00.761361244 +0000 UTC m=+787.101045762" watchObservedRunningTime="2026-02-19 03:18:00.761877719 +0000 UTC m=+787.101562247" Feb 19 03:18:01.753359 master-0 kubenswrapper[7776]: I0219 03:18:01.753305 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/3.log" Feb 19 03:18:01.754154 master-0 kubenswrapper[7776]: I0219 03:18:01.754109 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/2.log" Feb 19 03:18:01.754750 master-0 kubenswrapper[7776]: I0219 03:18:01.754708 7776 generic.go:334] "Generic (PLEG): container finished" podID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" containerID="1f1abc6b28b9c5fc6a345c0dc375481a87aee8246eff359206608d83aec4c1c1" exitCode=1 Feb 19 03:18:01.754860 master-0 kubenswrapper[7776]: I0219 03:18:01.754779 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerDied","Data":"1f1abc6b28b9c5fc6a345c0dc375481a87aee8246eff359206608d83aec4c1c1"} Feb 19 03:18:01.754913 master-0 kubenswrapper[7776]: I0219 03:18:01.754894 7776 scope.go:117] "RemoveContainer" containerID="0231cbf4aca758c9932d6803291cfbb4b285c17a3486513b446f06ffa1a001c4" Feb 19 03:18:01.755612 master-0 kubenswrapper[7776]: I0219 03:18:01.755582 7776 scope.go:117] "RemoveContainer" containerID="1f1abc6b28b9c5fc6a345c0dc375481a87aee8246eff359206608d83aec4c1c1" Feb 19 03:18:01.755786 master-0 kubenswrapper[7776]: E0219 03:18:01.755754 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:18:02.187646 master-0 kubenswrapper[7776]: I0219 03:18:02.187528 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" podStartSLOduration=376.977082948 podStartE2EDuration="6m24.187499046s" podCreationTimestamp="2026-02-19 03:11:38 +0000 UTC" firstStartedPulling="2026-02-19 03:17:52.716833483 +0000 UTC m=+779.056518001" lastFinishedPulling="2026-02-19 03:17:59.927249571 +0000 UTC m=+786.266934099" observedRunningTime="2026-02-19 03:18:00.782334123 +0000 UTC m=+787.122018651" watchObservedRunningTime="2026-02-19 03:18:02.187499046 +0000 UTC m=+788.527183604" Feb 19 03:18:02.763499 master-0 kubenswrapper[7776]: I0219 03:18:02.763461 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/3.log" Feb 19 03:18:03.345372 master-0 kubenswrapper[7776]: I0219 03:18:03.345245 7776 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 19 03:18:03.346205 master-0 kubenswrapper[7776]: I0219 03:18:03.346160 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:18:03.364414 master-0 kubenswrapper[7776]: I0219 03:18:03.364354 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 19 03:18:03.374761 master-0 kubenswrapper[7776]: I0219 03:18:03.374694 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kube-api-access\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:18:03.375019 master-0 kubenswrapper[7776]: I0219 03:18:03.374784 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:18:03.375019 master-0 kubenswrapper[7776]: I0219 03:18:03.374833 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-var-lock\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:18:03.477467 master-0 kubenswrapper[7776]: I0219 03:18:03.477389 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:18:03.477696 master-0 kubenswrapper[7776]: I0219 03:18:03.477518 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:18:03.477696 master-0 kubenswrapper[7776]: I0219 03:18:03.477597 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-var-lock\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:18:03.477696 master-0 kubenswrapper[7776]: I0219 03:18:03.477686 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kube-api-access\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:18:03.477967 master-0 kubenswrapper[7776]: I0219 03:18:03.477887 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-var-lock\") pod \"installer-3-master-0\" (UID: 
\"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:18:03.495685 master-0 kubenswrapper[7776]: I0219 03:18:03.495626 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kube-api-access\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:18:03.682158 master-0 kubenswrapper[7776]: I0219 03:18:03.682031 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:18:03.683422 master-0 kubenswrapper[7776]: I0219 03:18:03.683355 7776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 19 03:18:03.684546 master-0 kubenswrapper[7776]: I0219 03:18:03.683886 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" containerID="cri-o://d00d0015e8bc8366633040b3a2395621233f7e465c498eaceabf1c2ca81a68df" gracePeriod=30 Feb 19 03:18:03.684546 master-0 kubenswrapper[7776]: I0219 03:18:03.683956 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" containerID="cri-o://0d59a154aee140da2db56a6e0463015b3387b4ee37b044b39e5717b27d05498e" gracePeriod=30 Feb 19 03:18:03.684546 master-0 kubenswrapper[7776]: I0219 03:18:03.684012 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" containerID="cri-o://59889423cb55bd5f516727f3ea448fae392c406053adfdbf990a3c929b1d542d" gracePeriod=30 Feb 19 03:18:03.684546 master-0 kubenswrapper[7776]: I0219 03:18:03.684068 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" containerID="cri-o://0133c09f4df374eb22f9b8a85932a0aa0def6e89f6e8ee052bbfb01df95791d1" gracePeriod=30 Feb 19 03:18:03.684546 master-0 kubenswrapper[7776]: I0219 03:18:03.684192 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" containerID="cri-o://5f196045e7a49065565ab56d461035e763d23606fb829b8bba14d2bd33107c85" gracePeriod=30 Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.686573 7776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: E0219 03:18:03.686993 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687011 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: E0219 03:18:03.687049 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687058 7776 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: E0219 03:18:03.687074 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687080 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: E0219 03:18:03.687098 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687105 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: E0219 03:18:03.687117 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687123 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: E0219 03:18:03.687142 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687149 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: E0219 03:18:03.687164 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687170 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: E0219 03:18:03.687177 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687183 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687495 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687516 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687525 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687540 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" Feb 19 03:18:03.687515 master-0 kubenswrapper[7776]: I0219 03:18:03.687554 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 19 03:18:03.781854 master-0 
kubenswrapper[7776]: I0219 03:18:03.781458 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.782434 master-0 kubenswrapper[7776]: I0219 03:18:03.781890 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.782434 master-0 kubenswrapper[7776]: I0219 03:18:03.781947 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.782434 master-0 kubenswrapper[7776]: I0219 03:18:03.782018 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.782434 master-0 kubenswrapper[7776]: I0219 03:18:03.782076 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.782434 master-0 kubenswrapper[7776]: I0219 03:18:03.782157 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.882861 master-0 kubenswrapper[7776]: I0219 03:18:03.882817 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.882972 master-0 kubenswrapper[7776]: I0219 03:18:03.882897 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.882972 master-0 kubenswrapper[7776]: I0219 03:18:03.882918 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.882972 master-0 kubenswrapper[7776]: I0219 03:18:03.882930 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.882972 master-0 kubenswrapper[7776]: I0219 03:18:03.882965 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.883128 master-0 kubenswrapper[7776]: I0219 03:18:03.883038 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.883128 master-0 kubenswrapper[7776]: I0219 03:18:03.883086 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.883357 master-0 kubenswrapper[7776]: I0219 03:18:03.883303 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.883479 master-0 kubenswrapper[7776]: I0219 03:18:03.883450 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.883584 master-0 kubenswrapper[7776]: I0219 03:18:03.883563 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.883675 master-0 kubenswrapper[7776]: I0219 03:18:03.883651 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:03.883785 master-0 kubenswrapper[7776]: I0219 03:18:03.883758 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:18:04.784028 master-0 kubenswrapper[7776]: I0219 03:18:04.783981 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 19 03:18:04.785317 master-0 kubenswrapper[7776]: I0219 03:18:04.785298 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 19 03:18:04.788081 master-0 kubenswrapper[7776]: I0219 03:18:04.788039 
7776 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="5f196045e7a49065565ab56d461035e763d23606fb829b8bba14d2bd33107c85" exitCode=2 Feb 19 03:18:04.788125 master-0 kubenswrapper[7776]: I0219 03:18:04.788091 7776 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="0d59a154aee140da2db56a6e0463015b3387b4ee37b044b39e5717b27d05498e" exitCode=0 Feb 19 03:18:04.788159 master-0 kubenswrapper[7776]: I0219 03:18:04.788126 7776 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="59889423cb55bd5f516727f3ea448fae392c406053adfdbf990a3c929b1d542d" exitCode=2 Feb 19 03:18:05.464814 master-0 kubenswrapper[7776]: I0219 03:18:05.464736 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_4aef097d-bea5-404d-b26b-aed9142ddf14/installer/0.log" Feb 19 03:18:05.464814 master-0 kubenswrapper[7776]: I0219 03:18:05.464809 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:18:05.607416 master-0 kubenswrapper[7776]: I0219 03:18:05.607335 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-var-lock\") pod \"4aef097d-bea5-404d-b26b-aed9142ddf14\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " Feb 19 03:18:05.607623 master-0 kubenswrapper[7776]: I0219 03:18:05.607480 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-var-lock" (OuterVolumeSpecName: "var-lock") pod "4aef097d-bea5-404d-b26b-aed9142ddf14" (UID: "4aef097d-bea5-404d-b26b-aed9142ddf14"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:05.607623 master-0 kubenswrapper[7776]: I0219 03:18:05.607541 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-kubelet-dir\") pod \"4aef097d-bea5-404d-b26b-aed9142ddf14\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " Feb 19 03:18:05.607687 master-0 kubenswrapper[7776]: I0219 03:18:05.607627 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aef097d-bea5-404d-b26b-aed9142ddf14-kube-api-access\") pod \"4aef097d-bea5-404d-b26b-aed9142ddf14\" (UID: \"4aef097d-bea5-404d-b26b-aed9142ddf14\") " Feb 19 03:18:05.607752 master-0 kubenswrapper[7776]: I0219 03:18:05.607716 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4aef097d-bea5-404d-b26b-aed9142ddf14" (UID: "4aef097d-bea5-404d-b26b-aed9142ddf14"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:05.608199 master-0 kubenswrapper[7776]: I0219 03:18:05.608148 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:05.608199 master-0 kubenswrapper[7776]: I0219 03:18:05.608191 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aef097d-bea5-404d-b26b-aed9142ddf14-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:05.612453 master-0 kubenswrapper[7776]: I0219 03:18:05.612390 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aef097d-bea5-404d-b26b-aed9142ddf14-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4aef097d-bea5-404d-b26b-aed9142ddf14" (UID: "4aef097d-bea5-404d-b26b-aed9142ddf14"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:18:05.709407 master-0 kubenswrapper[7776]: I0219 03:18:05.709342 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aef097d-bea5-404d-b26b-aed9142ddf14-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:05.801123 master-0 kubenswrapper[7776]: I0219 03:18:05.801020 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_4aef097d-bea5-404d-b26b-aed9142ddf14/installer/0.log" Feb 19 03:18:05.802171 master-0 kubenswrapper[7776]: I0219 03:18:05.801148 7776 generic.go:334] "Generic (PLEG): container finished" podID="4aef097d-bea5-404d-b26b-aed9142ddf14" containerID="aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37" exitCode=1 Feb 19 03:18:05.802171 master-0 kubenswrapper[7776]: I0219 03:18:05.801223 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"4aef097d-bea5-404d-b26b-aed9142ddf14","Type":"ContainerDied","Data":"aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37"} Feb 19 03:18:05.802171 master-0 kubenswrapper[7776]: I0219 03:18:05.801318 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"4aef097d-bea5-404d-b26b-aed9142ddf14","Type":"ContainerDied","Data":"e99d11f1b7f7b440e7693112746ef8c230d71c911d07941ae7bb0938acb8a034"} Feb 19 03:18:05.802171 master-0 kubenswrapper[7776]: I0219 03:18:05.801369 7776 scope.go:117] "RemoveContainer" containerID="aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37" Feb 19 03:18:05.802171 master-0 kubenswrapper[7776]: I0219 03:18:05.801833 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 19 03:18:05.831088 master-0 kubenswrapper[7776]: I0219 03:18:05.831033 7776 scope.go:117] "RemoveContainer" containerID="aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37" Feb 19 03:18:05.831840 master-0 kubenswrapper[7776]: E0219 03:18:05.831786 7776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37\": container with ID starting with aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37 not found: ID does not exist" containerID="aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37" Feb 19 03:18:05.831907 master-0 kubenswrapper[7776]: I0219 03:18:05.831849 7776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37"} err="failed to get container status \"aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37\": rpc error: code = NotFound desc = could not find container \"aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37\": container with ID starting with aac4e0d0cb8dd7e31e28c09bfcb8327fc06e478ac97409246e2b67aaf5aa1a37 not found: ID does not exist" Feb 19 03:18:10.843460 master-0 kubenswrapper[7776]: I0219 03:18:10.843406 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:18:10.844359 master-0 kubenswrapper[7776]: E0219 03:18:10.843726 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" podUID="af2be4f9-f632-4a72-8f39-c96954403edc" Feb 19 03:18:14.446502 master-0 kubenswrapper[7776]: E0219 03:18:14.446367 7776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:18:14.873680 master-0 kubenswrapper[7776]: I0219 03:18:14.873547 7776 generic.go:334] "Generic (PLEG): container finished" podID="76470062-ab83-47ed-a669-deeb71996548" containerID="a9877e6164fd70e4cefb580b5faf9495b5d88f56b0eabc9be1b0d949563be3bd" exitCode=0 Feb 19 03:18:14.873680 master-0 kubenswrapper[7776]: I0219 03:18:14.873606 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" event={"ID":"76470062-ab83-47ed-a669-deeb71996548","Type":"ContainerDied","Data":"a9877e6164fd70e4cefb580b5faf9495b5d88f56b0eabc9be1b0d949563be3bd"} Feb 19 03:18:14.873680 master-0 kubenswrapper[7776]: I0219 03:18:14.873638 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" event={"ID":"76470062-ab83-47ed-a669-deeb71996548","Type":"ContainerStarted","Data":"047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366"} Feb 19 03:18:14.873680 master-0 kubenswrapper[7776]: I0219 03:18:14.873657 7776 scope.go:117] "RemoveContainer" 
containerID="fc23281c8544d5ae223b75148a35d1646e5aae76cd18024121c83e27448b516d" Feb 19 03:18:15.761631 master-0 kubenswrapper[7776]: I0219 03:18:15.761542 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:18:15.764784 master-0 kubenswrapper[7776]: I0219 03:18:15.764711 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:15.764784 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:15.764784 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:15.764784 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:15.765310 master-0 kubenswrapper[7776]: I0219 03:18:15.764810 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:16.764530 master-0 kubenswrapper[7776]: I0219 03:18:16.764442 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:16.764530 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:16.764530 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:16.764530 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:16.764530 master-0 kubenswrapper[7776]: I0219 03:18:16.764527 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:16.842833 master-0 kubenswrapper[7776]: I0219 03:18:16.842769 7776 scope.go:117] "RemoveContainer" containerID="1f1abc6b28b9c5fc6a345c0dc375481a87aee8246eff359206608d83aec4c1c1" Feb 19 03:18:16.843082 master-0 kubenswrapper[7776]: E0219 03:18:16.842994 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:18:17.761826 master-0 kubenswrapper[7776]: I0219 03:18:17.761733 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:18:17.764633 master-0 kubenswrapper[7776]: I0219 03:18:17.764577 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:17.764633 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:17.764633 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:17.764633 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:17.765540 master-0 
kubenswrapper[7776]: I0219 03:18:17.765411 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:17.906196 master-0 kubenswrapper[7776]: I0219 03:18:17.906120 7776 generic.go:334] "Generic (PLEG): container finished" podID="60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3" containerID="21e26a22b1efe279782f76fa7cfe3a983a36a3e7247df0cc7bcc0fa254258e19" exitCode=0 Feb 19 03:18:17.906196 master-0 kubenswrapper[7776]: I0219 03:18:17.906193 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3","Type":"ContainerDied","Data":"21e26a22b1efe279782f76fa7cfe3a983a36a3e7247df0cc7bcc0fa254258e19"} Feb 19 03:18:18.223093 master-0 kubenswrapper[7776]: E0219 03:18:18.222942 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:18:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:18:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:18:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:18:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0dcba5d04f25f6e382ffecdd94057bd8a99cffb6a00a8c7da186e9871ae459ea\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:92f996986deaacc20f2d7929be6465ef80f234c7c73757735ab489489ad69464\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702667973},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:01d70013efcb6bd53533de62b00867982cc8cfd7ea2bcc920f1a89ec9a1e0a93\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3d25e25fd688987cf457312a70060e31c5091a30a7d4b691cf7e566c69fa51f4\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234172623},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2f02611c935b387581e1c3be693869fdf266797ea7c5bcb704c0b6e7d0a6f12f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:f92684229a0699b57eaf06ea192bcde396a4e401a7bf7726499b7edac566dac8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210130107},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac
7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:18:18.764911 master-0 kubenswrapper[7776]: I0219 03:18:18.764858 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:18.764911 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:18.764911 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:18.764911 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:18.764911 master-0 kubenswrapper[7776]: I0219 03:18:18.764912 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:18.917892 master-0 kubenswrapper[7776]: I0219 03:18:18.917822 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:18:18.917892 master-0 kubenswrapper[7776]: I0219 03:18:18.917878 7776 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" containerID="6e39b4ae8e2c1020e55e9a8991002fceb2451697ce51c87e07c50c9ac50db7bc" exitCode=1 Feb 19 03:18:18.918405 master-0 kubenswrapper[7776]: I0219 03:18:18.917929 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerDied","Data":"6e39b4ae8e2c1020e55e9a8991002fceb2451697ce51c87e07c50c9ac50db7bc"} Feb 19 03:18:18.919284 master-0 kubenswrapper[7776]: I0219 03:18:18.918597 7776 scope.go:117] "RemoveContainer" containerID="6e39b4ae8e2c1020e55e9a8991002fceb2451697ce51c87e07c50c9ac50db7bc" Feb 19 03:18:19.251251 master-0 kubenswrapper[7776]: I0219 03:18:19.251156 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 19 03:18:19.423531 master-0 kubenswrapper[7776]: I0219 03:18:19.423433 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-var-lock\") pod \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " Feb 19 03:18:19.423841 master-0 kubenswrapper[7776]: I0219 03:18:19.423565 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kubelet-dir\") pod \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " Feb 19 03:18:19.423841 master-0 kubenswrapper[7776]: I0219 03:18:19.423641 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kube-api-access\") pod \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\" (UID: \"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3\") " Feb 19 03:18:19.423841 master-0 kubenswrapper[7776]: I0219 03:18:19.423644 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-var-lock" (OuterVolumeSpecName: "var-lock") pod "60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3" (UID: "60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:19.423841 master-0 kubenswrapper[7776]: I0219 03:18:19.423710 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3" (UID: "60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:19.424237 master-0 kubenswrapper[7776]: I0219 03:18:19.424199 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:19.424237 master-0 kubenswrapper[7776]: I0219 03:18:19.424226 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:19.428192 master-0 kubenswrapper[7776]: I0219 03:18:19.428061 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3" (UID: "60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:18:19.525632 master-0 kubenswrapper[7776]: I0219 03:18:19.525414 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:19.764330 master-0 kubenswrapper[7776]: I0219 03:18:19.764244 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:19.764330 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:19.764330 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:19.764330 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:19.764330 master-0 kubenswrapper[7776]: I0219 03:18:19.764340 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:19.929297 master-0 kubenswrapper[7776]: I0219 03:18:19.929212 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:18:19.930174 master-0 kubenswrapper[7776]: I0219 03:18:19.929415 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3"} Feb 19 03:18:19.931965 master-0 kubenswrapper[7776]: I0219 03:18:19.931908 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3","Type":"ContainerDied","Data":"951494debcdd0ff7db2f410b57e8c2c9ed7b3f2e54fda90b5fd97c799ae6ccba"} Feb 19 03:18:19.932058 master-0 kubenswrapper[7776]: I0219 03:18:19.931963 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="951494debcdd0ff7db2f410b57e8c2c9ed7b3f2e54fda90b5fd97c799ae6ccba" Feb 19 03:18:19.932058 master-0 kubenswrapper[7776]: I0219 03:18:19.931975 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 19 03:18:20.763048 master-0 kubenswrapper[7776]: I0219 03:18:20.762955 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:20.763048 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:20.763048 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:20.763048 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:20.763419 master-0 kubenswrapper[7776]: I0219 03:18:20.763072 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:21.763712 master-0 kubenswrapper[7776]: I0219 03:18:21.763614 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:21.763712 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:21.763712 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:21.763712 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:21.763712 master-0 kubenswrapper[7776]: I0219 03:18:21.763711 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:22.765008 master-0 kubenswrapper[7776]: I0219 03:18:22.764890 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:22.765008 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:22.765008 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:22.765008 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:22.765008 master-0 kubenswrapper[7776]: I0219 03:18:22.764973 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:23.689307 master-0 kubenswrapper[7776]: I0219 03:18:23.689242 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:18:23.689882 master-0 kubenswrapper[7776]: I0219 03:18:23.689813 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:18:23.694564 master-0 kubenswrapper[7776]: I0219 03:18:23.694509 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:18:23.763199 master-0 kubenswrapper[7776]: I0219 03:18:23.763088 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:23.763199 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:23.763199 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:23.763199 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:23.763645 master-0 kubenswrapper[7776]: I0219 03:18:23.763206 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:23.836328 master-0 kubenswrapper[7776]: I0219 03:18:23.836215 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:18:24.447096 master-0 kubenswrapper[7776]: E0219 03:18:24.446954 7776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:18:24.763970 master-0 kubenswrapper[7776]: I0219 03:18:24.763868 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:24.763970 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:24.763970 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:24.763970 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:24.763970 master-0 kubenswrapper[7776]: I0219 03:18:24.763956 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:24.844611 master-0 kubenswrapper[7776]: I0219 03:18:24.844455 7776 scope.go:117] "RemoveContainer" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" Feb 19 03:18:25.765515 master-0 kubenswrapper[7776]: I0219 03:18:25.765388 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:25.765515 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:25.765515 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:25.765515 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:25.765833 master-0 kubenswrapper[7776]: I0219 03:18:25.765536 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:25.980212 master-0 kubenswrapper[7776]: I0219 03:18:25.980120 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/5.log" Feb 19 03:18:25.981191 master-0 kubenswrapper[7776]: I0219 03:18:25.981157 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerStarted","Data":"84a298132203055f7840f9fd2b7d17cccac5e629935233689e7700b335adbeaf"} Feb 19 03:18:26.763120 master-0 kubenswrapper[7776]: I0219 03:18:26.763049 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:26.763120 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:26.763120 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:26.763120 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:26.763120 master-0 kubenswrapper[7776]: I0219 03:18:26.763122 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:27.764880 master-0 kubenswrapper[7776]: I0219 03:18:27.764740 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:27.764880 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:27.764880 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:27.764880 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:27.766118 master-0 kubenswrapper[7776]: I0219 03:18:27.764874 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:28.223743 master-0 kubenswrapper[7776]: E0219 03:18:28.223688 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:18:28.764959 master-0 kubenswrapper[7776]: I0219 03:18:28.764842 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:28.764959 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:28.764959 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:28.764959 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:28.764959 master-0 kubenswrapper[7776]: I0219 03:18:28.764945 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 19 03:18:29.763989 master-0 kubenswrapper[7776]: I0219 03:18:29.763895 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:29.763989 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:29.763989 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:29.763989 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:29.764365 master-0 kubenswrapper[7776]: I0219 03:18:29.764008 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:30.764128 master-0 kubenswrapper[7776]: I0219 03:18:30.763993 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:30.764128 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:30.764128 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:30.764128 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:30.764128 master-0 kubenswrapper[7776]: I0219 03:18:30.764089 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:31.764080 master-0 kubenswrapper[7776]: I0219 03:18:31.763985 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:31.764080 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:31.764080 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:31.764080 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:31.764857 master-0 kubenswrapper[7776]: I0219 03:18:31.764087 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:31.842845 master-0 kubenswrapper[7776]: I0219 03:18:31.842771 7776 scope.go:117] "RemoveContainer" containerID="1f1abc6b28b9c5fc6a345c0dc375481a87aee8246eff359206608d83aec4c1c1" Feb 19 03:18:31.843280 master-0 kubenswrapper[7776]: E0219 03:18:31.843207 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:18:32.765011 master-0 kubenswrapper[7776]: I0219 03:18:32.764873 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:32.765011 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:32.765011 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:32.765011 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:32.765728 master-0 kubenswrapper[7776]: I0219 03:18:32.765030 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:33.700497 master-0 kubenswrapper[7776]: I0219 03:18:33.699027 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:18:33.763603 master-0 kubenswrapper[7776]: I0219 03:18:33.763531 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:33.763603 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:33.763603 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:33.763603 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:33.763868 master-0 kubenswrapper[7776]: I0219 03:18:33.763635 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:34.047936 master-0 kubenswrapper[7776]: I0219 03:18:34.047800 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 19 03:18:34.048889 master-0 kubenswrapper[7776]: I0219 03:18:34.048857 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 19 03:18:34.049536 master-0 kubenswrapper[7776]: I0219 03:18:34.049507 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 19 03:18:34.050609 master-0 kubenswrapper[7776]: I0219 03:18:34.050571 7776 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="0133c09f4df374eb22f9b8a85932a0aa0def6e89f6e8ee052bbfb01df95791d1" exitCode=0 Feb 19 03:18:34.050609 master-0 kubenswrapper[7776]: I0219 03:18:34.050603 7776 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="d00d0015e8bc8366633040b3a2395621233f7e465c498eaceabf1c2ca81a68df" exitCode=137 Feb 19 03:18:34.278016 master-0 kubenswrapper[7776]: I0219 03:18:34.277947 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 19 03:18:34.279302 master-0 kubenswrapper[7776]: I0219 03:18:34.279237 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 19 03:18:34.280549 master-0 kubenswrapper[7776]: I0219 03:18:34.280514 7776 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 19 03:18:34.282029 master-0 kubenswrapper[7776]: I0219 03:18:34.281982 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 19 03:18:34.350838 master-0 kubenswrapper[7776]: I0219 03:18:34.350746 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 19 03:18:34.350838 master-0 kubenswrapper[7776]: I0219 03:18:34.350823 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 19 03:18:34.351120 master-0 kubenswrapper[7776]: I0219 03:18:34.350855 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 19 03:18:34.351120 master-0 kubenswrapper[7776]: I0219 03:18:34.350869 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:34.351120 master-0 kubenswrapper[7776]: I0219 03:18:34.350904 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir" (OuterVolumeSpecName: "log-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:34.351120 master-0 kubenswrapper[7776]: I0219 03:18:34.350947 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 19 03:18:34.351120 master-0 kubenswrapper[7776]: I0219 03:18:34.350962 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:34.351120 master-0 kubenswrapper[7776]: I0219 03:18:34.350985 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "static-pod-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:34.351618 master-0 kubenswrapper[7776]: I0219 03:18:34.351581 7776 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:34.351618 master-0 kubenswrapper[7776]: I0219 03:18:34.351613 7776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:34.351719 master-0 kubenswrapper[7776]: I0219 03:18:34.351626 7776 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:34.351719 master-0 kubenswrapper[7776]: I0219 03:18:34.351638 7776 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:34.447769 master-0 kubenswrapper[7776]: E0219 03:18:34.447652 7776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:18:34.452632 master-0 kubenswrapper[7776]: I0219 03:18:34.452577 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 19 03:18:34.452632 master-0 kubenswrapper[7776]: I0219 03:18:34.452633 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 19 03:18:34.452893 master-0 kubenswrapper[7776]: I0219 03:18:34.452705 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:34.452893 master-0 kubenswrapper[7776]: I0219 03:18:34.452746 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir" (OuterVolumeSpecName: "data-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:34.453142 master-0 kubenswrapper[7776]: I0219 03:18:34.453096 7776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:34.453142 master-0 kubenswrapper[7776]: I0219 03:18:34.453126 7776 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:34.764748 master-0 kubenswrapper[7776]: I0219 03:18:34.764666 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:34.764748 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:34.764748 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:34.764748 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:34.765228 master-0 kubenswrapper[7776]: I0219 03:18:34.764787 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:35.233788 master-0 kubenswrapper[7776]: I0219 03:18:35.233756 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 19 03:18:35.235384 master-0 kubenswrapper[7776]: I0219 03:18:35.235362 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 19 03:18:35.236188 master-0 kubenswrapper[7776]: I0219 03:18:35.236171 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 19 03:18:35.237105 master-0 kubenswrapper[7776]: I0219 03:18:35.237089 7776 scope.go:117] "RemoveContainer" containerID="5f196045e7a49065565ab56d461035e763d23606fb829b8bba14d2bd33107c85" Feb 19 03:18:35.237243 master-0 kubenswrapper[7776]: I0219 03:18:35.237212 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 19 03:18:35.259542 master-0 kubenswrapper[7776]: I0219 03:18:35.259494 7776 scope.go:117] "RemoveContainer" containerID="0d59a154aee140da2db56a6e0463015b3387b4ee37b044b39e5717b27d05498e" Feb 19 03:18:35.280852 master-0 kubenswrapper[7776]: I0219 03:18:35.280821 7776 scope.go:117] "RemoveContainer" containerID="59889423cb55bd5f516727f3ea448fae392c406053adfdbf990a3c929b1d542d" Feb 19 03:18:35.295245 master-0 kubenswrapper[7776]: I0219 03:18:35.295205 7776 scope.go:117] "RemoveContainer" containerID="0133c09f4df374eb22f9b8a85932a0aa0def6e89f6e8ee052bbfb01df95791d1" Feb 19 03:18:35.316770 master-0 kubenswrapper[7776]: I0219 03:18:35.316706 7776 scope.go:117] "RemoveContainer" containerID="d00d0015e8bc8366633040b3a2395621233f7e465c498eaceabf1c2ca81a68df" Feb 19 03:18:35.331602 master-0 kubenswrapper[7776]: I0219 03:18:35.331576 7776 scope.go:117] "RemoveContainer" containerID="046d70c2b21433494090acc4c51a4da67355986430805c8b776a5852975555f0" Feb 19 03:18:35.352225 master-0 kubenswrapper[7776]: I0219 03:18:35.352128 7776 scope.go:117] "RemoveContainer" containerID="baf56418e5f8bfbb1b0b3b62a17157021582596ac9b77253725abcedbc9830bb" Feb 19 03:18:35.382918 master-0 kubenswrapper[7776]: I0219 03:18:35.382781 7776 scope.go:117] "RemoveContainer" containerID="0b7862f5c14abc35c6d3864be1a69aaa8c3ca56dfc67a222771c4ef72c815739" Feb 19 03:18:35.764231 master-0 kubenswrapper[7776]: I0219 03:18:35.764154 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:35.764231 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:35.764231 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:35.764231 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:35.764231 master-0 kubenswrapper[7776]: I0219 03:18:35.764229 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:35.853666 master-0 kubenswrapper[7776]: I0219 03:18:35.853603 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a83278819db2092fa26d8274eb3f00" path="/var/lib/kubelet/pods/18a83278819db2092fa26d8274eb3f00/volumes" Feb 19 03:18:36.764760 master-0 kubenswrapper[7776]: I0219 03:18:36.764669 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:36.764760 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:36.764760 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:36.764760 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:36.764760 master-0 kubenswrapper[7776]: I0219 03:18:36.764761 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:37.709991 master-0 kubenswrapper[7776]: E0219 03:18:37.709879 7776 event.go:359] "Server rejected event (will 
not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.18958790a0db363e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:18a83278819db2092fa26d8274eb3f00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Killing,Message:Stopping container etcd-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:18:03.683919422 +0000 UTC m=+790.023603940,LastTimestamp:2026-02-19 03:18:03.683919422 +0000 UTC m=+790.023603940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:18:37.764869 master-0 kubenswrapper[7776]: I0219 03:18:37.764803 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:37.764869 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:37.764869 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:37.764869 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:37.766190 master-0 kubenswrapper[7776]: I0219 03:18:37.766071 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:38.224455 master-0 kubenswrapper[7776]: E0219 03:18:38.224378 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:18:38.263879 master-0 kubenswrapper[7776]: I0219 03:18:38.263805 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5/installer/0.log" Feb 19 03:18:38.264180 master-0 kubenswrapper[7776]: I0219 03:18:38.263916 7776 generic.go:334] "Generic (PLEG): container finished" podID="e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" containerID="c81c932fbf92f00371681dc495d0483abb59c68940881cbb310e3f5f398e1f87" exitCode=1 Feb 19 03:18:38.264180 master-0 kubenswrapper[7776]: I0219 03:18:38.263973 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5","Type":"ContainerDied","Data":"c81c932fbf92f00371681dc495d0483abb59c68940881cbb310e3f5f398e1f87"} Feb 19 03:18:38.766295 master-0 kubenswrapper[7776]: I0219 03:18:38.763770 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:38.766295 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:38.766295 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:38.766295 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:38.766295 master-0 kubenswrapper[7776]: I0219 03:18:38.763835 7776 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:39.632976 master-0 kubenswrapper[7776]: I0219 03:18:39.632914 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5/installer/0.log" Feb 19 03:18:39.632976 master-0 kubenswrapper[7776]: I0219 03:18:39.632992 7776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:18:39.763868 master-0 kubenswrapper[7776]: I0219 03:18:39.763769 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:39.763868 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:39.763868 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:39.763868 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:39.763868 master-0 kubenswrapper[7776]: I0219 03:18:39.763877 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:39.791962 master-0 kubenswrapper[7776]: I0219 03:18:39.791907 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-var-lock\") pod \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " Feb 19 03:18:39.792589 master-0 kubenswrapper[7776]: I0219 03:18:39.792042 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kube-api-access\") pod \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " Feb 19 03:18:39.792589 master-0 kubenswrapper[7776]: I0219 03:18:39.792069 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kubelet-dir\") pod \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\" (UID: \"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5\") " Feb 19 03:18:39.792589 master-0 kubenswrapper[7776]: I0219 03:18:39.792127 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-var-lock" (OuterVolumeSpecName: "var-lock") pod "e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" (UID: "e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:39.792589 master-0 kubenswrapper[7776]: I0219 03:18:39.792174 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" (UID: "e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:39.792589 master-0 kubenswrapper[7776]: I0219 03:18:39.792553 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:39.792589 master-0 kubenswrapper[7776]: I0219 03:18:39.792582 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:39.796565 master-0 kubenswrapper[7776]: I0219 03:18:39.796485 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" (UID: "e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:18:39.897177 master-0 kubenswrapper[7776]: I0219 03:18:39.897097 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:40.280402 master-0 kubenswrapper[7776]: I0219 03:18:40.280145 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5/installer/0.log" Feb 19 03:18:40.280886 master-0 kubenswrapper[7776]: I0219 03:18:40.280837 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5","Type":"ContainerDied","Data":"2c5e253906f92c4bc553e34db5acf8d0406570aeec90b10b8f3c9cf4861917cb"} Feb 19 03:18:40.281033 master-0 kubenswrapper[7776]: I0219 03:18:40.281014 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c5e253906f92c4bc553e34db5acf8d0406570aeec90b10b8f3c9cf4861917cb" Feb 19 03:18:40.281159 master-0 kubenswrapper[7776]: I0219 03:18:40.280903 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:18:40.764928 master-0 kubenswrapper[7776]: I0219 03:18:40.764837 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:40.764928 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:40.764928 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:40.764928 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:40.765370 master-0 kubenswrapper[7776]: I0219 03:18:40.764936 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:41.762962 master-0 kubenswrapper[7776]: I0219 03:18:41.762766 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:41.762962 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:41.762962 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:41.762962 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:41.762962 master-0 kubenswrapper[7776]: I0219 03:18:41.762871 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:42.763090 master-0 kubenswrapper[7776]: I0219 03:18:42.763014 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:42.763090 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:42.763090 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:42.763090 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:42.763845 master-0 kubenswrapper[7776]: I0219 03:18:42.763125 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:43.764142 master-0 kubenswrapper[7776]: I0219 03:18:43.764063 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:43.764142 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:43.764142 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:43.764142 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:43.764142 master-0 kubenswrapper[7776]: I0219 03:18:43.764133 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:43.842094 master-0 kubenswrapper[7776]: I0219 03:18:43.842020 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 19 03:18:43.861552 master-0 kubenswrapper[7776]: I0219 03:18:43.861503 7776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:18:43.861552 master-0 kubenswrapper[7776]: I0219 03:18:43.861546 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:18:44.449037 master-0 kubenswrapper[7776]: E0219 03:18:44.448934 7776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:18:44.764615 master-0 kubenswrapper[7776]: I0219 03:18:44.764536 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:44.764615 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:44.764615 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:44.764615 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:44.765326 master-0 kubenswrapper[7776]: I0219 03:18:44.764623 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:45.763930 master-0 kubenswrapper[7776]: I0219 03:18:45.763705 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:45.763930 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:45.763930 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:45.763930 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:45.763930 master-0 kubenswrapper[7776]: I0219 03:18:45.763797 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:45.843872 master-0 kubenswrapper[7776]: I0219 03:18:45.843240 7776 scope.go:117] "RemoveContainer" containerID="1f1abc6b28b9c5fc6a345c0dc375481a87aee8246eff359206608d83aec4c1c1" Feb 19 03:18:46.328116 master-0 kubenswrapper[7776]: I0219 03:18:46.328037 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/3.log" Feb 19 03:18:46.328601 master-0 kubenswrapper[7776]: I0219 03:18:46.328542 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" 
event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerStarted","Data":"b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5"} Feb 19 03:18:46.331164 master-0 kubenswrapper[7776]: I0219 03:18:46.331125 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_32f3b8a5-a045-4023-80f8-0d4d297102ab/installer/0.log" Feb 19 03:18:46.331164 master-0 kubenswrapper[7776]: I0219 03:18:46.331163 7776 generic.go:334] "Generic (PLEG): container finished" podID="32f3b8a5-a045-4023-80f8-0d4d297102ab" containerID="f67292ebd7452aa7b8fd839fbcb1492de2f1ebff6a04b4076f1b2483b32bdd6d" exitCode=1 Feb 19 03:18:46.331488 master-0 kubenswrapper[7776]: I0219 03:18:46.331184 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"32f3b8a5-a045-4023-80f8-0d4d297102ab","Type":"ContainerDied","Data":"f67292ebd7452aa7b8fd839fbcb1492de2f1ebff6a04b4076f1b2483b32bdd6d"} Feb 19 03:18:46.764342 master-0 kubenswrapper[7776]: I0219 03:18:46.764226 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:46.764342 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:46.764342 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:46.764342 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:46.764342 master-0 kubenswrapper[7776]: I0219 03:18:46.764335 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:47.696751 master-0 kubenswrapper[7776]: I0219 03:18:47.696652 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_32f3b8a5-a045-4023-80f8-0d4d297102ab/installer/0.log" Feb 19 03:18:47.696751 master-0 kubenswrapper[7776]: I0219 03:18:47.696754 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:18:47.727634 master-0 kubenswrapper[7776]: I0219 03:18:47.727529 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32f3b8a5-a045-4023-80f8-0d4d297102ab-kube-api-access\") pod \"32f3b8a5-a045-4023-80f8-0d4d297102ab\" (UID: \"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " Feb 19 03:18:47.727634 master-0 kubenswrapper[7776]: I0219 03:18:47.727605 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-kubelet-dir\") pod \"32f3b8a5-a045-4023-80f8-0d4d297102ab\" (UID: \"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " Feb 19 03:18:47.727990 master-0 kubenswrapper[7776]: I0219 03:18:47.727704 7776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-var-lock\") pod \"32f3b8a5-a045-4023-80f8-0d4d297102ab\" (UID: \"32f3b8a5-a045-4023-80f8-0d4d297102ab\") " Feb 19 03:18:47.727990 master-0 kubenswrapper[7776]: I0219 03:18:47.727796 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "32f3b8a5-a045-4023-80f8-0d4d297102ab" (UID: "32f3b8a5-a045-4023-80f8-0d4d297102ab"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:47.728116 master-0 kubenswrapper[7776]: I0219 03:18:47.728002 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-var-lock" (OuterVolumeSpecName: "var-lock") pod "32f3b8a5-a045-4023-80f8-0d4d297102ab" (UID: "32f3b8a5-a045-4023-80f8-0d4d297102ab"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:18:47.728116 master-0 kubenswrapper[7776]: I0219 03:18:47.728057 7776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:47.731046 master-0 kubenswrapper[7776]: I0219 03:18:47.730968 7776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f3b8a5-a045-4023-80f8-0d4d297102ab-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "32f3b8a5-a045-4023-80f8-0d4d297102ab" (UID: "32f3b8a5-a045-4023-80f8-0d4d297102ab"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:18:47.764713 master-0 kubenswrapper[7776]: I0219 03:18:47.764633 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:47.764713 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:47.764713 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:47.764713 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:47.765310 master-0 kubenswrapper[7776]: I0219 03:18:47.764737 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:47.829768 master-0 kubenswrapper[7776]: I0219 03:18:47.829676 7776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32f3b8a5-a045-4023-80f8-0d4d297102ab-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:47.829977 master-0 kubenswrapper[7776]: I0219 03:18:47.829766 7776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32f3b8a5-a045-4023-80f8-0d4d297102ab-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:18:48.225570 master-0 kubenswrapper[7776]: E0219 03:18:48.225464 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:18:48.349918 master-0 kubenswrapper[7776]: I0219 03:18:48.349823 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_32f3b8a5-a045-4023-80f8-0d4d297102ab/installer/0.log" Feb 19 03:18:48.349918 master-0 kubenswrapper[7776]: I0219 03:18:48.349906 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"32f3b8a5-a045-4023-80f8-0d4d297102ab","Type":"ContainerDied","Data":"1228d47520fd6381632379d9feaf41bd2b10ef0de8e7df209689151b5f65fdeb"} Feb 19 03:18:48.350377 master-0 kubenswrapper[7776]: I0219 03:18:48.349944 7776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1228d47520fd6381632379d9feaf41bd2b10ef0de8e7df209689151b5f65fdeb" Feb 19 03:18:48.350377 master-0 kubenswrapper[7776]: I0219 03:18:48.350045 7776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:18:48.762698 master-0 kubenswrapper[7776]: I0219 03:18:48.762636 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:48.762698 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:48.762698 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:48.762698 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:48.762698 master-0 kubenswrapper[7776]: I0219 03:18:48.762705 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:49.764058 master-0 kubenswrapper[7776]: I0219 03:18:49.763948 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:49.764058 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:49.764058 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:49.764058 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:49.764058 master-0 kubenswrapper[7776]: I0219 03:18:49.764042 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:50.763937 master-0 kubenswrapper[7776]: I0219 03:18:50.763838 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:50.763937 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:50.763937 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:50.763937 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:50.764788 master-0 kubenswrapper[7776]: I0219 03:18:50.763959 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:51.763840 master-0 kubenswrapper[7776]: I0219 03:18:51.763742 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:51.763840 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:51.763840 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:51.763840 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:51.764217 master-0 kubenswrapper[7776]: I0219 03:18:51.763840 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:52.764406 master-0 kubenswrapper[7776]: I0219 03:18:52.764155 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:52.764406 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:52.764406 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:52.764406 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:52.764406 master-0 kubenswrapper[7776]: I0219 03:18:52.764251 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:53.384810 master-0 kubenswrapper[7776]: I0219 03:18:53.384715 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rm5jg_a52be87c-e707-4269-96da-537708d52b64/approver/1.log" Feb 19 03:18:53.385807 master-0 kubenswrapper[7776]: I0219 03:18:53.385729 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rm5jg_a52be87c-e707-4269-96da-537708d52b64/approver/0.log" Feb 19 03:18:53.386423 master-0 kubenswrapper[7776]: I0219 03:18:53.386323 7776 generic.go:334] "Generic (PLEG): container finished" podID="a52be87c-e707-4269-96da-537708d52b64" containerID="246e246788c76f41235c1898d383b771146f06c3b5bc939889392a3b403a8a89" exitCode=1 Feb 19 03:18:53.386423 master-0 kubenswrapper[7776]: I0219 03:18:53.386379 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rm5jg" event={"ID":"a52be87c-e707-4269-96da-537708d52b64","Type":"ContainerDied","Data":"246e246788c76f41235c1898d383b771146f06c3b5bc939889392a3b403a8a89"} Feb 19 03:18:53.386685 master-0 kubenswrapper[7776]: I0219 03:18:53.386440 7776 scope.go:117] "RemoveContainer" containerID="f6706a38252937f6734b664a0f078763a45b428cf03e52f78ca141868385452d" Feb 19 03:18:53.387473 master-0 kubenswrapper[7776]: I0219 03:18:53.387385 7776 scope.go:117] "RemoveContainer" containerID="246e246788c76f41235c1898d383b771146f06c3b5bc939889392a3b403a8a89" Feb 19 03:18:53.764878 master-0 kubenswrapper[7776]: I0219 03:18:53.764726 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:53.764878 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:53.764878 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:53.764878 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:53.765907 master-0 kubenswrapper[7776]: I0219 03:18:53.765575 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:54.395679 master-0 kubenswrapper[7776]: I0219 03:18:54.395640 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-node-identity_network-node-identity-rm5jg_a52be87c-e707-4269-96da-537708d52b64/approver/1.log" Feb 19 03:18:54.396364 master-0 kubenswrapper[7776]: I0219 03:18:54.396312 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rm5jg" event={"ID":"a52be87c-e707-4269-96da-537708d52b64","Type":"ContainerStarted","Data":"1178d105c0e052b61bfa106fd62d785aa7c58dcf39d0657b4779dea3c4c320eb"} Feb 19 03:18:54.450061 master-0 kubenswrapper[7776]: E0219 03:18:54.449941 7776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:18:54.450061 master-0 kubenswrapper[7776]: I0219 03:18:54.450017 7776 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 03:18:54.764425 master-0 kubenswrapper[7776]: I0219 03:18:54.764322 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:54.764425 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:54.764425 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:54.764425 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:54.764425 master-0 kubenswrapper[7776]: I0219 03:18:54.764402 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:55.764074 master-0 kubenswrapper[7776]: I0219 03:18:55.763983 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:55.764074 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:55.764074 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:55.764074 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:55.764762 master-0 kubenswrapper[7776]: I0219 03:18:55.764088 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:56.764588 master-0 kubenswrapper[7776]: I0219 03:18:56.764488 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:56.764588 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:56.764588 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:56.764588 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:56.764588 master-0 kubenswrapper[7776]: I0219 03:18:56.764576 7776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:57.763704 master-0 kubenswrapper[7776]: I0219 03:18:57.763625 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:57.763704 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:57.763704 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:57.763704 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:57.763704 master-0 kubenswrapper[7776]: I0219 03:18:57.763701 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:58.226661 master-0 kubenswrapper[7776]: E0219 03:18:58.226562 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:18:58.226661 master-0 kubenswrapper[7776]: E0219 03:18:58.226642 7776 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 03:18:58.764015 master-0 kubenswrapper[7776]: I0219 03:18:58.763929 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:58.764015 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:58.764015 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:58.764015 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:58.764015 master-0 kubenswrapper[7776]: I0219 03:18:58.764004 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:18:59.764436 master-0 kubenswrapper[7776]: I0219 03:18:59.764306 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:18:59.764436 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:18:59.764436 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:18:59.764436 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:18:59.764436 master-0 kubenswrapper[7776]: I0219 03:18:59.764392 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:00.764730 master-0 kubenswrapper[7776]: I0219 03:19:00.764592 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:00.764730 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:00.764730 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:00.764730 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:00.766022 master-0 kubenswrapper[7776]: I0219 03:19:00.764734 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:01.763998 master-0 kubenswrapper[7776]: I0219 03:19:01.763874 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:01.763998 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:01.763998 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:01.763998 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:01.764391 master-0 kubenswrapper[7776]: I0219 03:19:01.764009 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:02.768167 master-0 kubenswrapper[7776]: I0219 03:19:02.768102 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:02.768167 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:02.768167 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:02.768167 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:02.768167 master-0 kubenswrapper[7776]: I0219 03:19:02.768163 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:03.764106 master-0 kubenswrapper[7776]: I0219 03:19:03.764042 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:03.764106 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:03.764106 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:03.764106 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:03.764469 master-0 kubenswrapper[7776]: I0219 03:19:03.764114 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:03.850529 master-0 kubenswrapper[7776]: I0219 03:19:03.850443 7776 status_manager.go:851] "Failed to get status for pod" 
podUID="18a83278819db2092fa26d8274eb3f00" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Feb 19 03:19:04.419538 master-0 kubenswrapper[7776]: E0219 03:19:04.419486 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 03:19:04.419538 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649" Netns:"/var/run/netns/e9779647-87c0-4e6f-8290-638f2bbfb117" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:19:04.419538 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:19:04.419538 master-0 kubenswrapper[7776]: > Feb 19 03:19:04.419869 master-0 kubenswrapper[7776]: E0219 03:19:04.419563 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 03:19:04.419869 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649" Netns:"/var/run/netns/e9779647-87c0-4e6f-8290-638f2bbfb117" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: 
[openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:19:04.419869 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:19:04.419869 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:19:04.419869 master-0 kubenswrapper[7776]: E0219 03:19:04.419585 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:19:04.419869 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649" Netns:"/var/run/netns/e9779647-87c0-4e6f-8290-638f2bbfb117" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:19:04.419869 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:19:04.419869 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:19:04.419869 master-0 kubenswrapper[7776]: E0219 03:19:04.419646 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"installer-3-master-0_openshift-kube-apiserver(3fab5bbd-672c-4e18-9c1e-438e2360bc54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-3-master-0_openshift-kube-apiserver(3fab5bbd-672c-4e18-9c1e-438e2360bc54)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649\\\" Netns:\\\"/var/run/netns/e9779647-87c0-4e6f-8290-638f2bbfb117\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-3-master-0" podUID="3fab5bbd-672c-4e18-9c1e-438e2360bc54" Feb 19 03:19:04.450902 master-0 kubenswrapper[7776]: E0219 03:19:04.450795 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Feb 19 03:19:04.467132 master-0 kubenswrapper[7776]: I0219 03:19:04.467021 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:19:04.467959 master-0 kubenswrapper[7776]: I0219 03:19:04.467901 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:19:04.764136 master-0 kubenswrapper[7776]: I0219 03:19:04.764004 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:04.764136 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:04.764136 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:04.764136 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:04.764136 master-0 kubenswrapper[7776]: I0219 03:19:04.764071 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:05.764556 master-0 kubenswrapper[7776]: I0219 03:19:05.764440 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:05.764556 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:05.764556 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:05.764556 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:05.765571 master-0 kubenswrapper[7776]: I0219 03:19:05.764551 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:06.764154 master-0 kubenswrapper[7776]: I0219 03:19:06.764093 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:06.764154 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:06.764154 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:06.764154 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:06.765439 master-0 kubenswrapper[7776]: I0219 03:19:06.765396 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:07.765086 master-0 kubenswrapper[7776]: I0219 03:19:07.765003 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:07.765086 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:07.765086 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:07.765086 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:07.765086 master-0 kubenswrapper[7776]: I0219 03:19:07.765080 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:08.764774 master-0 kubenswrapper[7776]: I0219 03:19:08.764723 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:08.764774 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:08.764774 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:08.764774 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:08.765953 master-0 kubenswrapper[7776]: I0219 03:19:08.764795 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:09.765436 master-0 kubenswrapper[7776]: I0219 03:19:09.765316 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:09.765436 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:09.765436 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:09.765436 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:09.766470 master-0 kubenswrapper[7776]: I0219 03:19:09.765436 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:10.764552 master-0 kubenswrapper[7776]: I0219 03:19:10.764477 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:10.764552 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:10.764552 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:10.764552 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:10.764850 master-0 kubenswrapper[7776]: I0219 03:19:10.764569 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:11.712528 master-0 kubenswrapper[7776]: E0219 03:19:11.712363 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{router-default-7b65dc9fcb-t6jnq.1895876c1edc180b openshift-ingress 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-7b65dc9fcb-t6jnq,UID:76470062-ab83-47ed-a669-deeb71996548,APIVersion:v1,ResourceVersion:10192,FieldPath:spec.containers{router},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\" already present on 
machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:15:26.884116491 +0000 UTC m=+633.223801019,LastTimestamp:2026-02-19 03:18:13.893642027 +0000 UTC m=+800.233326545,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:19:11.763433 master-0 kubenswrapper[7776]: I0219 03:19:11.763364 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:11.763433 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:11.763433 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:11.763433 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:11.763433 master-0 kubenswrapper[7776]: I0219 03:19:11.763427 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:12.764462 master-0 kubenswrapper[7776]: I0219 03:19:12.764312 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:12.764462 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:12.764462 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:12.764462 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:12.765208 master-0 kubenswrapper[7776]: I0219 03:19:12.764464 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:13.764046 master-0 kubenswrapper[7776]: I0219 03:19:13.763961 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:13.764046 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:13.764046 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:13.764046 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:13.770328 master-0 kubenswrapper[7776]: I0219 03:19:13.764057 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:14.653413 master-0 kubenswrapper[7776]: E0219 03:19:14.653290 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 19 03:19:14.764274 master-0 kubenswrapper[7776]: I0219 03:19:14.764192 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:14.764274 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:14.764274 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:14.764274 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:14.764561 master-0 kubenswrapper[7776]: I0219 03:19:14.764292 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:15.764202 master-0 kubenswrapper[7776]: I0219 03:19:15.764107 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:15.764202 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:15.764202 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:15.764202 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:15.765249 master-0 kubenswrapper[7776]: I0219 03:19:15.764215 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:16.764677 master-0 kubenswrapper[7776]: I0219 03:19:16.764588 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:16.764677 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:16.764677 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:16.764677 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:16.765457 master-0 kubenswrapper[7776]: I0219 03:19:16.764685 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:17.764286 master-0 kubenswrapper[7776]: I0219 03:19:17.764187 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:17.764286 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:17.764286 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:17.764286 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:17.764603 master-0 kubenswrapper[7776]: I0219 03:19:17.764286 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:17.863529 master-0 kubenswrapper[7776]: E0219 03:19:17.863470 7776 mirror_client.go:138] "Failed deleting a mirror pod" 
err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 19 03:19:17.864129 master-0 kubenswrapper[7776]: I0219 03:19:17.864101 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 19 03:19:17.881999 master-0 kubenswrapper[7776]: W0219 03:19:17.881967 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb419b8533666d3ae7054c771ce97a95f.slice/crio-6098282b64423ad9dddb84a69efced826ff8c34354a14bb5812b294431de3af7 WatchSource:0}: Error finding container 6098282b64423ad9dddb84a69efced826ff8c34354a14bb5812b294431de3af7: Status 404 returned error can't find the container with id 6098282b64423ad9dddb84a69efced826ff8c34354a14bb5812b294431de3af7 Feb 19 03:19:18.577012 master-0 kubenswrapper[7776]: I0219 03:19:18.576905 7776 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="49d6109e593a1f6854e4a23b0f0809b7c8251c11ffac6d5d3c63dd533a448342" exitCode=0 Feb 19 03:19:18.577012 master-0 kubenswrapper[7776]: I0219 03:19:18.576995 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"49d6109e593a1f6854e4a23b0f0809b7c8251c11ffac6d5d3c63dd533a448342"} Feb 19 03:19:18.577580 master-0 kubenswrapper[7776]: I0219 03:19:18.577082 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"6098282b64423ad9dddb84a69efced826ff8c34354a14bb5812b294431de3af7"} Feb 19 03:19:18.577788 master-0 kubenswrapper[7776]: I0219 03:19:18.577716 7776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:19:18.577788 master-0 kubenswrapper[7776]: I0219 03:19:18.577761 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:19:18.599804 master-0 kubenswrapper[7776]: E0219 03:19:18.599546 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:19:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:19:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:19:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:19:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0dcba5d04f25f6e382ffecdd94057bd8a99cffb6a00a8c7da186e9871ae459ea\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:92f996986deaacc20f2d7929be6465ef80f234c7c73757735ab489489ad69464\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702667973},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:01d70013efcb6bd53533de62b00867982cc8cfd7ea2bcc920f1a89ec9a1e0a93\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3d25e25fd688987cf457312a70060e31c5091a30a7d4b691cf7e566c69fa51f4\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234172623},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2f02611c935b387581e1c3be693869fdf266797ea7c5bcb704c0b6e7d0a6f12f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:f92684229a0699b57eaf06ea192bcde396a4e401a7bf7726499b7edac566dac8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210130107},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2e
e9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:19:18.763580 master-0 kubenswrapper[7776]: I0219 03:19:18.763484 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:18.763580 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 
03:19:18.763580 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:18.763580 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:18.763897 master-0 kubenswrapper[7776]: I0219 03:19:18.763593 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:19.764387 master-0 kubenswrapper[7776]: I0219 03:19:19.764299 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:19.764387 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:19.764387 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:19.764387 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:19.764387 master-0 kubenswrapper[7776]: I0219 03:19:19.764370 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:20.763880 master-0 kubenswrapper[7776]: I0219 03:19:20.763821 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:20.763880 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:20.763880 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:20.763880 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:20.764454 master-0 kubenswrapper[7776]: I0219 03:19:20.764414 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:21.763893 master-0 kubenswrapper[7776]: I0219 03:19:21.763767 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:21.763893 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:21.763893 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:21.763893 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:21.763893 master-0 kubenswrapper[7776]: I0219 03:19:21.763830 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:22.764631 master-0 kubenswrapper[7776]: I0219 03:19:22.764541 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:22.764631 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld 
Feb 19 03:19:22.764631 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:22.764631 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:22.764631 master-0 kubenswrapper[7776]: I0219 03:19:22.764631 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:23.763820 master-0 kubenswrapper[7776]: I0219 03:19:23.763740 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:23.763820 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:23.763820 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:23.763820 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:23.764150 master-0 kubenswrapper[7776]: I0219 03:19:23.763850 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:24.764511 master-0 kubenswrapper[7776]: I0219 03:19:24.764418 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:24.764511 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:24.764511 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:24.764511 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:24.765287 master-0 kubenswrapper[7776]: I0219 03:19:24.764569 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:25.054745 master-0 kubenswrapper[7776]: E0219 03:19:25.054548 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 19 03:19:25.774281 master-0 kubenswrapper[7776]: I0219 03:19:25.774192 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:25.774281 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:25.774281 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:25.774281 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:25.774281 master-0 kubenswrapper[7776]: I0219 03:19:25.774277 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:26.763427 master-0 
kubenswrapper[7776]: I0219 03:19:26.763339 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:26.763427 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:26.763427 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:26.763427 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:26.763427 master-0 kubenswrapper[7776]: I0219 03:19:26.763427 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:27.763611 master-0 kubenswrapper[7776]: I0219 03:19:27.763561 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:27.763611 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:27.763611 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:27.763611 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:27.764450 master-0 kubenswrapper[7776]: I0219 03:19:27.764385 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:28.600964 master-0 kubenswrapper[7776]: E0219 03:19:28.600878 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:19:28.764214 master-0 kubenswrapper[7776]: I0219 03:19:28.764108 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:28.764214 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:28.764214 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:28.764214 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:28.764214 master-0 kubenswrapper[7776]: I0219 03:19:28.764182 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:29.764701 master-0 kubenswrapper[7776]: I0219 03:19:29.764591 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:29.764701 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:29.764701 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:29.764701 master-0 kubenswrapper[7776]: healthz check failed Feb 
19 03:19:29.765735 master-0 kubenswrapper[7776]: I0219 03:19:29.764701 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:30.764347 master-0 kubenswrapper[7776]: I0219 03:19:30.764013 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:30.764347 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:30.764347 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:30.764347 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:30.764347 master-0 kubenswrapper[7776]: I0219 03:19:30.764138 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:31.764379 master-0 kubenswrapper[7776]: I0219 03:19:31.764320 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:31.764379 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:31.764379 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:31.764379 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:31.765160 master-0 kubenswrapper[7776]: I0219 03:19:31.764392 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:32.766726 master-0 kubenswrapper[7776]: I0219 03:19:32.766667 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:32.766726 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:32.766726 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:32.766726 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:32.767427 master-0 kubenswrapper[7776]: I0219 03:19:32.766755 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:33.763591 master-0 kubenswrapper[7776]: I0219 03:19:33.763521 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:33.763591 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:33.763591 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:33.763591 master-0 kubenswrapper[7776]: healthz check 
failed Feb 19 03:19:33.763591 master-0 kubenswrapper[7776]: I0219 03:19:33.763592 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:34.764119 master-0 kubenswrapper[7776]: I0219 03:19:34.764013 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:34.764119 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:34.764119 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:34.764119 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:34.764119 master-0 kubenswrapper[7776]: I0219 03:19:34.764095 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:35.764011 master-0 kubenswrapper[7776]: I0219 03:19:35.763875 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:35.764011 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:35.764011 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:35.764011 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:35.765021 master-0 kubenswrapper[7776]: I0219 03:19:35.764013 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:35.855345 master-0 kubenswrapper[7776]: E0219 03:19:35.855264 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 19 03:19:36.764699 master-0 kubenswrapper[7776]: I0219 03:19:36.764645 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:36.764699 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:36.764699 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:36.764699 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:36.765573 master-0 kubenswrapper[7776]: I0219 03:19:36.765491 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:37.764389 master-0 kubenswrapper[7776]: I0219 03:19:37.764314 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:37.764389 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:37.764389 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:37.764389 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:37.765144 master-0 kubenswrapper[7776]: I0219 03:19:37.764418 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:38.601351 master-0 kubenswrapper[7776]: E0219 03:19:38.601236 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:19:38.764291 master-0 kubenswrapper[7776]: I0219 03:19:38.764213 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:38.764291 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:38.764291 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:38.764291 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:38.764291 master-0 kubenswrapper[7776]: I0219 03:19:38.764284 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:39.764098 master-0 kubenswrapper[7776]: I0219 03:19:39.763990 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:39.764098 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:39.764098 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:39.764098 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:39.764098 master-0 kubenswrapper[7776]: I0219 03:19:39.764089 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:40.763720 master-0 kubenswrapper[7776]: I0219 03:19:40.763600 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:40.763720 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:40.763720 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:40.763720 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:40.763720 master-0 kubenswrapper[7776]: I0219 03:19:40.763711 7776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:41.763836 master-0 kubenswrapper[7776]: I0219 03:19:41.763753 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:41.763836 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:41.763836 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:41.763836 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:41.764576 master-0 kubenswrapper[7776]: I0219 03:19:41.763839 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:42.764761 master-0 kubenswrapper[7776]: I0219 03:19:42.764644 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:42.764761 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:42.764761 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:42.764761 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:42.764761 master-0 kubenswrapper[7776]: I0219 03:19:42.764749 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:43.764380 master-0 kubenswrapper[7776]: I0219 03:19:43.764293 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:43.764380 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:43.764380 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:43.764380 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:43.765221 master-0 kubenswrapper[7776]: I0219 03:19:43.764386 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:44.763226 master-0 kubenswrapper[7776]: I0219 03:19:44.763170 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:44.763226 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:44.763226 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:44.763226 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:44.763491 master-0 kubenswrapper[7776]: I0219 03:19:44.763241 7776 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:45.716274 master-0 kubenswrapper[7776]: E0219 03:19:45.716134 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ingress-operator-6569778c84-qcd49.1895874ad965c6f0 openshift-ingress-operator 12429 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-operator,Name:ingress-operator-6569778c84-qcd49,UID:9ff96ce8-6427-4a42-afa6-8b8bc778f094,APIVersion:v1,ResourceVersion:3479,FieldPath:spec.containers{ingress-operator},},Reason:BackOff,Message:Back-off restarting failed container ingress-operator in pod ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:13:03 +0000 UTC,LastTimestamp:2026-02-19 03:18:16.842968687 +0000 UTC m=+803.182653205,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:19:45.763613 master-0 kubenswrapper[7776]: I0219 03:19:45.763546 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:45.763613 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:45.763613 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:45.763613 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:45.763952 master-0 kubenswrapper[7776]: I0219 03:19:45.763622 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:46.765160 master-0 kubenswrapper[7776]: I0219 03:19:46.765076 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:46.765160 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:46.765160 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:46.765160 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:46.766046 master-0 kubenswrapper[7776]: I0219 03:19:46.765178 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:47.457390 master-0 kubenswrapper[7776]: E0219 03:19:47.457293 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 19 03:19:47.766455 master-0 kubenswrapper[7776]: 
I0219 03:19:47.764183 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:47.766455 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:47.766455 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:47.766455 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:47.766455 master-0 kubenswrapper[7776]: I0219 03:19:47.764303 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:48.602625 master-0 kubenswrapper[7776]: E0219 03:19:48.602532 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:19:48.764575 master-0 kubenswrapper[7776]: I0219 03:19:48.764498 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:48.764575 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:48.764575 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:48.764575 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:48.764575 master-0 kubenswrapper[7776]: I0219 03:19:48.764568 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:49.764281 master-0 kubenswrapper[7776]: I0219 03:19:49.764162 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:49.764281 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:49.764281 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:49.764281 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:49.764281 master-0 kubenswrapper[7776]: I0219 03:19:49.764271 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:50.764765 master-0 kubenswrapper[7776]: I0219 03:19:50.764674 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:50.764765 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:50.764765 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:50.764765 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:50.765361 
master-0 kubenswrapper[7776]: I0219 03:19:50.764789 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:51.764303 master-0 kubenswrapper[7776]: I0219 03:19:51.764185 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:51.764303 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:51.764303 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:51.764303 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:51.764303 master-0 kubenswrapper[7776]: I0219 03:19:51.764300 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:52.580823 master-0 kubenswrapper[7776]: E0219 03:19:52.580704 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 19 03:19:52.765131 master-0 kubenswrapper[7776]: I0219 03:19:52.765074 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:52.765131 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:52.765131 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:52.765131 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:52.765414 master-0 kubenswrapper[7776]: I0219 03:19:52.765139 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:53.763572 master-0 kubenswrapper[7776]: I0219 03:19:53.763510 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:53.763572 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:53.763572 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:53.763572 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:53.763572 master-0 kubenswrapper[7776]: I0219 03:19:53.763569 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:53.837400 master-0 kubenswrapper[7776]: I0219 03:19:53.837279 7776 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="60f5cf312ba315b685c25de92b9f8cc980f0c49a86698d8a695e2b600355cacd" exitCode=0 Feb 19 03:19:53.837666 master-0 kubenswrapper[7776]: 
I0219 03:19:53.837326 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"60f5cf312ba315b685c25de92b9f8cc980f0c49a86698d8a695e2b600355cacd"} Feb 19 03:19:53.838844 master-0 kubenswrapper[7776]: I0219 03:19:53.838550 7776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:19:53.838844 master-0 kubenswrapper[7776]: I0219 03:19:53.838629 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:19:54.765674 master-0 kubenswrapper[7776]: I0219 03:19:54.765598 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:54.765674 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:54.765674 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:54.765674 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:54.766464 master-0 kubenswrapper[7776]: I0219 03:19:54.765710 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:54.848350 master-0 kubenswrapper[7776]: I0219 03:19:54.848251 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/5.log" Feb 19 03:19:54.849385 master-0 kubenswrapper[7776]: I0219 03:19:54.849341 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/cluster-cloud-controller-manager/0.log" Feb 19 03:19:54.849466 master-0 kubenswrapper[7776]: I0219 03:19:54.849414 7776 generic.go:334] "Generic (PLEG): container finished" podID="af2be4f9-f632-4a72-8f39-c96954403edc" containerID="e91ffe706d1ad6df0dfe02b5098676d02a6c7e690163f70c0b4d651c88fb78ce" exitCode=1 Feb 19 03:19:54.849532 master-0 kubenswrapper[7776]: I0219 03:19:54.849503 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerDied","Data":"e91ffe706d1ad6df0dfe02b5098676d02a6c7e690163f70c0b4d651c88fb78ce"} Feb 19 03:19:54.850027 master-0 kubenswrapper[7776]: I0219 03:19:54.849994 7776 scope.go:117] "RemoveContainer" containerID="e91ffe706d1ad6df0dfe02b5098676d02a6c7e690163f70c0b4d651c88fb78ce" Feb 19 03:19:54.852593 master-0 kubenswrapper[7776]: I0219 03:19:54.852520 7776 generic.go:334] "Generic (PLEG): container finished" podID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerID="eaa696773a18508c6c209d42ace51f1418a8f4dfe51b1543f829012e0cb65108" exitCode=0 Feb 19 03:19:54.852593 master-0 kubenswrapper[7776]: I0219 03:19:54.852563 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" 
event={"ID":"58c6f5a2-c0a8-4636-a057-cedbe0151579","Type":"ContainerDied","Data":"eaa696773a18508c6c209d42ace51f1418a8f4dfe51b1543f829012e0cb65108"} Feb 19 03:19:54.852715 master-0 kubenswrapper[7776]: I0219 03:19:54.852628 7776 scope.go:117] "RemoveContainer" containerID="a2bdec17dc1089972433ebc1bc1c16d0f4ac7fa020f8058705381c276b86bced" Feb 19 03:19:54.853796 master-0 kubenswrapper[7776]: I0219 03:19:54.853752 7776 scope.go:117] "RemoveContainer" containerID="eaa696773a18508c6c209d42ace51f1418a8f4dfe51b1543f829012e0cb65108" Feb 19 03:19:55.763409 master-0 kubenswrapper[7776]: I0219 03:19:55.763338 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:55.763409 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:55.763409 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:55.763409 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:55.763806 master-0 kubenswrapper[7776]: I0219 03:19:55.763424 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:55.862187 master-0 kubenswrapper[7776]: I0219 03:19:55.862128 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" event={"ID":"58c6f5a2-c0a8-4636-a057-cedbe0151579","Type":"ContainerStarted","Data":"eea9b0c6ce5430374ed8497b41ddc2add12c790b9231a25ef012e069c8a74ede"} Feb 19 03:19:55.863136 master-0 kubenswrapper[7776]: I0219 03:19:55.862447 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:19:55.864972 master-0 kubenswrapper[7776]: I0219 03:19:55.864714 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:19:55.866993 master-0 kubenswrapper[7776]: I0219 03:19:55.866916 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/5.log" Feb 19 03:19:55.868185 master-0 kubenswrapper[7776]: I0219 03:19:55.868142 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/cluster-cloud-controller-manager/0.log" Feb 19 03:19:55.868313 master-0 kubenswrapper[7776]: I0219 03:19:55.868236 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerStarted","Data":"6fb62379cb958f081a421ce8d12b9b9668be88db7dba4d6dcdffa112e9e319cf"} Feb 19 03:19:56.764917 master-0 kubenswrapper[7776]: I0219 03:19:56.764817 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:56.764917 
master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:56.764917 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:56.764917 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:56.765409 master-0 kubenswrapper[7776]: I0219 03:19:56.764931 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:57.763101 master-0 kubenswrapper[7776]: I0219 03:19:57.763027 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:57.763101 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:57.763101 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:57.763101 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:57.764122 master-0 kubenswrapper[7776]: I0219 03:19:57.763112 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:58.603970 master-0 kubenswrapper[7776]: E0219 03:19:58.603869 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:19:58.603970 master-0 kubenswrapper[7776]: E0219 03:19:58.603967 7776 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 03:19:58.763088 master-0 kubenswrapper[7776]: I0219 03:19:58.763006 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:58.763088 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:58.763088 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:58.763088 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:58.763088 master-0 kubenswrapper[7776]: I0219 03:19:58.763071 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:59.763694 master-0 kubenswrapper[7776]: I0219 03:19:59.763587 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:19:59.763694 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:19:59.763694 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:19:59.763694 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:19:59.764755 master-0 kubenswrapper[7776]: I0219 03:19:59.763731 7776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:19:59.902670 master-0 kubenswrapper[7776]: I0219 03:19:59.902578 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/2.log" Feb 19 03:19:59.903363 master-0 kubenswrapper[7776]: I0219 03:19:59.903311 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/1.log" Feb 19 03:19:59.903510 master-0 kubenswrapper[7776]: I0219 03:19:59.903393 7776 generic.go:334] "Generic (PLEG): container finished" podID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" containerID="6f78f5411f8025c775c1717b601fd356801be5421b8cffa32ecda2678d51b4c5" exitCode=1 Feb 19 03:19:59.903510 master-0 kubenswrapper[7776]: I0219 03:19:59.903439 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerDied","Data":"6f78f5411f8025c775c1717b601fd356801be5421b8cffa32ecda2678d51b4c5"} Feb 19 03:19:59.903510 master-0 kubenswrapper[7776]: I0219 03:19:59.903500 7776 scope.go:117] "RemoveContainer" containerID="06265a4a0b6f3c8a8128f95451a5945a8bbe001ae9ab38435a2630dfd4fd6aa3" Feb 19 03:19:59.904387 master-0 kubenswrapper[7776]: I0219 03:19:59.904330 7776 scope.go:117] "RemoveContainer" containerID="6f78f5411f8025c775c1717b601fd356801be5421b8cffa32ecda2678d51b4c5" Feb 19 03:19:59.904980 master-0 kubenswrapper[7776]: E0219 03:19:59.904923 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:20:00.658527 master-0 kubenswrapper[7776]: E0219 03:20:00.658399 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 19 03:20:00.764238 master-0 kubenswrapper[7776]: I0219 03:20:00.764139 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:00.764238 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:00.764238 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:00.764238 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:00.765496 master-0 kubenswrapper[7776]: I0219 03:20:00.764282 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Feb 19 03:20:00.914858 master-0 kubenswrapper[7776]: I0219 03:20:00.914660 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/2.log" Feb 19 03:20:01.791303 master-0 kubenswrapper[7776]: I0219 03:20:01.787505 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:01.791303 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:01.791303 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:01.791303 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:01.791303 master-0 kubenswrapper[7776]: I0219 03:20:01.787611 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:02.764187 master-0 kubenswrapper[7776]: I0219 03:20:02.764040 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:02.764187 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:02.764187 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:02.764187 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:02.764187 master-0 kubenswrapper[7776]: I0219 03:20:02.764139 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:02.940965 master-0 kubenswrapper[7776]: I0219 03:20:02.940914 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/5.log" Feb 19 03:20:02.941819 master-0 kubenswrapper[7776]: I0219 03:20:02.941361 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/config-sync-controllers/0.log" Feb 19 03:20:02.941912 master-0 kubenswrapper[7776]: I0219 03:20:02.941883 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/cluster-cloud-controller-manager/0.log" Feb 19 03:20:02.941982 master-0 kubenswrapper[7776]: I0219 03:20:02.941941 7776 generic.go:334] "Generic (PLEG): container finished" podID="af2be4f9-f632-4a72-8f39-c96954403edc" containerID="c9a8948e6182f0cdb976b661c449d741ee645d844809a7695d74084a213ff139" exitCode=1 Feb 19 03:20:02.941982 master-0 kubenswrapper[7776]: I0219 03:20:02.941974 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" 
event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerDied","Data":"c9a8948e6182f0cdb976b661c449d741ee645d844809a7695d74084a213ff139"} Feb 19 03:20:02.942653 master-0 kubenswrapper[7776]: I0219 03:20:02.942611 7776 scope.go:117] "RemoveContainer" containerID="c9a8948e6182f0cdb976b661c449d741ee645d844809a7695d74084a213ff139" Feb 19 03:20:03.762801 master-0 kubenswrapper[7776]: I0219 03:20:03.762731 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:03.762801 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:03.762801 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:03.762801 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:03.763172 master-0 kubenswrapper[7776]: I0219 03:20:03.762798 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:03.852628 master-0 kubenswrapper[7776]: I0219 03:20:03.852568 7776 status_manager.go:851] "Failed to get status for pod" podUID="e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" pod="openshift-kube-scheduler/installer-5-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-5-master-0)" Feb 19 03:20:03.952358 master-0 kubenswrapper[7776]: I0219 03:20:03.952318 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/5.log" Feb 19 03:20:03.952908 master-0 kubenswrapper[7776]: I0219 03:20:03.952708 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/config-sync-controllers/0.log" Feb 19 03:20:03.953211 master-0 kubenswrapper[7776]: I0219 03:20:03.953193 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/cluster-cloud-controller-manager/0.log" Feb 19 03:20:03.953311 master-0 kubenswrapper[7776]: I0219 03:20:03.953243 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" event={"ID":"af2be4f9-f632-4a72-8f39-c96954403edc","Type":"ContainerStarted","Data":"289ee53b4c19c9048207b2aeee6abce4663f9f9480628fbf0096fab799c9641b"} Feb 19 03:20:04.763903 master-0 kubenswrapper[7776]: I0219 03:20:04.763727 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:04.763903 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:04.763903 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:04.763903 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:04.763903 master-0 kubenswrapper[7776]: I0219 
03:20:04.763823 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:05.195299 master-0 kubenswrapper[7776]: E0219 03:20:05.195152 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 03:20:05.195299 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933" Netns:"/var/run/netns/87da5854-f7bf-40a9-84cc-aca75f08b895" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:20:05.195299 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:20:05.195299 master-0 kubenswrapper[7776]: > Feb 19 03:20:05.195299 master-0 kubenswrapper[7776]: E0219 03:20:05.195289 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 03:20:05.195299 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933" Netns:"/var/run/netns/87da5854-f7bf-40a9-84cc-aca75f08b895" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod 
[openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:20:05.195299 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:20:05.195299 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:20:05.197010 master-0 kubenswrapper[7776]: E0219 03:20:05.195323 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:20:05.197010 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933" Netns:"/var/run/netns/87da5854-f7bf-40a9-84cc-aca75f08b895" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:20:05.197010 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:20:05.197010 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:20:05.197010 master-0 kubenswrapper[7776]: E0219 03:20:05.195422 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"installer-3-master-0_openshift-kube-apiserver(3fab5bbd-672c-4e18-9c1e-438e2360bc54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-3-master-0_openshift-kube-apiserver(3fab5bbd-672c-4e18-9c1e-438e2360bc54)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933\\\" Netns:\\\"/var/run/netns/87da5854-f7bf-40a9-84cc-aca75f08b895\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-3-master-0" podUID="3fab5bbd-672c-4e18-9c1e-438e2360bc54" Feb 19 03:20:05.764802 master-0 kubenswrapper[7776]: I0219 03:20:05.764701 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:05.764802 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:05.764802 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:05.764802 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:05.765251 master-0 kubenswrapper[7776]: I0219 03:20:05.764813 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:05.968803 master-0 kubenswrapper[7776]: I0219 03:20:05.968726 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:20:05.969640 master-0 kubenswrapper[7776]: I0219 03:20:05.969549 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:20:06.764142 master-0 kubenswrapper[7776]: I0219 03:20:06.763980 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:06.764142 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:06.764142 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:06.764142 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:06.764142 master-0 kubenswrapper[7776]: I0219 03:20:06.764065 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:06.977461 master-0 kubenswrapper[7776]: I0219 03:20:06.977392 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-s559q_8f7d8fc8-c313-416f-b62b-b54db9944066/manager/1.log" Feb 19 03:20:06.978738 master-0 kubenswrapper[7776]: I0219 03:20:06.978692 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-s559q_8f7d8fc8-c313-416f-b62b-b54db9944066/manager/0.log" Feb 19 03:20:06.978845 master-0 kubenswrapper[7776]: I0219 03:20:06.978767 7776 generic.go:334] "Generic (PLEG): container finished" podID="8f7d8fc8-c313-416f-b62b-b54db9944066" containerID="027172ba4dcd10cd3e3177cc36691683dffc4cdf627b8d23cdb2d10cafe015ef" exitCode=1 Feb 19 03:20:06.978845 master-0 kubenswrapper[7776]: I0219 03:20:06.978814 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" event={"ID":"8f7d8fc8-c313-416f-b62b-b54db9944066","Type":"ContainerDied","Data":"027172ba4dcd10cd3e3177cc36691683dffc4cdf627b8d23cdb2d10cafe015ef"} Feb 19 03:20:06.978920 master-0 kubenswrapper[7776]: I0219 03:20:06.978862 7776 scope.go:117] "RemoveContainer" containerID="63e9da7bba52316e4ecf529d81e030bb4b7c5317fbd6fe3da25ae598ba0cf3f5" Feb 19 03:20:06.979799 master-0 kubenswrapper[7776]: I0219 03:20:06.979752 7776 scope.go:117] "RemoveContainer" containerID="027172ba4dcd10cd3e3177cc36691683dffc4cdf627b8d23cdb2d10cafe015ef" Feb 19 03:20:07.763846 master-0 kubenswrapper[7776]: I0219 03:20:07.763733 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:07.763846 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:07.763846 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:07.763846 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:07.763846 master-0 kubenswrapper[7776]: I0219 03:20:07.763835 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:07.990181 master-0 kubenswrapper[7776]: I0219 03:20:07.990092 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-s559q_8f7d8fc8-c313-416f-b62b-b54db9944066/manager/1.log" Feb 19 03:20:07.991048 master-0 kubenswrapper[7776]: I0219 03:20:07.990985 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" event={"ID":"8f7d8fc8-c313-416f-b62b-b54db9944066","Type":"ContainerStarted","Data":"c0baa91d6eaf13fe6edc3c13ccd1a3b040274a7e0c0212a31409ddffc6abe656"} Feb 19 03:20:07.991314 master-0 kubenswrapper[7776]: I0219 03:20:07.991236 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:20:08.763550 master-0 kubenswrapper[7776]: I0219 03:20:08.763490 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:08.763550 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:08.763550 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:08.763550 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:08.763863 master-0 kubenswrapper[7776]: I0219 03:20:08.763584 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:09.764009 master-0 kubenswrapper[7776]: I0219 03:20:09.763914 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:09.764009 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:09.764009 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:09.764009 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:09.765005 master-0 kubenswrapper[7776]: I0219 03:20:09.764007 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:10.764111 master-0 kubenswrapper[7776]: I0219 03:20:10.764027 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:10.764111 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:10.764111 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:10.764111 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:10.765242 master-0 kubenswrapper[7776]: I0219 03:20:10.764119 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:11.763361 master-0 kubenswrapper[7776]: I0219 03:20:11.763288 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:11.763361 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:11.763361 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:11.763361 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:11.763361 master-0 kubenswrapper[7776]: I0219 03:20:11.763358 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:12.492191 master-0 kubenswrapper[7776]: I0219 03:20:12.492080 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:20:12.764314 master-0 kubenswrapper[7776]: I0219 03:20:12.764097 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:12.764314 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:12.764314 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:12.764314 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:12.764314 master-0 kubenswrapper[7776]: I0219 03:20:12.764201 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:13.764192 master-0 kubenswrapper[7776]: I0219 03:20:13.764055 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:13.764192 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:13.764192 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:13.764192 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:13.764192 master-0 kubenswrapper[7776]: I0219 03:20:13.764141 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:13.842978 master-0 kubenswrapper[7776]: I0219 03:20:13.842877 7776 scope.go:117] "RemoveContainer" containerID="6f78f5411f8025c775c1717b601fd356801be5421b8cffa32ecda2678d51b4c5" Feb 19 03:20:13.843352 master-0 kubenswrapper[7776]: E0219 03:20:13.843243 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller 
pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:20:14.041671 master-0 kubenswrapper[7776]: I0219 03:20:14.041533 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/manager/1.log" Feb 19 03:20:14.042885 master-0 kubenswrapper[7776]: I0219 03:20:14.042821 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/manager/0.log" Feb 19 03:20:14.043546 master-0 kubenswrapper[7776]: I0219 03:20:14.043500 7776 generic.go:334] "Generic (PLEG): container finished" podID="7012676e-f35d-46e5-83e8-a63172dd076e" containerID="85c05765f6dadb3299427fcae734f7bc6d46d71d6d24a21ddaf8cbc81b5c9220" exitCode=1 Feb 19 03:20:14.043621 master-0 kubenswrapper[7776]: I0219 03:20:14.043558 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" event={"ID":"7012676e-f35d-46e5-83e8-a63172dd076e","Type":"ContainerDied","Data":"85c05765f6dadb3299427fcae734f7bc6d46d71d6d24a21ddaf8cbc81b5c9220"} Feb 19 03:20:14.043684 master-0 kubenswrapper[7776]: I0219 03:20:14.043665 7776 scope.go:117] "RemoveContainer" containerID="63378086041fcb0de956f1a5a160faad6c0e85b100c25eacbce569a26a79079c" Feb 19 03:20:14.044531 master-0 kubenswrapper[7776]: I0219 03:20:14.044479 7776 scope.go:117] "RemoveContainer" containerID="85c05765f6dadb3299427fcae734f7bc6d46d71d6d24a21ddaf8cbc81b5c9220" Feb 19 03:20:14.764185 master-0 kubenswrapper[7776]: I0219 03:20:14.764109 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:20:14.764185 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:20:14.764185 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:20:14.764185 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:20:14.764185 master-0 kubenswrapper[7776]: I0219 03:20:14.764172 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:20:14.764928 master-0 kubenswrapper[7776]: I0219 03:20:14.764221 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:20:14.764928 master-0 kubenswrapper[7776]: I0219 03:20:14.764819 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366"} pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" containerMessage="Container router failed startup probe, will be restarted" Feb 19 03:20:14.764928 master-0 kubenswrapper[7776]: I0219 03:20:14.764854 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" 
containerID="cri-o://047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366" gracePeriod=3600 Feb 19 03:20:15.064732 master-0 kubenswrapper[7776]: I0219 03:20:15.064591 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/manager/1.log" Feb 19 03:20:15.065315 master-0 kubenswrapper[7776]: I0219 03:20:15.065225 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" event={"ID":"7012676e-f35d-46e5-83e8-a63172dd076e","Type":"ContainerStarted","Data":"8894bf86e799bc0054b0d557a37a40df23510a6a95f11afcf342bac5b106862c"} Feb 19 03:20:15.065740 master-0 kubenswrapper[7776]: I0219 03:20:15.065697 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:20:17.061017 master-0 kubenswrapper[7776]: E0219 03:20:17.060899 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:20:18.733393 master-0 kubenswrapper[7776]: E0219 03:20:18.733049 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:20:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:20:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:20:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:20:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0dcba5d04f25f6e382ffecdd94057bd8a99cffb6a00a8c7da186e9871ae459ea\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:92f996986deaacc20f2d7929be6465ef80f234c7c73757735ab489489ad69464\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702667973},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:01d70013efcb6bd53533de62b00867982cc8cfd7ea2bcc920f1a89ec9a1e0a93\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3d25e25fd688987cf457312a70060e31c5091a30a7d4b691cf7e566c69fa51f4\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234172623},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2f02611c935b387581e1c3be693869fdf266797ea7c5bcb704c0b6e7d0a6f12f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:f92684229a0699b57eaf06ea192bcde396a4e401a7bf7726499b7edac566dac8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210130107},{\\\"names\\\":[\\\"registry.red
hat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde5
4ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:20:19.720219 master-0 kubenswrapper[7776]: E0219 03:20:19.720011 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.1895878752bd8e1d openshift-kube-controller-manager 11907 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:50eac3d8c63234f2a49e98044c0d4f67,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:17:23 +0000 UTC,LastTimestamp:2026-02-19 03:18:18.920097545 +0000 UTC m=+805.259782093,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:20:20.969983 master-0 kubenswrapper[7776]: I0219 03:20:20.969907 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:20:25.843775 master-0 kubenswrapper[7776]: I0219 03:20:25.843695 7776 scope.go:117] "RemoveContainer" containerID="6f78f5411f8025c775c1717b601fd356801be5421b8cffa32ecda2678d51b4c5" Feb 19 03:20:26.154886 master-0 kubenswrapper[7776]: I0219 03:20:26.154795 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/2.log" Feb 19 03:20:26.155305 master-0 kubenswrapper[7776]: I0219 03:20:26.154906 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerStarted","Data":"954e89fd2a1c4166cbbe15a61374262fcd3983766230bd99b3ec85e7e56ecaff"} Feb 19 03:20:27.842073 master-0 kubenswrapper[7776]: E0219 03:20:27.842001 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 19 03:20:28.175913 master-0 kubenswrapper[7776]: 
I0219 03:20:28.175844 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"0b461f34d367324dba43f9d8dc1f9f03674c68ca7ee50c7c17368a3d5dc7170e"} Feb 19 03:20:28.176483 master-0 kubenswrapper[7776]: I0219 03:20:28.176455 7776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:20:28.176532 master-0 kubenswrapper[7776]: I0219 03:20:28.176488 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:20:28.734446 master-0 kubenswrapper[7776]: E0219 03:20:28.734371 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:20:29.190453 master-0 kubenswrapper[7776]: I0219 03:20:29.190398 7776 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="0b461f34d367324dba43f9d8dc1f9f03674c68ca7ee50c7c17368a3d5dc7170e" exitCode=0 Feb 19 03:20:29.190453 master-0 kubenswrapper[7776]: I0219 03:20:29.190440 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"0b461f34d367324dba43f9d8dc1f9f03674c68ca7ee50c7c17368a3d5dc7170e"} Feb 19 03:20:33.222558 master-0 kubenswrapper[7776]: I0219 03:20:33.222476 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/0.log" Feb 19 03:20:33.223622 master-0 kubenswrapper[7776]: I0219 03:20:33.222567 7776 generic.go:334] "Generic (PLEG): container finished" podID="af5828ea-090f-4c8f-90e6-c4e405e69ec5" containerID="c7efec73ecd5959e325f34dc1abcbd0a0ee696d09e18dbddaa6606e552d9257d" exitCode=1 Feb 19 03:20:33.223622 master-0 kubenswrapper[7776]: I0219 03:20:33.222670 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" event={"ID":"af5828ea-090f-4c8f-90e6-c4e405e69ec5","Type":"ContainerDied","Data":"c7efec73ecd5959e325f34dc1abcbd0a0ee696d09e18dbddaa6606e552d9257d"} Feb 19 03:20:33.223622 master-0 kubenswrapper[7776]: I0219 03:20:33.223300 7776 scope.go:117] "RemoveContainer" containerID="c7efec73ecd5959e325f34dc1abcbd0a0ee696d09e18dbddaa6606e552d9257d" Feb 19 03:20:33.228101 master-0 kubenswrapper[7776]: I0219 03:20:33.228036 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5_0664d88f-f697-4182-93cd-f208ff6f3ac2/control-plane-machine-set-operator/0.log" Feb 19 03:20:33.228241 master-0 kubenswrapper[7776]: I0219 03:20:33.228120 7776 generic.go:334] "Generic (PLEG): container finished" podID="0664d88f-f697-4182-93cd-f208ff6f3ac2" containerID="47c00fb2c67d340bd7a8f33cdbea3ac43d78e7ccbf383a58ca7fe0117068da43" exitCode=1 Feb 19 03:20:33.228407 master-0 kubenswrapper[7776]: I0219 03:20:33.228178 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" 
event={"ID":"0664d88f-f697-4182-93cd-f208ff6f3ac2","Type":"ContainerDied","Data":"47c00fb2c67d340bd7a8f33cdbea3ac43d78e7ccbf383a58ca7fe0117068da43"} Feb 19 03:20:33.229620 master-0 kubenswrapper[7776]: I0219 03:20:33.229545 7776 scope.go:117] "RemoveContainer" containerID="47c00fb2c67d340bd7a8f33cdbea3ac43d78e7ccbf383a58ca7fe0117068da43" Feb 19 03:20:34.062515 master-0 kubenswrapper[7776]: E0219 03:20:34.062406 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:20:34.237465 master-0 kubenswrapper[7776]: I0219 03:20:34.237396 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/0.log" Feb 19 03:20:34.238305 master-0 kubenswrapper[7776]: I0219 03:20:34.237522 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" event={"ID":"af5828ea-090f-4c8f-90e6-c4e405e69ec5","Type":"ContainerStarted","Data":"675b0788e605256106684c4e377b174ce97f9e7a35c1265d0f37c4603a7e545a"} Feb 19 03:20:34.239751 master-0 kubenswrapper[7776]: I0219 03:20:34.239692 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5_0664d88f-f697-4182-93cd-f208ff6f3ac2/control-plane-machine-set-operator/0.log" Feb 19 03:20:34.239924 master-0 kubenswrapper[7776]: I0219 03:20:34.239752 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" event={"ID":"0664d88f-f697-4182-93cd-f208ff6f3ac2","Type":"ContainerStarted","Data":"4b6a73b02e77bad2b6e9ef27089a4e1a7f0f484513ec913c5624e7c0bc68c6c7"} Feb 19 03:20:37.270469 master-0 kubenswrapper[7776]: I0219 03:20:37.270385 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-tlhpc_92804daf-1fd0-4008-afff-4f9bc362990b/machine-approver-controller/0.log" Feb 19 03:20:37.271142 master-0 kubenswrapper[7776]: I0219 03:20:37.271080 7776 generic.go:334] "Generic (PLEG): container finished" podID="92804daf-1fd0-4008-afff-4f9bc362990b" containerID="75ea874391f33c0fa200e27a6fbad18b4a8573ebe40f901e494bc7cfe2905ed3" exitCode=255 Feb 19 03:20:37.271196 master-0 kubenswrapper[7776]: I0219 03:20:37.271159 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" event={"ID":"92804daf-1fd0-4008-afff-4f9bc362990b","Type":"ContainerDied","Data":"75ea874391f33c0fa200e27a6fbad18b4a8573ebe40f901e494bc7cfe2905ed3"} Feb 19 03:20:37.272098 master-0 kubenswrapper[7776]: I0219 03:20:37.272060 7776 scope.go:117] "RemoveContainer" containerID="75ea874391f33c0fa200e27a6fbad18b4a8573ebe40f901e494bc7cfe2905ed3" Feb 19 03:20:38.282934 master-0 kubenswrapper[7776]: I0219 03:20:38.282873 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:20:38.282934 master-0 kubenswrapper[7776]: I0219 03:20:38.282932 7776 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" 
containerID="52f129c7009e6597cab7613e274a5e92bff18227b925d3ec2d217acbeb4c8d74" exitCode=0 Feb 19 03:20:38.283689 master-0 kubenswrapper[7776]: I0219 03:20:38.282982 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerDied","Data":"52f129c7009e6597cab7613e274a5e92bff18227b925d3ec2d217acbeb4c8d74"} Feb 19 03:20:38.283912 master-0 kubenswrapper[7776]: I0219 03:20:38.283879 7776 scope.go:117] "RemoveContainer" containerID="52f129c7009e6597cab7613e274a5e92bff18227b925d3ec2d217acbeb4c8d74" Feb 19 03:20:38.285976 master-0 kubenswrapper[7776]: I0219 03:20:38.285953 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-tlhpc_92804daf-1fd0-4008-afff-4f9bc362990b/machine-approver-controller/0.log" Feb 19 03:20:38.286723 master-0 kubenswrapper[7776]: I0219 03:20:38.286522 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" event={"ID":"92804daf-1fd0-4008-afff-4f9bc362990b","Type":"ContainerStarted","Data":"a09eeb891e4595bc5a51961bb68274d285ace3800759eac2c775e69a488b00dd"} Feb 19 03:20:38.291272 master-0 kubenswrapper[7776]: I0219 03:20:38.291221 7776 generic.go:334] "Generic (PLEG): container finished" podID="15a571c6-7c47-4b57-bc5b-e46544a114c8" containerID="f288826ba3365168a27108ffc9be5733bebebaf28a3b66f0962898e5aed02b61" exitCode=0 Feb 19 03:20:38.291378 master-0 kubenswrapper[7776]: I0219 03:20:38.291272 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" event={"ID":"15a571c6-7c47-4b57-bc5b-e46544a114c8","Type":"ContainerDied","Data":"f288826ba3365168a27108ffc9be5733bebebaf28a3b66f0962898e5aed02b61"} Feb 19 03:20:38.291378 master-0 kubenswrapper[7776]: I0219 03:20:38.291300 7776 scope.go:117] "RemoveContainer" containerID="0f3766857d0863e0c7bf5650275239873c534f3ae3d01d3445961163b616988a" Feb 19 03:20:38.291753 master-0 kubenswrapper[7776]: I0219 03:20:38.291729 7776 scope.go:117] "RemoveContainer" containerID="f288826ba3365168a27108ffc9be5733bebebaf28a3b66f0962898e5aed02b61" Feb 19 03:20:38.735708 master-0 kubenswrapper[7776]: E0219 03:20:38.735606 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:20:39.304740 master-0 kubenswrapper[7776]: I0219 03:20:39.304627 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" event={"ID":"15a571c6-7c47-4b57-bc5b-e46544a114c8","Type":"ContainerStarted","Data":"5eee64f3af53cff7d15954f89d202ef1fb5df9aba3b834285a58a828e4976f0c"} Feb 19 03:20:39.309400 master-0 kubenswrapper[7776]: I0219 03:20:39.309350 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:20:39.309553 master-0 kubenswrapper[7776]: I0219 03:20:39.309439 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"c502bd434684e115c25e449379b45274b90007192ea4b6b1d2d7ae5fc1aa05da"} Feb 19 03:20:43.688730 master-0 kubenswrapper[7776]: I0219 03:20:43.688591 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:20:43.688730 master-0 kubenswrapper[7776]: I0219 03:20:43.688692 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:20:46.365281 master-0 kubenswrapper[7776]: I0219 03:20:46.365177 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/4.log" Feb 19 03:20:46.366054 master-0 kubenswrapper[7776]: I0219 03:20:46.365874 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/3.log" Feb 19 03:20:46.366308 master-0 kubenswrapper[7776]: I0219 03:20:46.366204 7776 generic.go:334] "Generic (PLEG): container finished" podID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" containerID="b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5" exitCode=1 Feb 19 03:20:46.366308 master-0 kubenswrapper[7776]: I0219 03:20:46.366277 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerDied","Data":"b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5"} Feb 19 03:20:46.366308 master-0 kubenswrapper[7776]: I0219 03:20:46.366317 7776 scope.go:117] "RemoveContainer" containerID="1f1abc6b28b9c5fc6a345c0dc375481a87aee8246eff359206608d83aec4c1c1" Feb 19 03:20:46.367016 master-0 kubenswrapper[7776]: I0219 03:20:46.366954 7776 scope.go:117] "RemoveContainer" containerID="b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5" Feb 19 03:20:46.367329 master-0 kubenswrapper[7776]: E0219 03:20:46.367230 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:20:46.689237 master-0 kubenswrapper[7776]: I0219 03:20:46.689150 7776 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:20:46.689562 master-0 kubenswrapper[7776]: I0219 03:20:46.689276 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:20:47.378062 master-0 kubenswrapper[7776]: I0219 
03:20:47.377961 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/4.log" Feb 19 03:20:48.737068 master-0 kubenswrapper[7776]: E0219 03:20:48.736849 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:20:49.397993 master-0 kubenswrapper[7776]: I0219 03:20:49.397922 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler/0.log" Feb 19 03:20:49.398653 master-0 kubenswrapper[7776]: I0219 03:20:49.398594 7776 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="ebeab0f2e4292264d96a63c87d2d2fdbec7d9f9a916fb23b3f013edea6328327" exitCode=1 Feb 19 03:20:49.398778 master-0 kubenswrapper[7776]: I0219 03:20:49.398709 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerDied","Data":"ebeab0f2e4292264d96a63c87d2d2fdbec7d9f9a916fb23b3f013edea6328327"} Feb 19 03:20:49.399717 master-0 kubenswrapper[7776]: I0219 03:20:49.399653 7776 scope.go:117] "RemoveContainer" containerID="ebeab0f2e4292264d96a63c87d2d2fdbec7d9f9a916fb23b3f013edea6328327" Feb 19 03:20:49.401611 master-0 kubenswrapper[7776]: I0219 03:20:49.401394 7776 generic.go:334] "Generic (PLEG): container finished" podID="06898300-c6e2-4d64-9ebf-d20f4338cccc" containerID="8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668" exitCode=0 Feb 19 03:20:49.401611 master-0 kubenswrapper[7776]: I0219 03:20:49.401449 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" event={"ID":"06898300-c6e2-4d64-9ebf-d20f4338cccc","Type":"ContainerDied","Data":"8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668"} Feb 19 03:20:49.402128 master-0 kubenswrapper[7776]: I0219 03:20:49.402078 7776 scope.go:117] "RemoveContainer" containerID="8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668" Feb 19 03:20:50.410868 master-0 kubenswrapper[7776]: I0219 03:20:50.410787 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" event={"ID":"06898300-c6e2-4d64-9ebf-d20f4338cccc","Type":"ContainerStarted","Data":"1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75"} Feb 19 03:20:50.411653 master-0 kubenswrapper[7776]: I0219 03:20:50.411148 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:20:50.414688 master-0 kubenswrapper[7776]: I0219 03:20:50.414638 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler/0.log" Feb 19 03:20:50.414872 master-0 kubenswrapper[7776]: I0219 03:20:50.414833 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:20:50.415098 master-0 kubenswrapper[7776]: I0219 03:20:50.415052 7776 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"2d484b07e94495906a9ef1c8f980fb107c93c95a40a52c0019224db82b51fc4d"} Feb 19 03:20:50.415378 master-0 kubenswrapper[7776]: I0219 03:20:50.415328 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:20:51.064175 master-0 kubenswrapper[7776]: E0219 03:20:51.064068 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:20:53.727276 master-0 kubenswrapper[7776]: E0219 03:20:53.727118 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189587876030a215 openshift-kube-controller-manager 11915 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:50eac3d8c63234f2a49e98044c0d4f67,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:17:23 +0000 UTC,LastTimestamp:2026-02-19 03:18:19.204050079 +0000 UTC m=+805.543734597,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:20:56.461983 master-0 kubenswrapper[7776]: I0219 03:20:56.461889 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/3.log" Feb 19 03:20:56.462907 master-0 kubenswrapper[7776]: I0219 03:20:56.462599 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/2.log" Feb 19 03:20:56.462907 master-0 kubenswrapper[7776]: I0219 03:20:56.462647 7776 generic.go:334] "Generic (PLEG): container finished" podID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" containerID="954e89fd2a1c4166cbbe15a61374262fcd3983766230bd99b3ec85e7e56ecaff" exitCode=1 Feb 19 03:20:56.462907 master-0 kubenswrapper[7776]: I0219 03:20:56.462682 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerDied","Data":"954e89fd2a1c4166cbbe15a61374262fcd3983766230bd99b3ec85e7e56ecaff"} Feb 19 03:20:56.462907 master-0 kubenswrapper[7776]: I0219 03:20:56.462723 7776 scope.go:117] "RemoveContainer" containerID="6f78f5411f8025c775c1717b601fd356801be5421b8cffa32ecda2678d51b4c5" Feb 19 03:20:56.463485 master-0 kubenswrapper[7776]: I0219 03:20:56.463434 7776 scope.go:117] "RemoveContainer" containerID="954e89fd2a1c4166cbbe15a61374262fcd3983766230bd99b3ec85e7e56ecaff" Feb 19 03:20:56.463762 master-0 kubenswrapper[7776]: E0219 03:20:56.463724 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:20:56.689133 master-0 kubenswrapper[7776]: I0219 03:20:56.688996 7776 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:20:56.689133 master-0 kubenswrapper[7776]: I0219 03:20:56.689119 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:20:57.472958 master-0 kubenswrapper[7776]: I0219 03:20:57.472902 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/3.log" Feb 19 03:20:58.738057 master-0 kubenswrapper[7776]: E0219 03:20:58.737980 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:20:58.738057 master-0 kubenswrapper[7776]: E0219 03:20:58.738030 7776 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 03:20:59.843960 master-0 kubenswrapper[7776]: I0219 03:20:59.843857 7776 scope.go:117] "RemoveContainer" containerID="b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5" Feb 19 03:20:59.845004 master-0 kubenswrapper[7776]: E0219 03:20:59.844439 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:21:01.509067 master-0 kubenswrapper[7776]: I0219 03:21:01.508984 7776 generic.go:334] "Generic (PLEG): container finished" podID="76470062-ab83-47ed-a669-deeb71996548" containerID="047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366" exitCode=0 Feb 19 03:21:01.509067 master-0 kubenswrapper[7776]: I0219 03:21:01.509050 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" event={"ID":"76470062-ab83-47ed-a669-deeb71996548","Type":"ContainerDied","Data":"047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366"} Feb 19 03:21:01.509966 master-0 kubenswrapper[7776]: I0219 03:21:01.509135 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" event={"ID":"76470062-ab83-47ed-a669-deeb71996548","Type":"ContainerStarted","Data":"882c525babc52c3119968e9793962f24892225613582692392aa79601c39660e"} Feb 19 03:21:01.509966 master-0 kubenswrapper[7776]: I0219 03:21:01.509169 7776 scope.go:117] "RemoveContainer" containerID="a9877e6164fd70e4cefb580b5faf9495b5d88f56b0eabc9be1b0d949563be3bd" Feb 19 03:21:01.761476 master-0 kubenswrapper[7776]: I0219 03:21:01.761341 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:21:01.765091 master-0 kubenswrapper[7776]: I0219 03:21:01.765029 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:01.765091 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:01.765091 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:01.765091 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:01.765342 master-0 kubenswrapper[7776]: I0219 03:21:01.765119 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:02.179274 master-0 kubenswrapper[7776]: E0219 03:21:02.179212 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 19 03:21:02.521724 master-0 kubenswrapper[7776]: I0219 03:21:02.521376 7776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:21:02.521724 master-0 kubenswrapper[7776]: I0219 03:21:02.521405 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:21:02.763289 master-0 kubenswrapper[7776]: I0219 03:21:02.763226 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:02.763289 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:02.763289 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:02.763289 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:02.763701 master-0 kubenswrapper[7776]: I0219 03:21:02.763668 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:03.762921 master-0 kubenswrapper[7776]: I0219 03:21:03.762853 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:03.762921 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:03.762921 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 
03:21:03.762921 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:03.763493 master-0 kubenswrapper[7776]: I0219 03:21:03.762929 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:03.854221 master-0 kubenswrapper[7776]: I0219 03:21:03.854105 7776 status_manager.go:851] "Failed to get status for pod" podUID="4aef097d-bea5-404d-b26b-aed9142ddf14" pod="openshift-kube-apiserver/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Feb 19 03:21:04.765208 master-0 kubenswrapper[7776]: I0219 03:21:04.765015 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:04.765208 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:04.765208 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:04.765208 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:04.765208 master-0 kubenswrapper[7776]: I0219 03:21:04.765147 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:05.764303 master-0 kubenswrapper[7776]: I0219 03:21:05.764218 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:05.764303 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:05.764303 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:05.764303 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:05.764811 master-0 kubenswrapper[7776]: I0219 03:21:05.764322 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:06.690075 master-0 kubenswrapper[7776]: I0219 03:21:06.689888 7776 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:21:06.690891 master-0 kubenswrapper[7776]: I0219 03:21:06.690038 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:06.690891 master-0 kubenswrapper[7776]: I0219 03:21:06.690205 7776 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:21:06.691684 master-0 kubenswrapper[7776]: I0219 03:21:06.691628 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"c502bd434684e115c25e449379b45274b90007192ea4b6b1d2d7ae5fc1aa05da"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 19 03:21:06.691895 master-0 kubenswrapper[7776]: I0219 03:21:06.691823 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" containerID="cri-o://c502bd434684e115c25e449379b45274b90007192ea4b6b1d2d7ae5fc1aa05da" gracePeriod=30 Feb 19 03:21:06.711316 master-0 kubenswrapper[7776]: E0219 03:21:06.711190 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 03:21:06.711316 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443" Netns:"/var/run/netns/fcccfe72-31b6-477b-96bf-f2941873c73e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:21:06.711316 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:21:06.711316 master-0 kubenswrapper[7776]: > Feb 19 03:21:06.711637 master-0 kubenswrapper[7776]: E0219 03:21:06.711334 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 03:21:06.711637 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443" Netns:"/var/run/netns/fcccfe72-31b6-477b-96bf-f2941873c73e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:21:06.711637 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:21:06.711637 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:21:06.711637 master-0 kubenswrapper[7776]: E0219 03:21:06.711371 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:21:06.711637 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443" Netns:"/var/run/netns/fcccfe72-31b6-477b-96bf-f2941873c73e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:21:06.711637 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:21:06.711637 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:21:06.711637 master-0 kubenswrapper[7776]: E0219 03:21:06.711472 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-3-master-0_openshift-kube-apiserver(3fab5bbd-672c-4e18-9c1e-438e2360bc54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-3-master-0_openshift-kube-apiserver(3fab5bbd-672c-4e18-9c1e-438e2360bc54)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443\\\" Netns:\\\"/var/run/netns/fcccfe72-31b6-477b-96bf-f2941873c73e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-3-master-0" podUID="3fab5bbd-672c-4e18-9c1e-438e2360bc54" Feb 19 03:21:06.764774 master-0 kubenswrapper[7776]: I0219 03:21:06.764694 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:06.764774 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:06.764774 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:06.764774 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:06.765112 master-0 kubenswrapper[7776]: I0219 03:21:06.764786 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:07.564391 master-0 kubenswrapper[7776]: I0219 03:21:07.564295 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/1.log" Feb 19 03:21:07.566095 master-0 kubenswrapper[7776]: I0219 03:21:07.566046 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:21:07.566243 master-0 kubenswrapper[7776]: I0219 03:21:07.566095 7776 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" containerID="c502bd434684e115c25e449379b45274b90007192ea4b6b1d2d7ae5fc1aa05da" exitCode=255 Feb 19 03:21:07.566243 master-0 kubenswrapper[7776]: I0219 03:21:07.566162 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:21:07.566243 master-0 kubenswrapper[7776]: I0219 03:21:07.566206 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerDied","Data":"c502bd434684e115c25e449379b45274b90007192ea4b6b1d2d7ae5fc1aa05da"} Feb 19 03:21:07.566563 master-0 kubenswrapper[7776]: I0219 03:21:07.566290 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"f67912b31af7c897b035ef26f9512d1595c41efef43b76402ad20d563149cdd6"} Feb 19 03:21:07.566674 master-0 kubenswrapper[7776]: I0219 03:21:07.566607 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:21:07.567812 master-0 kubenswrapper[7776]: I0219 03:21:07.567179 7776 scope.go:117] "RemoveContainer" containerID="52f129c7009e6597cab7613e274a5e92bff18227b925d3ec2d217acbeb4c8d74" Feb 19 03:21:07.761669 master-0 kubenswrapper[7776]: I0219 03:21:07.761599 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:21:07.764076 master-0 kubenswrapper[7776]: I0219 03:21:07.764034 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:07.764076 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:07.764076 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:07.764076 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:07.764245 master-0 kubenswrapper[7776]: I0219 03:21:07.764089 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:08.065601 master-0 kubenswrapper[7776]: E0219 03:21:08.065516 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:21:08.580468 master-0 kubenswrapper[7776]: I0219 03:21:08.580373 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/1.log" Feb 19 03:21:08.582800 master-0 kubenswrapper[7776]: I0219 03:21:08.582742 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:21:08.764242 master-0 kubenswrapper[7776]: I0219 03:21:08.764134 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:08.764242 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:08.764242 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:08.764242 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:08.765672 master-0 kubenswrapper[7776]: I0219 03:21:08.764297 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:09.764016 master-0 kubenswrapper[7776]: I0219 03:21:09.763724 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:09.764016 master-0 kubenswrapper[7776]: [-]has-synced 
failed: reason withheld Feb 19 03:21:09.764016 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:09.764016 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:09.764016 master-0 kubenswrapper[7776]: I0219 03:21:09.763987 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:10.763996 master-0 kubenswrapper[7776]: I0219 03:21:10.763871 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:10.763996 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:10.763996 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:10.763996 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:10.765071 master-0 kubenswrapper[7776]: I0219 03:21:10.763991 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:11.763759 master-0 kubenswrapper[7776]: I0219 03:21:11.763674 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:11.763759 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:11.763759 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:11.763759 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:11.765198 master-0 kubenswrapper[7776]: I0219 03:21:11.764428 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:11.843220 master-0 kubenswrapper[7776]: I0219 03:21:11.843139 7776 scope.go:117] "RemoveContainer" containerID="954e89fd2a1c4166cbbe15a61374262fcd3983766230bd99b3ec85e7e56ecaff" Feb 19 03:21:11.843629 master-0 kubenswrapper[7776]: E0219 03:21:11.843572 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:21:12.763881 master-0 kubenswrapper[7776]: I0219 03:21:12.763803 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:12.763881 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:12.763881 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:12.763881 master-0 kubenswrapper[7776]: healthz check 
failed Feb 19 03:21:12.764198 master-0 kubenswrapper[7776]: I0219 03:21:12.763907 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:13.689305 master-0 kubenswrapper[7776]: I0219 03:21:13.689207 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:21:13.689305 master-0 kubenswrapper[7776]: I0219 03:21:13.689313 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:21:13.763727 master-0 kubenswrapper[7776]: I0219 03:21:13.763644 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:13.763727 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:13.763727 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:13.763727 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:13.764156 master-0 kubenswrapper[7776]: I0219 03:21:13.763753 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:14.764890 master-0 kubenswrapper[7776]: I0219 03:21:14.764737 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:14.764890 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:14.764890 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:14.764890 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:14.766159 master-0 kubenswrapper[7776]: I0219 03:21:14.766108 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:14.843550 master-0 kubenswrapper[7776]: I0219 03:21:14.843464 7776 scope.go:117] "RemoveContainer" containerID="b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5" Feb 19 03:21:14.843873 master-0 kubenswrapper[7776]: E0219 03:21:14.843786 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:21:15.764568 master-0 kubenswrapper[7776]: I0219 03:21:15.764521 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 19 03:21:15.764568 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:15.764568 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:15.764568 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:15.764970 master-0 kubenswrapper[7776]: I0219 03:21:15.764941 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:16.690239 master-0 kubenswrapper[7776]: I0219 03:21:16.690144 7776 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:21:16.690536 master-0 kubenswrapper[7776]: I0219 03:21:16.690252 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:16.764170 master-0 kubenswrapper[7776]: I0219 03:21:16.764036 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:16.764170 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:16.764170 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:16.764170 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:16.764560 master-0 kubenswrapper[7776]: I0219 03:21:16.764212 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:17.764256 master-0 kubenswrapper[7776]: I0219 03:21:17.764180 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:17.764256 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:17.764256 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:17.764256 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:17.764256 master-0 kubenswrapper[7776]: I0219 03:21:17.764270 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:18.764051 master-0 kubenswrapper[7776]: I0219 03:21:18.763996 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Feb 19 03:21:18.764051 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:18.764051 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:18.764051 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:18.764734 master-0 kubenswrapper[7776]: I0219 03:21:18.764079 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:18.891063 master-0 kubenswrapper[7776]: E0219 03:21:18.890652 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:21:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:21:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:21:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:21:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0dcba5d04f25f6e382ffecdd94057bd8a99cffb6a00a8c7da186e9871ae459ea\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:92f996986deaacc20f2d7929be6465ef80f234c7c73757735ab489489ad69464\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702667973},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:01d70013efcb6bd53533de62b00867982cc8cfd7ea2bcc920f1a89ec9a1e0a93\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3d25e25fd688987cf457312a70060e31c5091a30a7d4b691cf7e566c69fa51f4\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234172623},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2f02611c935b387581e1c3be693869fdf266797ea7c5bcb704c0b6e7d0a6f12f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:f92684229a0699b57eaf06ea192bcde396a4e401a7bf7726499b7edac566dac8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210130107},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f9
4e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816},{\\\"names\\\":[\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:19.763760 master-0 kubenswrapper[7776]: I0219 03:21:19.763697 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:19.763760 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:19.763760 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:19.763760 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:19.764041 master-0 kubenswrapper[7776]: I0219 03:21:19.763789 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:20.763805 master-0 kubenswrapper[7776]: I0219 03:21:20.763728 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:20.763805 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:20.763805 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:20.763805 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:20.764465 master-0 kubenswrapper[7776]: I0219 03:21:20.763836 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:21.765082 master-0 kubenswrapper[7776]: I0219 03:21:21.764995 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:21.765082 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:21.765082 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:21.765082 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:21.766074 master-0 kubenswrapper[7776]: I0219 03:21:21.765110 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:22.764408 master-0 kubenswrapper[7776]: I0219 03:21:22.764317 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:22.764408 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:22.764408 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:22.764408 master-0 kubenswrapper[7776]: healthz 
check failed Feb 19 03:21:22.764852 master-0 kubenswrapper[7776]: I0219 03:21:22.764428 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:23.763891 master-0 kubenswrapper[7776]: I0219 03:21:23.763808 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:23.763891 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:23.763891 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:23.763891 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:23.764553 master-0 kubenswrapper[7776]: I0219 03:21:23.763920 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:23.843141 master-0 kubenswrapper[7776]: I0219 03:21:23.843027 7776 scope.go:117] "RemoveContainer" containerID="954e89fd2a1c4166cbbe15a61374262fcd3983766230bd99b3ec85e7e56ecaff" Feb 19 03:21:23.843642 master-0 kubenswrapper[7776]: E0219 03:21:23.843343 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:21:24.763895 master-0 kubenswrapper[7776]: I0219 03:21:24.763750 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:24.763895 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:24.763895 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:24.763895 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:24.763895 master-0 kubenswrapper[7776]: I0219 03:21:24.763851 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:25.066808 master-0 kubenswrapper[7776]: E0219 03:21:25.066305 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:21:25.764159 master-0 kubenswrapper[7776]: I0219 03:21:25.764010 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:25.764159 
master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:25.764159 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:25.764159 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:25.765316 master-0 kubenswrapper[7776]: I0219 03:21:25.764368 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:26.689688 master-0 kubenswrapper[7776]: I0219 03:21:26.689609 7776 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:21:26.690197 master-0 kubenswrapper[7776]: I0219 03:21:26.690138 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:26.764697 master-0 kubenswrapper[7776]: I0219 03:21:26.764582 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:26.764697 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:26.764697 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:26.764697 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:26.765834 master-0 kubenswrapper[7776]: I0219 03:21:26.764734 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:27.730501 master-0 kubenswrapper[7776]: E0219 03:21:27.730287 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.1895878760b582f3 openshift-kube-controller-manager 11916 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:50eac3d8c63234f2a49e98044c0d4f67,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:17:23 +0000 UTC,LastTimestamp:2026-02-19 03:18:19.212957713 +0000 UTC m=+805.552642231,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:21:27.764875 master-0 kubenswrapper[7776]: I0219 03:21:27.764803 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:27.764875 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:27.764875 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:27.764875 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:27.765883 master-0 kubenswrapper[7776]: I0219 03:21:27.765844 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:27.843739 master-0 kubenswrapper[7776]: I0219 03:21:27.843664 7776 scope.go:117] "RemoveContainer" containerID="b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5" Feb 19 03:21:27.844126 master-0 kubenswrapper[7776]: E0219 03:21:27.844060 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:21:28.764921 master-0 kubenswrapper[7776]: I0219 03:21:28.764829 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:28.764921 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:28.764921 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:28.764921 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:28.764921 master-0 kubenswrapper[7776]: I0219 03:21:28.764925 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:28.892428 master-0 kubenswrapper[7776]: E0219 03:21:28.892329 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:29.764696 master-0 kubenswrapper[7776]: I0219 03:21:29.764654 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:29.764696 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:29.764696 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:29.764696 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:29.765557 master-0 kubenswrapper[7776]: I0219 03:21:29.764711 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:30.764124 master-0 kubenswrapper[7776]: 
I0219 03:21:30.764054 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:30.764124 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:30.764124 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:30.764124 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:30.764124 master-0 kubenswrapper[7776]: I0219 03:21:30.764125 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:31.764023 master-0 kubenswrapper[7776]: I0219 03:21:31.763924 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:31.764023 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:31.764023 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:31.764023 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:31.764924 master-0 kubenswrapper[7776]: I0219 03:21:31.764039 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:32.764227 master-0 kubenswrapper[7776]: I0219 03:21:32.764099 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:32.764227 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:32.764227 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:32.764227 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:32.764227 master-0 kubenswrapper[7776]: I0219 03:21:32.764215 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:33.765339 master-0 kubenswrapper[7776]: I0219 03:21:33.765087 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:33.765339 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:33.765339 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:33.765339 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:33.765339 master-0 kubenswrapper[7776]: I0219 03:21:33.765312 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:33.791230 master-0 
kubenswrapper[7776]: I0219 03:21:33.791174 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/1.log" Feb 19 03:21:33.792400 master-0 kubenswrapper[7776]: I0219 03:21:33.792362 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/0.log" Feb 19 03:21:33.792534 master-0 kubenswrapper[7776]: I0219 03:21:33.792411 7776 generic.go:334] "Generic (PLEG): container finished" podID="af5828ea-090f-4c8f-90e6-c4e405e69ec5" containerID="675b0788e605256106684c4e377b174ce97f9e7a35c1265d0f37c4603a7e545a" exitCode=1 Feb 19 03:21:33.792534 master-0 kubenswrapper[7776]: I0219 03:21:33.792455 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" event={"ID":"af5828ea-090f-4c8f-90e6-c4e405e69ec5","Type":"ContainerDied","Data":"675b0788e605256106684c4e377b174ce97f9e7a35c1265d0f37c4603a7e545a"} Feb 19 03:21:33.792534 master-0 kubenswrapper[7776]: I0219 03:21:33.792513 7776 scope.go:117] "RemoveContainer" containerID="c7efec73ecd5959e325f34dc1abcbd0a0ee696d09e18dbddaa6606e552d9257d" Feb 19 03:21:33.793407 master-0 kubenswrapper[7776]: I0219 03:21:33.793368 7776 scope.go:117] "RemoveContainer" containerID="675b0788e605256106684c4e377b174ce97f9e7a35c1265d0f37c4603a7e545a" Feb 19 03:21:33.793770 master-0 kubenswrapper[7776]: E0219 03:21:33.793722 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-9vgg7_openshift-machine-api(af5828ea-090f-4c8f-90e6-c4e405e69ec5)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" podUID="af5828ea-090f-4c8f-90e6-c4e405e69ec5" Feb 19 03:21:34.764588 master-0 kubenswrapper[7776]: I0219 03:21:34.764404 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:34.764588 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:34.764588 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:34.764588 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:34.764588 master-0 kubenswrapper[7776]: I0219 03:21:34.764497 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:34.800800 master-0 kubenswrapper[7776]: I0219 03:21:34.800736 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/1.log" Feb 19 03:21:34.842754 master-0 kubenswrapper[7776]: I0219 03:21:34.842689 7776 scope.go:117] "RemoveContainer" containerID="954e89fd2a1c4166cbbe15a61374262fcd3983766230bd99b3ec85e7e56ecaff" Feb 19 03:21:34.842980 master-0 kubenswrapper[7776]: E0219 03:21:34.842954 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:21:35.764332 master-0 kubenswrapper[7776]: I0219 03:21:35.764245 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:35.764332 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:35.764332 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:35.764332 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:35.764842 master-0 kubenswrapper[7776]: I0219 03:21:35.764356 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:36.523569 master-0 kubenswrapper[7776]: E0219 03:21:36.523466 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 19 03:21:36.689721 master-0 kubenswrapper[7776]: I0219 03:21:36.689621 7776 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:21:36.689721 master-0 kubenswrapper[7776]: I0219 03:21:36.689715 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:36.690025 master-0 kubenswrapper[7776]: I0219 03:21:36.689776 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:21:36.690736 master-0 kubenswrapper[7776]: I0219 03:21:36.690695 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"f67912b31af7c897b035ef26f9512d1595c41efef43b76402ad20d563149cdd6"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 19 03:21:36.690846 master-0 kubenswrapper[7776]: I0219 03:21:36.690784 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" containerID="cri-o://f67912b31af7c897b035ef26f9512d1595c41efef43b76402ad20d563149cdd6" gracePeriod=30 Feb 19 03:21:36.770836 master-0 
kubenswrapper[7776]: I0219 03:21:36.770715 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:36.770836 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:36.770836 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:36.770836 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:36.771221 master-0 kubenswrapper[7776]: I0219 03:21:36.770840 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:36.823838 master-0 kubenswrapper[7776]: I0219 03:21:36.823762 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/2.log" Feb 19 03:21:36.824691 master-0 kubenswrapper[7776]: I0219 03:21:36.824634 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/1.log" Feb 19 03:21:36.827384 master-0 kubenswrapper[7776]: I0219 03:21:36.827343 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:21:36.827519 master-0 kubenswrapper[7776]: I0219 03:21:36.827439 7776 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" containerID="f67912b31af7c897b035ef26f9512d1595c41efef43b76402ad20d563149cdd6" exitCode=255 Feb 19 03:21:36.827519 master-0 kubenswrapper[7776]: I0219 03:21:36.827494 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerDied","Data":"f67912b31af7c897b035ef26f9512d1595c41efef43b76402ad20d563149cdd6"} Feb 19 03:21:36.827613 master-0 kubenswrapper[7776]: I0219 03:21:36.827567 7776 scope.go:117] "RemoveContainer" containerID="c502bd434684e115c25e449379b45274b90007192ea4b6b1d2d7ae5fc1aa05da" Feb 19 03:21:37.764825 master-0 kubenswrapper[7776]: I0219 03:21:37.764730 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:37.764825 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:37.764825 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:37.764825 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:37.765866 master-0 kubenswrapper[7776]: I0219 03:21:37.764886 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:37.844719 master-0 kubenswrapper[7776]: I0219 03:21:37.844553 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/2.log" Feb 19 03:21:37.846642 master-0 kubenswrapper[7776]: I0219 03:21:37.846604 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:21:37.851014 master-0 kubenswrapper[7776]: I0219 03:21:37.850379 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f"} Feb 19 03:21:37.852678 master-0 kubenswrapper[7776]: I0219 03:21:37.852613 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"14775efdcd2d21cfa5380cda6110ff7f11195c8d583c1e8fdfc52bf29df9ae57"} Feb 19 03:21:37.852770 master-0 kubenswrapper[7776]: I0219 03:21:37.852685 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"16c3b004c40d76193f576d53169fed6e918160d971015a8fa3ff49332f28fdc1"} Feb 19 03:21:37.852770 master-0 kubenswrapper[7776]: I0219 03:21:37.852700 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"02a9fcc4ca7dc26983cfaa637ce8ae712974956ca9517abc25074ce302bff7b2"} Feb 19 03:21:37.852770 master-0 kubenswrapper[7776]: I0219 03:21:37.852714 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"f5e6e05e3e1d9ed0d5a9bb682a401139471a5c8f7de416f435b323b01ece0b32"} Feb 19 03:21:38.764084 master-0 kubenswrapper[7776]: I0219 03:21:38.763998 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:38.764084 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:38.764084 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:38.764084 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:38.764084 master-0 kubenswrapper[7776]: I0219 03:21:38.764082 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:38.843039 master-0 kubenswrapper[7776]: I0219 03:21:38.842958 7776 scope.go:117] "RemoveContainer" containerID="b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5" Feb 19 03:21:38.843745 master-0 kubenswrapper[7776]: E0219 03:21:38.843237 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" 
podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:21:38.865950 master-0 kubenswrapper[7776]: I0219 03:21:38.865821 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"3b0cc332e01e2fa427a1ff2e5c32bb67b12dd22764844c8fc4d63e3826426814"} Feb 19 03:21:38.866547 master-0 kubenswrapper[7776]: I0219 03:21:38.866484 7776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:21:38.866547 master-0 kubenswrapper[7776]: I0219 03:21:38.866539 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:21:38.893272 master-0 kubenswrapper[7776]: E0219 03:21:38.893124 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:39.764119 master-0 kubenswrapper[7776]: I0219 03:21:39.764022 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:39.764119 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:39.764119 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:39.764119 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:39.764510 master-0 kubenswrapper[7776]: I0219 03:21:39.764136 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:40.763806 master-0 kubenswrapper[7776]: I0219 03:21:40.763683 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:40.763806 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:40.763806 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:40.763806 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:40.764556 master-0 kubenswrapper[7776]: I0219 03:21:40.763799 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:41.764139 master-0 kubenswrapper[7776]: I0219 03:21:41.764053 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:41.764139 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:41.764139 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:41.764139 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:41.764139 master-0 kubenswrapper[7776]: I0219 03:21:41.764148 7776 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:42.068483 master-0 kubenswrapper[7776]: E0219 03:21:42.068241 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:21:42.764004 master-0 kubenswrapper[7776]: I0219 03:21:42.763875 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:42.764004 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:42.764004 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:42.764004 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:42.764840 master-0 kubenswrapper[7776]: I0219 03:21:42.764034 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:42.864760 master-0 kubenswrapper[7776]: I0219 03:21:42.864674 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 19 03:21:43.689545 master-0 kubenswrapper[7776]: I0219 03:21:43.689468 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:21:43.690107 master-0 kubenswrapper[7776]: I0219 03:21:43.690061 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:21:43.765479 master-0 kubenswrapper[7776]: I0219 03:21:43.765382 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:43.765479 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:43.765479 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:43.765479 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:43.766153 master-0 kubenswrapper[7776]: I0219 03:21:43.765498 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:44.763926 master-0 kubenswrapper[7776]: I0219 03:21:44.763772 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:44.763926 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:44.763926 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:44.763926 master-0 kubenswrapper[7776]: healthz 
check failed Feb 19 03:21:44.763926 master-0 kubenswrapper[7776]: I0219 03:21:44.763853 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:44.833212 master-0 kubenswrapper[7776]: I0219 03:21:44.833117 7776 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:21:44.833212 master-0 kubenswrapper[7776]: I0219 03:21:44.833213 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:44.834501 master-0 kubenswrapper[7776]: I0219 03:21:44.833277 7776 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:21:44.834501 master-0 kubenswrapper[7776]: I0219 03:21:44.833329 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:45.764042 master-0 kubenswrapper[7776]: I0219 03:21:45.763952 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:45.764042 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:45.764042 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:45.764042 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:45.764437 master-0 kubenswrapper[7776]: I0219 03:21:45.764056 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:45.843049 master-0 kubenswrapper[7776]: I0219 03:21:45.842957 7776 scope.go:117] "RemoveContainer" containerID="675b0788e605256106684c4e377b174ce97f9e7a35c1265d0f37c4603a7e545a" Feb 19 03:21:46.690149 master-0 kubenswrapper[7776]: I0219 03:21:46.690028 7776 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Feb 19 03:21:46.690149 master-0 kubenswrapper[7776]: I0219 03:21:46.690134 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:46.763692 master-0 kubenswrapper[7776]: I0219 03:21:46.763615 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:46.763692 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:46.763692 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:46.763692 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:46.763692 master-0 kubenswrapper[7776]: I0219 03:21:46.763680 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:46.937014 master-0 kubenswrapper[7776]: I0219 03:21:46.936908 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/1.log" Feb 19 03:21:46.938090 master-0 kubenswrapper[7776]: I0219 03:21:46.937781 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" event={"ID":"af5828ea-090f-4c8f-90e6-c4e405e69ec5","Type":"ContainerStarted","Data":"0f6c57986aa44545930dd1ab3e3d24869ff284140d471569cc35e25cea0099c1"} Feb 19 03:21:47.765179 master-0 kubenswrapper[7776]: I0219 03:21:47.765065 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:47.765179 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:47.765179 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:47.765179 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:47.765618 master-0 kubenswrapper[7776]: I0219 03:21:47.765211 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:47.842626 master-0 kubenswrapper[7776]: I0219 03:21:47.842555 7776 scope.go:117] "RemoveContainer" containerID="954e89fd2a1c4166cbbe15a61374262fcd3983766230bd99b3ec85e7e56ecaff" Feb 19 03:21:47.864820 master-0 kubenswrapper[7776]: I0219 03:21:47.864751 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 19 03:21:47.898344 master-0 kubenswrapper[7776]: I0219 03:21:47.898284 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 19 03:21:48.763759 master-0 kubenswrapper[7776]: I0219 03:21:48.763659 7776 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:48.763759 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:48.763759 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:48.763759 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:48.764608 master-0 kubenswrapper[7776]: I0219 03:21:48.763793 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:48.893897 master-0 kubenswrapper[7776]: E0219 03:21:48.893807 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:48.961955 master-0 kubenswrapper[7776]: I0219 03:21:48.961853 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/3.log" Feb 19 03:21:48.961955 master-0 kubenswrapper[7776]: I0219 03:21:48.961954 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerStarted","Data":"7451979a94f80aee54e0563ac7f58d005b0131fa01c9b6d07669dbdfc4734cf2"} Feb 19 03:21:49.765663 master-0 kubenswrapper[7776]: I0219 03:21:49.765536 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:49.765663 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:49.765663 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:49.765663 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:49.766725 master-0 kubenswrapper[7776]: I0219 03:21:49.765698 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:50.764418 master-0 kubenswrapper[7776]: I0219 03:21:50.764364 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:50.764418 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:50.764418 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:50.764418 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:50.764876 master-0 kubenswrapper[7776]: I0219 03:21:50.764438 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 19 03:21:50.843273 master-0 kubenswrapper[7776]: I0219 03:21:50.843173 7776 scope.go:117] "RemoveContainer" containerID="b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5" Feb 19 03:21:50.843758 master-0 kubenswrapper[7776]: E0219 03:21:50.843552 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:21:51.763835 master-0 kubenswrapper[7776]: I0219 03:21:51.763710 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:51.763835 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:51.763835 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:51.763835 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:51.764316 master-0 kubenswrapper[7776]: I0219 03:21:51.763860 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:52.764900 master-0 kubenswrapper[7776]: I0219 03:21:52.764787 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:52.764900 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:52.764900 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:52.764900 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:52.765618 master-0 kubenswrapper[7776]: I0219 03:21:52.764957 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:52.891038 master-0 kubenswrapper[7776]: I0219 03:21:52.890961 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 19 03:21:53.764274 master-0 kubenswrapper[7776]: I0219 03:21:53.764170 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:53.764274 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:53.764274 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:53.764274 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:53.764715 master-0 kubenswrapper[7776]: I0219 03:21:53.764303 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 19 03:21:53.856932 master-0 kubenswrapper[7776]: I0219 03:21:53.856870 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:21:54.764547 master-0 kubenswrapper[7776]: I0219 03:21:54.764365 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:54.764547 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:54.764547 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:54.764547 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:54.764547 master-0 kubenswrapper[7776]: I0219 03:21:54.764469 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:55.764231 master-0 kubenswrapper[7776]: I0219 03:21:55.764169 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:55.764231 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:55.764231 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:55.764231 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:55.765234 master-0 kubenswrapper[7776]: I0219 03:21:55.765192 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:56.689790 master-0 kubenswrapper[7776]: I0219 03:21:56.689687 7776 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:21:56.690291 master-0 kubenswrapper[7776]: I0219 03:21:56.689828 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:56.764499 master-0 kubenswrapper[7776]: I0219 03:21:56.764424 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:56.764499 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:56.764499 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:56.764499 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:56.765461 master-0 kubenswrapper[7776]: I0219 03:21:56.764518 7776 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:57.765040 master-0 kubenswrapper[7776]: I0219 03:21:57.764949 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:57.765040 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:57.765040 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:57.765040 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:57.765823 master-0 kubenswrapper[7776]: I0219 03:21:57.765078 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:58.765671 master-0 kubenswrapper[7776]: I0219 03:21:58.765582 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:58.765671 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:58.765671 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:58.765671 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:58.766919 master-0 kubenswrapper[7776]: I0219 03:21:58.765695 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:21:58.894715 master-0 kubenswrapper[7776]: E0219 03:21:58.894647 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:21:58.894715 master-0 kubenswrapper[7776]: E0219 03:21:58.894707 7776 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 03:21:59.070063 master-0 kubenswrapper[7776]: E0219 03:21:59.069803 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:21:59.764655 master-0 kubenswrapper[7776]: I0219 03:21:59.764580 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:21:59.764655 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:21:59.764655 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:21:59.764655 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:21:59.765159 master-0 kubenswrapper[7776]: I0219 
03:21:59.764659 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:00.763338 master-0 kubenswrapper[7776]: I0219 03:22:00.763292 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:00.763338 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:00.763338 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:00.763338 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:00.764053 master-0 kubenswrapper[7776]: I0219 03:22:00.764026 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:01.736894 master-0 kubenswrapper[7776]: E0219 03:22:01.736744 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ingress-operator-6569778c84-qcd49.1895874ad965c6f0 openshift-ingress-operator 12429 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-operator,Name:ingress-operator-6569778c84-qcd49,UID:9ff96ce8-6427-4a42-afa6-8b8bc778f094,APIVersion:v1,ResourceVersion:3479,FieldPath:spec.containers{ingress-operator},},Reason:BackOff,Message:Back-off restarting failed container ingress-operator in pod ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:13:03 +0000 UTC,LastTimestamp:2026-02-19 03:18:31.84314467 +0000 UTC m=+818.182829218,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:22:01.764166 master-0 kubenswrapper[7776]: I0219 03:22:01.764095 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:01.764166 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:01.764166 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:01.764166 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:01.764852 master-0 kubenswrapper[7776]: I0219 03:22:01.764169 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:02.764061 master-0 kubenswrapper[7776]: I0219 03:22:02.763935 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:02.764061 master-0 
kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:02.764061 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:02.764061 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:02.764622 master-0 kubenswrapper[7776]: I0219 03:22:02.764145 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:03.764414 master-0 kubenswrapper[7776]: I0219 03:22:03.764315 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:03.764414 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:03.764414 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:03.764414 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:03.765133 master-0 kubenswrapper[7776]: I0219 03:22:03.764420 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:03.856551 master-0 kubenswrapper[7776]: I0219 03:22:03.856460 7776 status_manager.go:851] "Failed to get status for pod" podUID="a52be87c-e707-4269-96da-537708d52b64" pod="openshift-network-node-identity/network-node-identity-rm5jg" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-node-identity-rm5jg)" Feb 19 03:22:04.764939 master-0 kubenswrapper[7776]: I0219 03:22:04.764767 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:04.764939 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:04.764939 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:04.764939 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:04.764939 master-0 kubenswrapper[7776]: I0219 03:22:04.764876 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:05.764915 master-0 kubenswrapper[7776]: I0219 03:22:05.764788 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:05.764915 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:05.764915 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:05.764915 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:05.764915 master-0 kubenswrapper[7776]: I0219 03:22:05.764875 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 19 03:22:05.844188 master-0 kubenswrapper[7776]: I0219 03:22:05.844089 7776 scope.go:117] "RemoveContainer" containerID="b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5" Feb 19 03:22:05.844578 master-0 kubenswrapper[7776]: E0219 03:22:05.844473 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" podUID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" Feb 19 03:22:06.689641 master-0 kubenswrapper[7776]: I0219 03:22:06.689507 7776 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:22:06.689641 master-0 kubenswrapper[7776]: I0219 03:22:06.689620 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:22:06.690084 master-0 kubenswrapper[7776]: I0219 03:22:06.689686 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:22:06.690718 master-0 kubenswrapper[7776]: I0219 03:22:06.690650 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 19 03:22:06.690862 master-0 kubenswrapper[7776]: I0219 03:22:06.690810 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" containerID="cri-o://34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" gracePeriod=30 Feb 19 03:22:06.764399 master-0 kubenswrapper[7776]: I0219 03:22:06.764300 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:06.764399 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:06.764399 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:06.764399 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:06.764876 master-0 kubenswrapper[7776]: I0219 03:22:06.764424 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:06.812136 master-0 kubenswrapper[7776]: E0219 03:22:06.812067 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" Feb 19 03:22:07.132079 master-0 kubenswrapper[7776]: I0219 03:22:07.132004 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/3.log" Feb 19 03:22:07.132846 master-0 kubenswrapper[7776]: I0219 03:22:07.132793 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/2.log" Feb 19 03:22:07.135692 master-0 kubenswrapper[7776]: I0219 03:22:07.135616 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:22:07.135867 master-0 kubenswrapper[7776]: I0219 03:22:07.135715 7776 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" exitCode=255 Feb 19 03:22:07.135867 master-0 kubenswrapper[7776]: I0219 03:22:07.135768 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerDied","Data":"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f"} Feb 19 03:22:07.135867 master-0 kubenswrapper[7776]: I0219 03:22:07.135830 7776 scope.go:117] "RemoveContainer" containerID="f67912b31af7c897b035ef26f9512d1595c41efef43b76402ad20d563149cdd6" Feb 19 03:22:07.137287 master-0 kubenswrapper[7776]: I0219 03:22:07.137193 7776 scope.go:117] "RemoveContainer" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" Feb 19 03:22:07.137836 master-0 kubenswrapper[7776]: E0219 03:22:07.137748 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" Feb 19 03:22:07.764575 master-0 kubenswrapper[7776]: I0219 03:22:07.764465 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:07.764575 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:07.764575 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:07.764575 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:07.765044 master-0 kubenswrapper[7776]: I0219 03:22:07.764601 7776 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:08.145372 master-0 kubenswrapper[7776]: I0219 03:22:08.145246 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/3.log" Feb 19 03:22:08.147808 master-0 kubenswrapper[7776]: I0219 03:22:08.147396 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: E0219 03:22:08.320961 7776 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f" Netns:"/var/run/netns/3c57aed0-deed-4227-b1de-2589fb5c0eeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: > Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: E0219 03:22:08.321045 7776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): 
CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f" Netns:"/var/run/netns/3c57aed0-deed-4227-b1de-2589fb5c0eeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: E0219 03:22:08.321073 7776 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f" Netns:"/var/run/netns/3c57aed0-deed-4227-b1de-2589fb5c0eeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 03:22:08.321161 master-0 kubenswrapper[7776]: > pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:22:08.321967 master-0 kubenswrapper[7776]: E0219 03:22:08.321160 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-3-master-0_openshift-kube-apiserver(3fab5bbd-672c-4e18-9c1e-438e2360bc54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-3-master-0_openshift-kube-apiserver(3fab5bbd-672c-4e18-9c1e-438e2360bc54)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f\\\" Netns:\\\"/var/run/netns/3c57aed0-deed-4227-b1de-2589fb5c0eeb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-3-master-0" podUID="3fab5bbd-672c-4e18-9c1e-438e2360bc54" Feb 19 03:22:08.763944 master-0 kubenswrapper[7776]: I0219 03:22:08.763874 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:08.763944 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:08.763944 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:08.763944 master-0 
kubenswrapper[7776]: healthz check failed Feb 19 03:22:08.764206 master-0 kubenswrapper[7776]: I0219 03:22:08.763983 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:09.154733 master-0 kubenswrapper[7776]: I0219 03:22:09.154644 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:22:09.155485 master-0 kubenswrapper[7776]: I0219 03:22:09.155309 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:22:09.764849 master-0 kubenswrapper[7776]: I0219 03:22:09.764743 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:09.764849 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:09.764849 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:09.764849 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:09.765510 master-0 kubenswrapper[7776]: I0219 03:22:09.764851 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:10.764317 master-0 kubenswrapper[7776]: I0219 03:22:10.764157 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:10.764317 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:10.764317 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:10.764317 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:10.765395 master-0 kubenswrapper[7776]: I0219 03:22:10.764364 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:11.764369 master-0 kubenswrapper[7776]: I0219 03:22:11.764286 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:11.764369 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:11.764369 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:11.764369 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:11.765341 master-0 kubenswrapper[7776]: I0219 03:22:11.764381 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:12.764876 master-0 kubenswrapper[7776]: I0219 03:22:12.764791 7776 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:12.764876 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:12.764876 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:12.764876 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:12.765941 master-0 kubenswrapper[7776]: I0219 03:22:12.764911 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:12.869071 master-0 kubenswrapper[7776]: E0219 03:22:12.869004 7776 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 19 03:22:13.198502 master-0 kubenswrapper[7776]: I0219 03:22:13.198446 7776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:22:13.198502 master-0 kubenswrapper[7776]: I0219 03:22:13.198478 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:22:13.688406 master-0 kubenswrapper[7776]: I0219 03:22:13.688341 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:22:13.689012 master-0 kubenswrapper[7776]: I0219 03:22:13.688858 7776 scope.go:117] "RemoveContainer" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" Feb 19 03:22:13.689123 master-0 kubenswrapper[7776]: E0219 03:22:13.689077 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" Feb 19 03:22:13.765249 master-0 kubenswrapper[7776]: I0219 03:22:13.765171 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:13.765249 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:13.765249 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:13.765249 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:13.766219 master-0 kubenswrapper[7776]: I0219 03:22:13.765303 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:14.764939 master-0 kubenswrapper[7776]: I0219 03:22:14.764729 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:14.764939 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:14.764939 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:14.764939 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:14.764939 master-0 kubenswrapper[7776]: I0219 03:22:14.764855 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:15.764164 master-0 kubenswrapper[7776]: I0219 03:22:15.764104 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:15.764164 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:15.764164 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:15.764164 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:15.764164 master-0 kubenswrapper[7776]: I0219 03:22:15.764169 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:16.071812 master-0 kubenswrapper[7776]: E0219 03:22:16.071658 7776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 19 03:22:16.764466 master-0 kubenswrapper[7776]: I0219 03:22:16.764360 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:16.764466 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:16.764466 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:16.764466 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:16.764466 master-0 kubenswrapper[7776]: I0219 03:22:16.764443 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:17.764750 master-0 kubenswrapper[7776]: I0219 03:22:17.764680 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:17.764750 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:17.764750 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:17.764750 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:17.764750 master-0 kubenswrapper[7776]: I0219 03:22:17.764744 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" 
podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:17.843524 master-0 kubenswrapper[7776]: I0219 03:22:17.843401 7776 scope.go:117] "RemoveContainer" containerID="b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5" Feb 19 03:22:18.257550 master-0 kubenswrapper[7776]: I0219 03:22:18.257478 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/4.log" Feb 19 03:22:18.258531 master-0 kubenswrapper[7776]: I0219 03:22:18.258471 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/3.log" Feb 19 03:22:18.258637 master-0 kubenswrapper[7776]: I0219 03:22:18.258552 7776 generic.go:334] "Generic (PLEG): container finished" podID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" containerID="7451979a94f80aee54e0563ac7f58d005b0131fa01c9b6d07669dbdfc4734cf2" exitCode=1 Feb 19 03:22:18.258711 master-0 kubenswrapper[7776]: I0219 03:22:18.258626 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerDied","Data":"7451979a94f80aee54e0563ac7f58d005b0131fa01c9b6d07669dbdfc4734cf2"} Feb 19 03:22:18.258798 master-0 kubenswrapper[7776]: I0219 03:22:18.258714 7776 scope.go:117] "RemoveContainer" containerID="954e89fd2a1c4166cbbe15a61374262fcd3983766230bd99b3ec85e7e56ecaff" Feb 19 03:22:18.259705 master-0 kubenswrapper[7776]: I0219 03:22:18.259658 7776 scope.go:117] "RemoveContainer" containerID="7451979a94f80aee54e0563ac7f58d005b0131fa01c9b6d07669dbdfc4734cf2" Feb 19 03:22:18.260075 master-0 kubenswrapper[7776]: E0219 03:22:18.260021 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:22:18.261212 master-0 kubenswrapper[7776]: I0219 03:22:18.261173 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/4.log" Feb 19 03:22:18.266659 master-0 kubenswrapper[7776]: I0219 03:22:18.266610 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" event={"ID":"9ff96ce8-6427-4a42-afa6-8b8bc778f094","Type":"ContainerStarted","Data":"202f2a55e182ade47046fee46a05704dddbce9dad7af7ec2fd12bcd73d8fa6a7"} Feb 19 03:22:18.763136 master-0 kubenswrapper[7776]: I0219 03:22:18.763064 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:18.763136 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:18.763136 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:18.763136 
master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:18.763136 master-0 kubenswrapper[7776]: I0219 03:22:18.763127 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:18.991833 master-0 kubenswrapper[7776]: E0219 03:22:18.991417 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:22:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:22:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:22:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:22:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0dcba5d04f25f6e382ffecdd94057bd8a99cffb6a00a8c7da186e9871ae459ea\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:92f996986deaacc20f2d7929be6465ef80f234c7c73757735ab489489ad69464\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702667973},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:01d70013efcb6bd53533de62b00867982cc8cfd7ea2bcc920f1a89ec9a1e0a93\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3d25e25fd688987cf457312a70060e31c5091a30a7d4b691cf7e566c69fa51f4\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234172623},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2f02611c935b387581e1c3be693869fdf266797ea7c5bcb704c0b6e7d0a6f12f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:f92684229a0699b57eaf06ea192bcde396a4e401a7bf7726499b7edac566dac8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210130107},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\
\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179}]}}\" for node \"master-0\": Patch 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:22:19.273046 master-0 kubenswrapper[7776]: I0219 03:22:19.272990 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/4.log" Feb 19 03:22:19.764182 master-0 kubenswrapper[7776]: I0219 03:22:19.764109 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:19.764182 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:19.764182 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:19.764182 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:19.764577 master-0 kubenswrapper[7776]: I0219 03:22:19.764187 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:20.763942 master-0 kubenswrapper[7776]: I0219 03:22:20.763871 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:20.763942 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:20.763942 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:20.763942 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:20.763942 master-0 kubenswrapper[7776]: I0219 03:22:20.763936 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:21.764466 master-0 kubenswrapper[7776]: I0219 03:22:21.764338 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:21.764466 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:21.764466 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:21.764466 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:21.765485 master-0 kubenswrapper[7776]: I0219 03:22:21.764471 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:22.764684 master-0 kubenswrapper[7776]: I0219 03:22:22.764602 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:22.764684 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:22.764684 
master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:22.764684 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:22.765373 master-0 kubenswrapper[7776]: I0219 03:22:22.764709 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:23.764122 master-0 kubenswrapper[7776]: I0219 03:22:23.763966 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:23.764122 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:23.764122 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:23.764122 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:23.764122 master-0 kubenswrapper[7776]: I0219 03:22:23.764082 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:23.842752 master-0 kubenswrapper[7776]: I0219 03:22:23.842697 7776 scope.go:117] "RemoveContainer" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" Feb 19 03:22:23.843149 master-0 kubenswrapper[7776]: E0219 03:22:23.842964 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" Feb 19 03:22:23.893438 master-0 kubenswrapper[7776]: I0219 03:22:23.893360 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:23.893544 master-0 kubenswrapper[7776]: I0219 03:22:23.893462 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:23.921228 master-0 kubenswrapper[7776]: I0219 03:22:23.921055 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:23.921228 master-0 kubenswrapper[7776]: I0219 03:22:23.921157 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" 
output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:24.317104 master-0 kubenswrapper[7776]: I0219 03:22:24.316928 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/3.log" Feb 19 03:22:24.317104 master-0 kubenswrapper[7776]: I0219 03:22:24.316981 7776 generic.go:334] "Generic (PLEG): container finished" podID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerID="c545cf58bc696341c026f65428a1c9e4ca4d12c0673d4c492e30d1f60df08f53" exitCode=0 Feb 19 03:22:24.317104 master-0 kubenswrapper[7776]: I0219 03:22:24.317037 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerDied","Data":"c545cf58bc696341c026f65428a1c9e4ca4d12c0673d4c492e30d1f60df08f53"} Feb 19 03:22:24.317104 master-0 kubenswrapper[7776]: I0219 03:22:24.317071 7776 scope.go:117] "RemoveContainer" containerID="b74e1ef658deba9054cacd4e4b2f892ff9bc29e9e78ce49be09ab91b8d5e8936" Feb 19 03:22:24.318150 master-0 kubenswrapper[7776]: I0219 03:22:24.318077 7776 scope.go:117] "RemoveContainer" containerID="c545cf58bc696341c026f65428a1c9e4ca4d12c0673d4c492e30d1f60df08f53" Feb 19 03:22:24.320137 master-0 kubenswrapper[7776]: I0219 03:22:24.319650 7776 generic.go:334] "Generic (PLEG): container finished" podID="a59746bb-7d76-4fd7-8323-5b92be63afb9" containerID="075c2f17f8c40de4ef5a43e9679ffb1112b88d0d2cd16e8c3a34569ded3b80e6" exitCode=0 Feb 19 03:22:24.320137 master-0 kubenswrapper[7776]: I0219 03:22:24.319874 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" event={"ID":"a59746bb-7d76-4fd7-8323-5b92be63afb9","Type":"ContainerDied","Data":"075c2f17f8c40de4ef5a43e9679ffb1112b88d0d2cd16e8c3a34569ded3b80e6"} Feb 19 03:22:24.320556 master-0 kubenswrapper[7776]: I0219 03:22:24.320502 7776 scope.go:117] "RemoveContainer" containerID="075c2f17f8c40de4ef5a43e9679ffb1112b88d0d2cd16e8c3a34569ded3b80e6" Feb 19 03:22:24.324786 master-0 kubenswrapper[7776]: I0219 03:22:24.324736 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/2.log" Feb 19 03:22:24.325550 master-0 kubenswrapper[7776]: I0219 03:22:24.325183 7776 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerID="2a9ccd68c71b55517a0af025c793f340d5a13b8dd01aa9526d809fbaf1a82b89" exitCode=0 Feb 19 03:22:24.325550 master-0 kubenswrapper[7776]: I0219 03:22:24.325236 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerDied","Data":"2a9ccd68c71b55517a0af025c793f340d5a13b8dd01aa9526d809fbaf1a82b89"} Feb 19 03:22:24.325868 master-0 kubenswrapper[7776]: I0219 03:22:24.325821 7776 scope.go:117] "RemoveContainer" containerID="2a9ccd68c71b55517a0af025c793f340d5a13b8dd01aa9526d809fbaf1a82b89" Feb 19 03:22:24.329458 master-0 kubenswrapper[7776]: I0219 03:22:24.329408 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/2.log" Feb 19 03:22:24.329458 
master-0 kubenswrapper[7776]: I0219 03:22:24.329452 7776 generic.go:334] "Generic (PLEG): container finished" podID="3edc7410-417a-4e55-9276-ac271fd52297" containerID="6a5db57d3cdfa9709ab008271a7de8b76cb4f5beeb18f426e1c635fff0d68431" exitCode=0 Feb 19 03:22:24.329783 master-0 kubenswrapper[7776]: I0219 03:22:24.329508 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerDied","Data":"6a5db57d3cdfa9709ab008271a7de8b76cb4f5beeb18f426e1c635fff0d68431"} Feb 19 03:22:24.329902 master-0 kubenswrapper[7776]: I0219 03:22:24.329827 7776 scope.go:117] "RemoveContainer" containerID="6a5db57d3cdfa9709ab008271a7de8b76cb4f5beeb18f426e1c635fff0d68431" Feb 19 03:22:24.332387 master-0 kubenswrapper[7776]: I0219 03:22:24.332282 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/2.log" Feb 19 03:22:24.332387 master-0 kubenswrapper[7776]: I0219 03:22:24.332316 7776 generic.go:334] "Generic (PLEG): container finished" podID="4714ef51-2d24-4938-8c58-80c1485a368b" containerID="49ac40cd49fe9f544ea18cf9db242f3b1d372ceb484dc7cc80e9da742f93d130" exitCode=0 Feb 19 03:22:24.332387 master-0 kubenswrapper[7776]: I0219 03:22:24.332359 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerDied","Data":"49ac40cd49fe9f544ea18cf9db242f3b1d372ceb484dc7cc80e9da742f93d130"} Feb 19 03:22:24.332650 master-0 kubenswrapper[7776]: I0219 03:22:24.332634 7776 scope.go:117] "RemoveContainer" containerID="49ac40cd49fe9f544ea18cf9db242f3b1d372ceb484dc7cc80e9da742f93d130" Feb 19 03:22:24.335221 master-0 kubenswrapper[7776]: I0219 03:22:24.335164 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/1.log" Feb 19 03:22:24.335787 master-0 kubenswrapper[7776]: I0219 03:22:24.335742 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/0.log" Feb 19 03:22:24.336161 master-0 kubenswrapper[7776]: I0219 03:22:24.336120 7776 generic.go:334] "Generic (PLEG): container finished" podID="98ac5423-b231-44e5-9545-424d635ed6ee" containerID="4eaad01f93ee8b4305631434a093be13923a43fc42e41b75e5ee71770a4807d1" exitCode=1 Feb 19 03:22:24.336493 master-0 kubenswrapper[7776]: I0219 03:22:24.336175 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" event={"ID":"98ac5423-b231-44e5-9545-424d635ed6ee","Type":"ContainerDied","Data":"4eaad01f93ee8b4305631434a093be13923a43fc42e41b75e5ee71770a4807d1"} Feb 19 03:22:24.336749 master-0 kubenswrapper[7776]: I0219 03:22:24.336686 7776 scope.go:117] "RemoveContainer" containerID="4eaad01f93ee8b4305631434a093be13923a43fc42e41b75e5ee71770a4807d1" Feb 19 03:22:24.339226 master-0 kubenswrapper[7776]: I0219 03:22:24.339067 7776 generic.go:334] "Generic (PLEG): container finished" podID="2b9d54aa-5f71-4a82-8e71-401ed3083a13" containerID="84d662dd4fdd1383970ef08334843ef9932b238a72433235bfdec45dfc41643e" exitCode=0 Feb 19 
03:22:24.339226 master-0 kubenswrapper[7776]: I0219 03:22:24.339106 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" event={"ID":"2b9d54aa-5f71-4a82-8e71-401ed3083a13","Type":"ContainerDied","Data":"84d662dd4fdd1383970ef08334843ef9932b238a72433235bfdec45dfc41643e"} Feb 19 03:22:24.340713 master-0 kubenswrapper[7776]: I0219 03:22:24.340673 7776 scope.go:117] "RemoveContainer" containerID="84d662dd4fdd1383970ef08334843ef9932b238a72433235bfdec45dfc41643e" Feb 19 03:22:24.352803 master-0 kubenswrapper[7776]: I0219 03:22:24.352747 7776 scope.go:117] "RemoveContainer" containerID="757e9a0ca78b5c9be8e7d397d2406ec6f854bb73586e71bec0887198a2e450f2" Feb 19 03:22:24.462769 master-0 kubenswrapper[7776]: I0219 03:22:24.462732 7776 scope.go:117] "RemoveContainer" containerID="4dafdbf16e4e12628e1dc265ab0c8607f980c06cb5f19358b6fbca76bb67b579" Feb 19 03:22:24.500311 master-0 kubenswrapper[7776]: I0219 03:22:24.500244 7776 scope.go:117] "RemoveContainer" containerID="40f21b66295146208ac6883b550126dd464dc59801ea5eec8001be9ddf550599" Feb 19 03:22:24.581436 master-0 kubenswrapper[7776]: I0219 03:22:24.581404 7776 scope.go:117] "RemoveContainer" containerID="61d4c7db9949cabb346e6b5c6f267c3cd30095b418d6916ce487053c09f5bbd9" Feb 19 03:22:24.623715 master-0 kubenswrapper[7776]: I0219 03:22:24.623680 7776 scope.go:117] "RemoveContainer" containerID="fe4faf0d4ffb2ebe11ee7bb3c950e62a3098091a94099dff9022e530a80d494a" Feb 19 03:22:24.711730 master-0 kubenswrapper[7776]: I0219 03:22:24.711686 7776 scope.go:117] "RemoveContainer" containerID="2cdc1a180a1258ac65d49719e5369984499472e93cb72520a18ffeecda800795" Feb 19 03:22:24.763925 master-0 kubenswrapper[7776]: I0219 03:22:24.763873 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:24.763925 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:24.763925 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:24.763925 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:24.765106 master-0 kubenswrapper[7776]: I0219 03:22:24.763933 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:25.353708 master-0 kubenswrapper[7776]: I0219 03:22:25.353605 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerStarted","Data":"19a1f28fd6894887f54799dd664b3153aee457ecc2c8aab80e319ccb1bdbf8a2"} Feb 19 03:22:25.356212 master-0 kubenswrapper[7776]: I0219 03:22:25.356154 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerStarted","Data":"987763106eeabe88cbdd191d01e6f39059ee96a02ef736bbdbea66f4d5635935"} Feb 19 03:22:25.358917 master-0 kubenswrapper[7776]: I0219 03:22:25.358865 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/1.log" Feb 19 03:22:25.359682 master-0 kubenswrapper[7776]: I0219 03:22:25.359606 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" event={"ID":"98ac5423-b231-44e5-9545-424d635ed6ee","Type":"ContainerStarted","Data":"d535f4c1585c1d5454f99de091b8d7476f2719a79c5de2bb6c941b4ff5a83bb5"} Feb 19 03:22:25.360006 master-0 kubenswrapper[7776]: I0219 03:22:25.359969 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:22:25.363844 master-0 kubenswrapper[7776]: I0219 03:22:25.363719 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" event={"ID":"2b9d54aa-5f71-4a82-8e71-401ed3083a13","Type":"ContainerStarted","Data":"e103e135bf82f2eb93c3dbb2b40a81ffeb2314273026f2e9a0c0e8f111555646"} Feb 19 03:22:25.367184 master-0 kubenswrapper[7776]: I0219 03:22:25.367090 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerStarted","Data":"028495f0aee3ee18d27a6df8f41026b434ac3c3d335cf96c6e2e88bafe3758a1"} Feb 19 03:22:25.370374 master-0 kubenswrapper[7776]: I0219 03:22:25.370217 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" event={"ID":"a59746bb-7d76-4fd7-8323-5b92be63afb9","Type":"ContainerStarted","Data":"bac01ad63170ccced5e1cd17977ebb2125348e9dd5717a4826f770844d02fc8c"} Feb 19 03:22:25.374282 master-0 kubenswrapper[7776]: I0219 03:22:25.374170 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerStarted","Data":"ddee836f9c0dc9034253cedc04036772aecd6f69ed2b7269a37262fb2f962f4b"} Feb 19 03:22:25.374709 master-0 kubenswrapper[7776]: I0219 03:22:25.374632 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:22:25.764039 master-0 kubenswrapper[7776]: I0219 03:22:25.763974 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:25.764039 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:25.764039 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:25.764039 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:25.764039 master-0 kubenswrapper[7776]: I0219 03:22:25.764040 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:26.763678 master-0 kubenswrapper[7776]: I0219 03:22:26.763572 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:26.763678 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:26.763678 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:26.763678 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:26.763678 master-0 kubenswrapper[7776]: I0219 03:22:26.763648 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:27.764979 master-0 kubenswrapper[7776]: I0219 03:22:27.764859 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:27.764979 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:27.764979 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:27.764979 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:27.764979 master-0 kubenswrapper[7776]: I0219 03:22:27.764960 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:28.764323 master-0 kubenswrapper[7776]: I0219 03:22:28.764218 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:28.764323 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:28.764323 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:28.764323 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:28.764323 master-0 kubenswrapper[7776]: I0219 03:22:28.764319 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:28.992975 master-0 kubenswrapper[7776]: E0219 03:22:28.992864 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:22:29.764738 master-0 kubenswrapper[7776]: I0219 03:22:29.764622 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:29.764738 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:29.764738 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:29.764738 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:29.764738 master-0 kubenswrapper[7776]: I0219 03:22:29.764718 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" 
podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:29.893049 master-0 kubenswrapper[7776]: I0219 03:22:29.892938 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:29.893379 master-0 kubenswrapper[7776]: I0219 03:22:29.893067 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:29.920972 master-0 kubenswrapper[7776]: I0219 03:22:29.920880 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:29.920972 master-0 kubenswrapper[7776]: I0219 03:22:29.920955 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:30.764448 master-0 kubenswrapper[7776]: I0219 03:22:30.764330 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:30.764448 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:30.764448 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:30.764448 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:30.765219 master-0 kubenswrapper[7776]: I0219 03:22:30.764461 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:31.764232 master-0 kubenswrapper[7776]: I0219 03:22:31.764145 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:31.764232 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:31.764232 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:31.764232 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:31.764813 master-0 kubenswrapper[7776]: I0219 03:22:31.764299 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:32.763899 
master-0 kubenswrapper[7776]: I0219 03:22:32.763765 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:32.763899 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:32.763899 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:32.763899 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:32.763899 master-0 kubenswrapper[7776]: I0219 03:22:32.763885 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:32.843226 master-0 kubenswrapper[7776]: I0219 03:22:32.843129 7776 scope.go:117] "RemoveContainer" containerID="7451979a94f80aee54e0563ac7f58d005b0131fa01c9b6d07669dbdfc4734cf2" Feb 19 03:22:32.843601 master-0 kubenswrapper[7776]: E0219 03:22:32.843559 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:22:32.893688 master-0 kubenswrapper[7776]: I0219 03:22:32.893549 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:32.893688 master-0 kubenswrapper[7776]: I0219 03:22:32.893656 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:32.894111 master-0 kubenswrapper[7776]: I0219 03:22:32.893710 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:22:32.894690 master-0 kubenswrapper[7776]: I0219 03:22:32.894619 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"ddee836f9c0dc9034253cedc04036772aecd6f69ed2b7269a37262fb2f962f4b"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 19 03:22:32.894690 master-0 kubenswrapper[7776]: I0219 03:22:32.894647 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:32.894690 master-0 kubenswrapper[7776]: I0219 03:22:32.894677 7776 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" containerID="cri-o://ddee836f9c0dc9034253cedc04036772aecd6f69ed2b7269a37262fb2f962f4b" gracePeriod=30 Feb 19 03:22:32.895065 master-0 kubenswrapper[7776]: I0219 03:22:32.894710 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:33.222308 master-0 kubenswrapper[7776]: I0219 03:22:33.221990 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:57606->10.128.0.19:8443: read: connection reset by peer" start-of-body= Feb 19 03:22:33.222308 master-0 kubenswrapper[7776]: I0219 03:22:33.222131 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:57606->10.128.0.19:8443: read: connection reset by peer" Feb 19 03:22:33.329749 master-0 kubenswrapper[7776]: E0219 03:22:33.329655 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-zn8c7_openshift-config-operator(78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" Feb 19 03:22:33.443657 master-0 kubenswrapper[7776]: I0219 03:22:33.443576 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/4.log" Feb 19 03:22:33.445191 master-0 kubenswrapper[7776]: I0219 03:22:33.445129 7776 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerID="ddee836f9c0dc9034253cedc04036772aecd6f69ed2b7269a37262fb2f962f4b" exitCode=255 Feb 19 03:22:33.445354 master-0 kubenswrapper[7776]: I0219 03:22:33.445214 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerDied","Data":"ddee836f9c0dc9034253cedc04036772aecd6f69ed2b7269a37262fb2f962f4b"} Feb 19 03:22:33.445354 master-0 kubenswrapper[7776]: I0219 03:22:33.445325 7776 scope.go:117] "RemoveContainer" containerID="2a9ccd68c71b55517a0af025c793f340d5a13b8dd01aa9526d809fbaf1a82b89" Feb 19 03:22:33.446323 master-0 kubenswrapper[7776]: I0219 03:22:33.446239 7776 scope.go:117] "RemoveContainer" containerID="ddee836f9c0dc9034253cedc04036772aecd6f69ed2b7269a37262fb2f962f4b" Feb 19 03:22:33.446765 master-0 kubenswrapper[7776]: E0219 03:22:33.446702 7776 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-zn8c7_openshift-config-operator(78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" Feb 19 03:22:33.763911 master-0 kubenswrapper[7776]: I0219 03:22:33.763757 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:33.763911 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:33.763911 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:33.763911 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:33.763911 master-0 kubenswrapper[7776]: I0219 03:22:33.763861 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:34.456427 master-0 kubenswrapper[7776]: I0219 03:22:34.456328 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/4.log" Feb 19 03:22:34.763270 master-0 kubenswrapper[7776]: I0219 03:22:34.763164 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:34.763270 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:34.763270 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:34.763270 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:34.763270 master-0 kubenswrapper[7776]: I0219 03:22:34.763222 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:35.297146 master-0 kubenswrapper[7776]: I0219 03:22:35.296995 7776 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-r7r6p container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Feb 19 03:22:35.297146 master-0 kubenswrapper[7776]: I0219 03:22:35.297091 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" podUID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Feb 19 03:22:35.741682 master-0 kubenswrapper[7776]: E0219 03:22:35.741527 7776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ingress-operator-6569778c84-qcd49.1895871e22fa1599 openshift-ingress-operator 11011 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-operator,Name:ingress-operator-6569778c84-qcd49,UID:9ff96ce8-6427-4a42-afa6-8b8bc778f094,APIVersion:v1,ResourceVersion:3479,FieldPath:spec.containers{ingress-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:09:51 +0000 UTC,LastTimestamp:2026-02-19 03:18:45.844995801 +0000 UTC m=+832.184680319,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:22:35.763901 master-0 kubenswrapper[7776]: I0219 03:22:35.763814 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:35.763901 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:35.763901 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:35.763901 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:35.763901 master-0 kubenswrapper[7776]: I0219 03:22:35.763895 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:36.225899 master-0 kubenswrapper[7776]: I0219 03:22:36.225823 7776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Feb 19 03:22:36.235943 master-0 kubenswrapper[7776]: W0219 03:22:36.235901 7776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3fab5bbd_672c_4e18_9c1e_438e2360bc54.slice/crio-3d24aaf417d59fb450308aa24f5e0ecd8e28bc338934b0ef78ad3e79bccb9318 WatchSource:0}: Error finding container 3d24aaf417d59fb450308aa24f5e0ecd8e28bc338934b0ef78ad3e79bccb9318: Status 404 returned error can't find the container with id 3d24aaf417d59fb450308aa24f5e0ecd8e28bc338934b0ef78ad3e79bccb9318 Feb 19 03:22:36.276200 master-0 kubenswrapper[7776]: I0219 03:22:36.274066 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 19 03:22:36.286715 master-0 kubenswrapper[7776]: I0219 03:22:36.284647 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 19 03:22:36.291234 master-0 kubenswrapper[7776]: I0219 03:22:36.291117 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 19 03:22:36.421046 master-0 kubenswrapper[7776]: I0219 03:22:36.420963 7776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 19 03:22:36.424040 master-0 kubenswrapper[7776]: I0219 03:22:36.424004 7776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 19 03:22:36.474937 master-0 kubenswrapper[7776]: I0219 03:22:36.474899 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" 
event={"ID":"3fab5bbd-672c-4e18-9c1e-438e2360bc54","Type":"ContainerStarted","Data":"3d24aaf417d59fb450308aa24f5e0ecd8e28bc338934b0ef78ad3e79bccb9318"} Feb 19 03:22:36.764068 master-0 kubenswrapper[7776]: I0219 03:22:36.763976 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:36.764068 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:36.764068 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:36.764068 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:36.764068 master-0 kubenswrapper[7776]: I0219 03:22:36.764054 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:37.485048 master-0 kubenswrapper[7776]: I0219 03:22:37.484944 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"3fab5bbd-672c-4e18-9c1e-438e2360bc54","Type":"ContainerStarted","Data":"efd7a12795a097f3f4ab229c7e4cfe83afd7b3d6586c831bcff29d6a1d12a9eb"} Feb 19 03:22:37.513608 master-0 kubenswrapper[7776]: I0219 03:22:37.513461 7776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=274.513430964 podStartE2EDuration="4m34.513430964s" podCreationTimestamp="2026-02-19 03:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:22:37.508175786 +0000 UTC m=+1063.847860334" watchObservedRunningTime="2026-02-19 03:22:37.513430964 +0000 UTC m=+1063.853115512" Feb 19 03:22:37.764400 master-0 kubenswrapper[7776]: I0219 03:22:37.764145 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:37.764400 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:37.764400 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:37.764400 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:37.764400 master-0 kubenswrapper[7776]: I0219 03:22:37.764299 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:37.853114 master-0 kubenswrapper[7776]: I0219 03:22:37.853021 7776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aef097d-bea5-404d-b26b-aed9142ddf14" path="/var/lib/kubelet/pods/4aef097d-bea5-404d-b26b-aed9142ddf14/volumes" Feb 19 03:22:38.763489 master-0 kubenswrapper[7776]: I0219 03:22:38.763396 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:38.763489 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 
03:22:38.763489 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:38.763489 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:38.763489 master-0 kubenswrapper[7776]: I0219 03:22:38.763463 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:38.842593 master-0 kubenswrapper[7776]: I0219 03:22:38.842539 7776 scope.go:117] "RemoveContainer" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" Feb 19 03:22:38.843180 master-0 kubenswrapper[7776]: E0219 03:22:38.842891 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" Feb 19 03:22:38.994641 master-0 kubenswrapper[7776]: E0219 03:22:38.994534 7776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:22:39.764294 master-0 kubenswrapper[7776]: I0219 03:22:39.764170 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:39.764294 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:39.764294 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:39.764294 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:39.764749 master-0 kubenswrapper[7776]: I0219 03:22:39.764327 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:40.764375 master-0 kubenswrapper[7776]: I0219 03:22:40.764251 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:40.764375 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:40.764375 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:40.764375 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:40.765471 master-0 kubenswrapper[7776]: I0219 03:22:40.764390 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:41.764041 master-0 kubenswrapper[7776]: I0219 03:22:41.763937 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:41.764041 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:41.764041 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:41.764041 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:41.765111 master-0 kubenswrapper[7776]: I0219 03:22:41.764064 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:42.765013 master-0 kubenswrapper[7776]: I0219 03:22:42.764907 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:42.765013 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:42.765013 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:42.765013 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:42.765734 master-0 kubenswrapper[7776]: I0219 03:22:42.765023 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:43.764693 master-0 kubenswrapper[7776]: I0219 03:22:43.764615 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:43.764693 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:43.764693 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:43.764693 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:43.765076 master-0 kubenswrapper[7776]: I0219 03:22:43.764751 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:43.849489 master-0 kubenswrapper[7776]: I0219 03:22:43.849393 7776 scope.go:117] "RemoveContainer" containerID="ddee836f9c0dc9034253cedc04036772aecd6f69ed2b7269a37262fb2f962f4b" Feb 19 03:22:44.546195 master-0 kubenswrapper[7776]: I0219 03:22:44.546101 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/4.log" Feb 19 03:22:44.547004 master-0 kubenswrapper[7776]: I0219 03:22:44.546944 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerStarted","Data":"4afcad8623824e3c9325900731d2991e791a526d8e32b1849fcfa662d04ef55f"} Feb 19 03:22:44.547341 master-0 kubenswrapper[7776]: I0219 03:22:44.547296 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:22:44.763172 master-0 kubenswrapper[7776]: I0219 03:22:44.763014 7776 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:44.763172 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:44.763172 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:44.763172 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:44.763172 master-0 kubenswrapper[7776]: I0219 03:22:44.763087 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:44.842916 master-0 kubenswrapper[7776]: I0219 03:22:44.842834 7776 scope.go:117] "RemoveContainer" containerID="7451979a94f80aee54e0563ac7f58d005b0131fa01c9b6d07669dbdfc4734cf2" Feb 19 03:22:44.843808 master-0 kubenswrapper[7776]: E0219 03:22:44.843164 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:22:45.764633 master-0 kubenswrapper[7776]: I0219 03:22:45.764518 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:45.764633 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:45.764633 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:45.764633 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:45.764968 master-0 kubenswrapper[7776]: I0219 03:22:45.764637 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:46.568599 master-0 kubenswrapper[7776]: I0219 03:22:46.568493 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/2.log" Feb 19 03:22:46.573587 master-0 kubenswrapper[7776]: I0219 03:22:46.573519 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/1.log" Feb 19 03:22:46.574208 master-0 kubenswrapper[7776]: I0219 03:22:46.574136 7776 generic.go:334] "Generic (PLEG): container finished" podID="af5828ea-090f-4c8f-90e6-c4e405e69ec5" containerID="0f6c57986aa44545930dd1ab3e3d24869ff284140d471569cc35e25cea0099c1" exitCode=1 Feb 19 03:22:46.574350 master-0 kubenswrapper[7776]: I0219 03:22:46.574217 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" 
event={"ID":"af5828ea-090f-4c8f-90e6-c4e405e69ec5","Type":"ContainerDied","Data":"0f6c57986aa44545930dd1ab3e3d24869ff284140d471569cc35e25cea0099c1"} Feb 19 03:22:46.574432 master-0 kubenswrapper[7776]: I0219 03:22:46.574346 7776 scope.go:117] "RemoveContainer" containerID="675b0788e605256106684c4e377b174ce97f9e7a35c1265d0f37c4603a7e545a" Feb 19 03:22:46.575310 master-0 kubenswrapper[7776]: I0219 03:22:46.575227 7776 scope.go:117] "RemoveContainer" containerID="0f6c57986aa44545930dd1ab3e3d24869ff284140d471569cc35e25cea0099c1" Feb 19 03:22:46.575832 master-0 kubenswrapper[7776]: E0219 03:22:46.575755 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-9vgg7_openshift-machine-api(af5828ea-090f-4c8f-90e6-c4e405e69ec5)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" podUID="af5828ea-090f-4c8f-90e6-c4e405e69ec5" Feb 19 03:22:46.769708 master-0 kubenswrapper[7776]: I0219 03:22:46.769576 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:46.769708 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:46.769708 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:46.769708 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:46.769708 master-0 kubenswrapper[7776]: I0219 03:22:46.769662 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:47.585171 master-0 kubenswrapper[7776]: I0219 03:22:47.585113 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/2.log" Feb 19 03:22:47.784159 master-0 kubenswrapper[7776]: I0219 03:22:47.784065 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:47.784159 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:47.784159 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:47.784159 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:47.784585 master-0 kubenswrapper[7776]: I0219 03:22:47.784527 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:47.894105 master-0 kubenswrapper[7776]: I0219 03:22:47.893994 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:47.894391 master-0 kubenswrapper[7776]: I0219 
03:22:47.894100 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:47.921151 master-0 kubenswrapper[7776]: I0219 03:22:47.921081 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:47.921366 master-0 kubenswrapper[7776]: I0219 03:22:47.921170 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:48.764127 master-0 kubenswrapper[7776]: I0219 03:22:48.764023 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:48.764127 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:48.764127 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:48.764127 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:48.764127 master-0 kubenswrapper[7776]: I0219 03:22:48.764125 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:49.234735 master-0 kubenswrapper[7776]: E0219 03:22:49.234641 7776 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 19 03:22:49.605241 master-0 kubenswrapper[7776]: I0219 03:22:49.605053 7776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:22:49.605241 master-0 kubenswrapper[7776]: I0219 03:22:49.605108 7776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="25b91de2-7cf4-4017-93c3-740165b3c38d" Feb 19 03:22:49.765017 master-0 kubenswrapper[7776]: I0219 03:22:49.764903 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:49.765017 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:49.765017 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:49.765017 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:49.765947 master-0 kubenswrapper[7776]: I0219 03:22:49.765023 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:50.765163 master-0 kubenswrapper[7776]: I0219 03:22:50.765059 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:50.765163 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:50.765163 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:50.765163 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:50.766552 master-0 kubenswrapper[7776]: I0219 03:22:50.765169 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:50.893826 master-0 kubenswrapper[7776]: I0219 03:22:50.893700 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:50.894153 master-0 kubenswrapper[7776]: I0219 03:22:50.893830 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:50.920933 master-0 kubenswrapper[7776]: I0219 03:22:50.920821 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:50.920933 master-0 kubenswrapper[7776]: I0219 03:22:50.920909 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:51.764573 master-0 kubenswrapper[7776]: I0219 03:22:51.764492 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:51.764573 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:51.764573 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:51.764573 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:51.764912 master-0 kubenswrapper[7776]: I0219 03:22:51.764603 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:52.764640 master-0 kubenswrapper[7776]: I0219 03:22:52.764590 7776 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:52.764640 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:52.764640 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:52.764640 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:52.765341 master-0 kubenswrapper[7776]: I0219 03:22:52.765310 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:53.764196 master-0 kubenswrapper[7776]: I0219 03:22:53.764143 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:53.764196 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:53.764196 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:53.764196 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:53.764597 master-0 kubenswrapper[7776]: I0219 03:22:53.764203 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:53.847821 master-0 kubenswrapper[7776]: I0219 03:22:53.847772 7776 scope.go:117] "RemoveContainer" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" Feb 19 03:22:53.894025 master-0 kubenswrapper[7776]: I0219 03:22:53.893940 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:53.894158 master-0 kubenswrapper[7776]: I0219 03:22:53.894035 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:53.894158 master-0 kubenswrapper[7776]: I0219 03:22:53.894104 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:22:53.895030 master-0 kubenswrapper[7776]: I0219 03:22:53.894998 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:53.895123 master-0 kubenswrapper[7776]: I0219 03:22:53.895049 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" 
podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:53.895318 master-0 kubenswrapper[7776]: I0219 03:22:53.895231 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"4afcad8623824e3c9325900731d2991e791a526d8e32b1849fcfa662d04ef55f"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 19 03:22:53.895389 master-0 kubenswrapper[7776]: I0219 03:22:53.895354 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" containerID="cri-o://4afcad8623824e3c9325900731d2991e791a526d8e32b1849fcfa662d04ef55f" gracePeriod=30 Feb 19 03:22:54.121451 master-0 kubenswrapper[7776]: I0219 03:22:54.121381 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:55150->10.128.0.19:8443: read: connection reset by peer" start-of-body= Feb 19 03:22:54.121737 master-0 kubenswrapper[7776]: I0219 03:22:54.121510 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:55150->10.128.0.19:8443: read: connection reset by peer" Feb 19 03:22:54.641675 master-0 kubenswrapper[7776]: I0219 03:22:54.641609 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/5.log" Feb 19 03:22:54.642124 master-0 kubenswrapper[7776]: I0219 03:22:54.642072 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/4.log" Feb 19 03:22:54.642625 master-0 kubenswrapper[7776]: I0219 03:22:54.642589 7776 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerID="4afcad8623824e3c9325900731d2991e791a526d8e32b1849fcfa662d04ef55f" exitCode=255 Feb 19 03:22:54.642785 master-0 kubenswrapper[7776]: I0219 03:22:54.642652 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerDied","Data":"4afcad8623824e3c9325900731d2991e791a526d8e32b1849fcfa662d04ef55f"} Feb 19 03:22:54.642785 master-0 kubenswrapper[7776]: I0219 03:22:54.642698 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerStarted","Data":"92f46e7dc0dbfb5fb7a6786f646d184008d2d59c656dbe6e375ada74e2cfa239"} Feb 19 03:22:54.642785 master-0 kubenswrapper[7776]: I0219 03:22:54.642716 
7776 scope.go:117] "RemoveContainer" containerID="ddee836f9c0dc9034253cedc04036772aecd6f69ed2b7269a37262fb2f962f4b" Feb 19 03:22:54.643044 master-0 kubenswrapper[7776]: I0219 03:22:54.642912 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:22:54.645998 master-0 kubenswrapper[7776]: I0219 03:22:54.645941 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/3.log" Feb 19 03:22:54.647428 master-0 kubenswrapper[7776]: I0219 03:22:54.647377 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:22:54.647594 master-0 kubenswrapper[7776]: I0219 03:22:54.647447 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa"} Feb 19 03:22:54.764029 master-0 kubenswrapper[7776]: I0219 03:22:54.763951 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:54.764029 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:54.764029 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:54.764029 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:54.764385 master-0 kubenswrapper[7776]: I0219 03:22:54.764043 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:55.659567 master-0 kubenswrapper[7776]: I0219 03:22:55.659520 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/5.log" Feb 19 03:22:55.764250 master-0 kubenswrapper[7776]: I0219 03:22:55.764129 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:55.764250 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:55.764250 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:55.764250 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:55.765181 master-0 kubenswrapper[7776]: I0219 03:22:55.764300 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:55.843028 master-0 kubenswrapper[7776]: I0219 03:22:55.842941 7776 scope.go:117] "RemoveContainer" containerID="7451979a94f80aee54e0563ac7f58d005b0131fa01c9b6d07669dbdfc4734cf2" Feb 19 03:22:55.843301 master-0 kubenswrapper[7776]: E0219 03:22:55.843274 
7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:22:56.763946 master-0 kubenswrapper[7776]: I0219 03:22:56.763838 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:56.763946 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:56.763946 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:56.763946 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:56.763946 master-0 kubenswrapper[7776]: I0219 03:22:56.763919 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:56.916209 master-0 kubenswrapper[7776]: I0219 03:22:56.916149 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:22:57.763835 master-0 kubenswrapper[7776]: I0219 03:22:57.763746 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:57.763835 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:57.763835 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:57.763835 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:57.763835 master-0 kubenswrapper[7776]: I0219 03:22:57.763837 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:58.684292 master-0 kubenswrapper[7776]: I0219 03:22:58.684206 7776 generic.go:334] "Generic (PLEG): container finished" podID="61abb34a-08f0-4438-9a89-c712b2048878" containerID="e967e4bdcd17904293fe64ffaea6f290221329babeb23091aec673f02b8e7ca3" exitCode=0 Feb 19 03:22:58.684561 master-0 kubenswrapper[7776]: I0219 03:22:58.684314 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" event={"ID":"61abb34a-08f0-4438-9a89-c712b2048878","Type":"ContainerDied","Data":"e967e4bdcd17904293fe64ffaea6f290221329babeb23091aec673f02b8e7ca3"} Feb 19 03:22:58.684561 master-0 kubenswrapper[7776]: I0219 03:22:58.684402 7776 scope.go:117] "RemoveContainer" containerID="0433548866cd3801c8b397fe3536ec33408d7af2a4a96c584b21e1d45a8f492e" Feb 19 03:22:58.685096 master-0 kubenswrapper[7776]: I0219 03:22:58.685071 7776 scope.go:117] "RemoveContainer" containerID="e967e4bdcd17904293fe64ffaea6f290221329babeb23091aec673f02b8e7ca3" Feb 19 03:22:58.687111 master-0 kubenswrapper[7776]: I0219 03:22:58.687082 7776 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/3.log" Feb 19 03:22:58.687972 master-0 kubenswrapper[7776]: I0219 03:22:58.687936 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/1.log" Feb 19 03:22:58.689685 master-0 kubenswrapper[7776]: I0219 03:22:58.689637 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/0.log" Feb 19 03:22:58.689761 master-0 kubenswrapper[7776]: I0219 03:22:58.689714 7776 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" exitCode=1 Feb 19 03:22:58.689806 master-0 kubenswrapper[7776]: I0219 03:22:58.689755 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerDied","Data":"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3"} Feb 19 03:22:58.690506 master-0 kubenswrapper[7776]: I0219 03:22:58.690469 7776 scope.go:117] "RemoveContainer" containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" Feb 19 03:22:58.690966 master-0 kubenswrapper[7776]: E0219 03:22:58.690919 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" Feb 19 03:22:58.723602 master-0 kubenswrapper[7776]: I0219 03:22:58.723567 7776 scope.go:117] "RemoveContainer" containerID="6e39b4ae8e2c1020e55e9a8991002fceb2451697ce51c87e07c50c9ac50db7bc" Feb 19 03:22:58.763319 master-0 kubenswrapper[7776]: I0219 03:22:58.763235 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:58.763319 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:58.763319 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:58.763319 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:58.763652 master-0 kubenswrapper[7776]: I0219 03:22:58.763339 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:58.843386 master-0 kubenswrapper[7776]: I0219 03:22:58.843034 7776 scope.go:117] "RemoveContainer" containerID="0f6c57986aa44545930dd1ab3e3d24869ff284140d471569cc35e25cea0099c1" Feb 19 03:22:58.843386 master-0 kubenswrapper[7776]: E0219 03:22:58.843339 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-9vgg7_openshift-machine-api(af5828ea-090f-4c8f-90e6-c4e405e69ec5)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" podUID="af5828ea-090f-4c8f-90e6-c4e405e69ec5" Feb 19 03:22:59.698516 master-0 kubenswrapper[7776]: I0219 03:22:59.698416 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" event={"ID":"61abb34a-08f0-4438-9a89-c712b2048878","Type":"ContainerStarted","Data":"201a9b8ec5711bf432dddc5e92c0e4963940027737a32b1ee622dcb47d78f894"} Feb 19 03:22:59.700821 master-0 kubenswrapper[7776]: I0219 03:22:59.700781 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/3.log" Feb 19 03:22:59.701628 master-0 kubenswrapper[7776]: I0219 03:22:59.701574 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/1.log" Feb 19 03:22:59.763186 master-0 kubenswrapper[7776]: I0219 03:22:59.763109 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:22:59.763186 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:22:59.763186 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:22:59.763186 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:22:59.763588 master-0 kubenswrapper[7776]: I0219 03:22:59.763199 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:22:59.893688 master-0 kubenswrapper[7776]: I0219 03:22:59.893597 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:59.894402 master-0 kubenswrapper[7776]: I0219 03:22:59.893725 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:22:59.920698 master-0 kubenswrapper[7776]: I0219 03:22:59.920636 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:22:59.920940 master-0 kubenswrapper[7776]: I0219 03:22:59.920718 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:23:00.763044 master-0 kubenswrapper[7776]: I0219 03:23:00.762983 7776 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:23:00.763044 master-0 kubenswrapper[7776]: [-]has-synced failed: reason withheld Feb 19 03:23:00.763044 master-0 kubenswrapper[7776]: [+]process-running ok Feb 19 03:23:00.763044 master-0 kubenswrapper[7776]: healthz check failed Feb 19 03:23:00.763343 master-0 kubenswrapper[7776]: I0219 03:23:00.763045 7776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:23:00.763343 master-0 kubenswrapper[7776]: I0219 03:23:00.763090 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:23:00.763751 master-0 kubenswrapper[7776]: I0219 03:23:00.763716 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"882c525babc52c3119968e9793962f24892225613582692392aa79601c39660e"} pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" containerMessage="Container router failed startup probe, will be restarted" Feb 19 03:23:00.763797 master-0 kubenswrapper[7776]: I0219 03:23:00.763763 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" containerID="cri-o://882c525babc52c3119968e9793962f24892225613582692392aa79601c39660e" gracePeriod=3600 Feb 19 03:23:02.893922 master-0 kubenswrapper[7776]: I0219 03:23:02.893844 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:23:02.896682 master-0 kubenswrapper[7776]: I0219 03:23:02.893952 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:23:02.920770 master-0 kubenswrapper[7776]: I0219 03:23:02.920700 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:23:02.920961 master-0 kubenswrapper[7776]: I0219 03:23:02.920785 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:23:03.688828 master-0 kubenswrapper[7776]: I0219 03:23:03.688721 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:03.688828 master-0 kubenswrapper[7776]: I0219 03:23:03.688832 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:03.688828 master-0 kubenswrapper[7776]: I0219 03:23:03.688844 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:03.689156 master-0 kubenswrapper[7776]: I0219 03:23:03.688855 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:03.689156 master-0 kubenswrapper[7776]: I0219 03:23:03.688864 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:03.691401 master-0 kubenswrapper[7776]: I0219 03:23:03.689712 7776 scope.go:117] "RemoveContainer" containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" Feb 19 03:23:03.691401 master-0 kubenswrapper[7776]: E0219 03:23:03.689958 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" Feb 19 03:23:03.695450 master-0 kubenswrapper[7776]: I0219 03:23:03.695386 7776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:03.731801 master-0 kubenswrapper[7776]: I0219 03:23:03.731723 7776 scope.go:117] "RemoveContainer" containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" Feb 19 03:23:03.732271 master-0 kubenswrapper[7776]: E0219 03:23:03.732207 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" Feb 19 03:23:05.893938 master-0 kubenswrapper[7776]: I0219 03:23:05.893881 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:23:05.894770 master-0 kubenswrapper[7776]: I0219 03:23:05.893949 7776 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:23:05.894770 master-0 kubenswrapper[7776]: I0219 03:23:05.893996 7776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:23:05.894877 master-0 kubenswrapper[7776]: I0219 03:23:05.894818 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 19 03:23:05.895002 master-0 kubenswrapper[7776]: I0219 03:23:05.894835 7776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"92f46e7dc0dbfb5fb7a6786f646d184008d2d59c656dbe6e375ada74e2cfa239"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 19 03:23:05.895273 master-0 kubenswrapper[7776]: I0219 03:23:05.894900 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 19 03:23:05.895357 master-0 kubenswrapper[7776]: I0219 03:23:05.895296 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" containerID="cri-o://92f46e7dc0dbfb5fb7a6786f646d184008d2d59c656dbe6e375ada74e2cfa239" gracePeriod=30 Feb 19 03:23:06.358768 master-0 kubenswrapper[7776]: I0219 03:23:06.358643 7776 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:49768->10.128.0.19:8443: read: connection reset by peer" start-of-body= Feb 19 03:23:06.358768 master-0 kubenswrapper[7776]: I0219 03:23:06.358723 7776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:49768->10.128.0.19:8443: read: connection reset by peer" Feb 19 03:23:06.411060 master-0 kubenswrapper[7776]: E0219 03:23:06.410971 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-zn8c7_openshift-config-operator(78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" Feb 19 03:23:06.755909 master-0 kubenswrapper[7776]: I0219 03:23:06.755852 7776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/6.log" Feb 19 03:23:06.756593 master-0 kubenswrapper[7776]: I0219 03:23:06.756515 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/5.log" Feb 19 03:23:06.756956 master-0 kubenswrapper[7776]: I0219 03:23:06.756910 7776 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerID="92f46e7dc0dbfb5fb7a6786f646d184008d2d59c656dbe6e375ada74e2cfa239" exitCode=255 Feb 19 03:23:06.757017 master-0 kubenswrapper[7776]: I0219 03:23:06.756964 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerDied","Data":"92f46e7dc0dbfb5fb7a6786f646d184008d2d59c656dbe6e375ada74e2cfa239"} Feb 19 03:23:06.757017 master-0 kubenswrapper[7776]: I0219 03:23:06.757006 7776 scope.go:117] "RemoveContainer" containerID="4afcad8623824e3c9325900731d2991e791a526d8e32b1849fcfa662d04ef55f" Feb 19 03:23:06.758289 master-0 kubenswrapper[7776]: I0219 03:23:06.758221 7776 scope.go:117] "RemoveContainer" containerID="92f46e7dc0dbfb5fb7a6786f646d184008d2d59c656dbe6e375ada74e2cfa239" Feb 19 03:23:06.758739 master-0 kubenswrapper[7776]: E0219 03:23:06.758687 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-zn8c7_openshift-config-operator(78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" Feb 19 03:23:07.764722 master-0 kubenswrapper[7776]: I0219 03:23:07.764577 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/6.log" Feb 19 03:23:07.842516 master-0 kubenswrapper[7776]: I0219 03:23:07.842467 7776 scope.go:117] "RemoveContainer" containerID="7451979a94f80aee54e0563ac7f58d005b0131fa01c9b6d07669dbdfc4734cf2" Feb 19 03:23:07.842777 master-0 kubenswrapper[7776]: E0219 03:23:07.842744 7776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:23:12.844330 master-0 kubenswrapper[7776]: I0219 03:23:12.844235 7776 scope.go:117] "RemoveContainer" containerID="0f6c57986aa44545930dd1ab3e3d24869ff284140d471569cc35e25cea0099c1" Feb 19 03:23:13.697413 master-0 kubenswrapper[7776]: I0219 03:23:13.695419 7776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:13.697880 master-0 kubenswrapper[7776]: I0219 03:23:13.697837 7776 scope.go:117] "RemoveContainer" 
containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" Feb 19 03:23:13.813662 master-0 kubenswrapper[7776]: I0219 03:23:13.813608 7776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/2.log" Feb 19 03:23:13.813969 master-0 kubenswrapper[7776]: I0219 03:23:13.813927 7776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" event={"ID":"af5828ea-090f-4c8f-90e6-c4e405e69ec5","Type":"ContainerStarted","Data":"2cb5cc3e5f7f2fc2fc859c76106fa10fdf219cbe5f366d8a4a4e6d5405fb400e"} Feb 19 03:23:14.023712 master-0 kubenswrapper[7776]: I0219 03:23:14.023549 7776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: E0219 03:23:14.023911 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aef097d-bea5-404d-b26b-aed9142ddf14" containerName="installer" Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: I0219 03:23:14.023930 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aef097d-bea5-404d-b26b-aed9142ddf14" containerName="installer" Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: E0219 03:23:14.023948 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3" containerName="installer" Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: I0219 03:23:14.023957 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3" containerName="installer" Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: E0219 03:23:14.023991 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f3b8a5-a045-4023-80f8-0d4d297102ab" containerName="installer" Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: I0219 03:23:14.024000 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f3b8a5-a045-4023-80f8-0d4d297102ab" containerName="installer" Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: E0219 03:23:14.024018 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" containerName="installer" Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: I0219 03:23:14.024025 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" containerName="installer" Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: I0219 03:23:14.024164 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3" containerName="installer" Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: I0219 03:23:14.024186 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aef097d-bea5-404d-b26b-aed9142ddf14" containerName="installer" Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: I0219 03:23:14.024203 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" containerName="installer" Feb 19 03:23:14.024241 master-0 kubenswrapper[7776]: I0219 03:23:14.024218 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f3b8a5-a045-4023-80f8-0d4d297102ab" containerName="installer" Feb 19 03:23:14.024781 master-0 kubenswrapper[7776]: I0219 03:23:14.024742 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:23:14.028574 master-0 kubenswrapper[7776]: I0219 03:23:14.028527 7776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-rqfgf" Feb 19 03:23:14.028574 master-0 kubenswrapper[7776]: I0219 03:23:14.028563 7776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 19 03:23:14.032220 master-0 kubenswrapper[7776]: I0219 03:23:14.032158 7776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Feb 19 03:23:14.165092 master-0 kubenswrapper[7776]: I0219 03:23:14.164994 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:23:14.165341 master-0 kubenswrapper[7776]: I0219 03:23:14.165147 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:23:14.165341 master-0 kubenswrapper[7776]: I0219 03:23:14.165228 7776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ba0c261-497c-4236-8f14-98ce5c16af59-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:23:14.266909 master-0 kubenswrapper[7776]: I0219 03:23:14.266816 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:23:14.267212 master-0 kubenswrapper[7776]: I0219 03:23:14.266930 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:23:14.267212 master-0 kubenswrapper[7776]: I0219 03:23:14.266984 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:23:14.267212 master-0 kubenswrapper[7776]: I0219 03:23:14.267034 7776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ba0c261-497c-4236-8f14-98ce5c16af59-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " 
pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:23:14.267212 master-0 kubenswrapper[7776]: I0219 03:23:14.267196 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:23:14.297084 master-0 kubenswrapper[7776]: I0219 03:23:14.296912 7776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ba0c261-497c-4236-8f14-98ce5c16af59-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:23:14.361065 master-0 kubenswrapper[7776]: I0219 03:23:14.360988 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:23:14.422851 master-0 kubenswrapper[7776]: I0219 03:23:14.422775 7776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 19 03:23:14.424015 master-0 kubenswrapper[7776]: I0219 03:23:14.423969 7776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 19 03:23:14.425523 master-0 kubenswrapper[7776]: I0219 03:23:14.424298 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" containerID="cri-o://d18413342a722838be3aeba368600d701226af1bb0655a2558eb4a099c9c2796" gracePeriod=15 Feb 19 03:23:14.425523 master-0 kubenswrapper[7776]: I0219 03:23:14.424458 7776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://82a40f80e34c4f63706840b48b0aa48486b2ad68c13d50974f11a3442433c7ea" gracePeriod=15 Feb 19 03:23:14.425523 master-0 kubenswrapper[7776]: I0219 03:23:14.425060 7776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 19 03:23:14.425523 master-0 kubenswrapper[7776]: I0219 03:23:14.425249 7776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:14.425523 master-0 kubenswrapper[7776]: E0219 03:23:14.425345 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 19 03:23:14.425523 master-0 kubenswrapper[7776]: I0219 03:23:14.425371 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 19 03:23:14.425523 master-0 kubenswrapper[7776]: E0219 03:23:14.425426 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 19 03:23:14.425523 master-0 kubenswrapper[7776]: I0219 03:23:14.425437 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 19 03:23:14.425523 master-0 kubenswrapper[7776]: E0219 03:23:14.425456 7776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 19 03:23:14.425523 master-0 kubenswrapper[7776]: I0219 03:23:14.425467 7776 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 19 03:23:14.426248 master-0 kubenswrapper[7776]: I0219 03:23:14.425654 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 19 03:23:14.426248 master-0 kubenswrapper[7776]: I0219 03:23:14.425679 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 19 03:23:14.426248 master-0 kubenswrapper[7776]: I0219 03:23:14.425695 7776 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 19 03:23:14.428609 master-0 kubenswrapper[7776]: I0219 03:23:14.428439 7776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:14.529913 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 19 03:23:14.559021 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 19 03:23:14.559447 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 19 03:23:14.569027 master-0 systemd[1]: kubelet.service: Consumed 2min 31.488s CPU time. Feb 19 03:23:14.608055 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 19 03:23:14.770553 master-0 kubenswrapper[33867]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 03:23:14.771501 master-0 kubenswrapper[33867]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 03:23:14.771625 master-0 kubenswrapper[33867]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 03:23:14.771757 master-0 kubenswrapper[33867]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 03:23:14.771863 master-0 kubenswrapper[33867]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 03:23:14.772002 master-0 kubenswrapper[33867]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 03:23:14.772452 master-0 kubenswrapper[33867]: I0219 03:23:14.772340 33867 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 03:23:14.777547 master-0 kubenswrapper[33867]: W0219 03:23:14.777516 33867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 03:23:14.777728 master-0 kubenswrapper[33867]: W0219 03:23:14.777706 33867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 03:23:14.777850 master-0 kubenswrapper[33867]: W0219 03:23:14.777831 33867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 03:23:14.777988 master-0 kubenswrapper[33867]: W0219 03:23:14.777967 33867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 03:23:14.778110 master-0 kubenswrapper[33867]: W0219 03:23:14.778092 33867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 03:23:14.778231 master-0 kubenswrapper[33867]: W0219 03:23:14.778213 33867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 03:23:14.778524 master-0 kubenswrapper[33867]: W0219 03:23:14.778461 33867 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 03:23:14.778524 master-0 kubenswrapper[33867]: W0219 03:23:14.778501 33867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 03:23:14.778524 master-0 kubenswrapper[33867]: W0219 03:23:14.778511 33867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 03:23:14.778524 master-0 kubenswrapper[33867]: W0219 03:23:14.778520 33867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 03:23:14.778524 master-0 kubenswrapper[33867]: W0219 03:23:14.778529 33867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778538 33867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778553 33867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778565 33867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778574 33867 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778584 33867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778612 33867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778623 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778632 33867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778640 33867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778649 33867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778658 33867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778667 33867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778676 33867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778685 33867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778693 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778701 33867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778712 33867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778722 33867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 03:23:14.778841 master-0 kubenswrapper[33867]: W0219 03:23:14.778730 33867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778738 33867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778758 33867 feature_gate.go:330] unrecognized feature gate: Example Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778766 33867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778775 33867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778782 33867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778790 33867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778855 33867 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778866 33867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778874 33867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778883 33867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778891 33867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778900 33867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778908 33867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778916 33867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778924 33867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778932 33867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778939 33867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778947 33867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778956 33867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 03:23:14.780312 master-0 kubenswrapper[33867]: W0219 03:23:14.778964 33867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.778972 33867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 
03:23:14.778979 33867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.778988 33867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.778996 33867 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779004 33867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779012 33867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779023 33867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779033 33867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779068 33867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779077 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779084 33867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779093 33867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779102 33867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779110 33867 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779118 33867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779126 33867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779134 33867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779142 33867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 03:23:14.782351 master-0 kubenswrapper[33867]: W0219 03:23:14.779150 33867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: W0219 03:23:14.779163 33867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: W0219 03:23:14.779173 33867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: W0219 03:23:14.779184 33867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779377 33867 flags.go:64] FLAG: --address="0.0.0.0" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779397 33867 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779413 33867 flags.go:64] FLAG: --anonymous-auth="true" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779425 33867 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779437 33867 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779447 33867 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779459 33867 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779470 33867 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779480 33867 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779489 33867 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779499 33867 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779509 33867 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779518 33867 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779527 33867 flags.go:64] FLAG: --cgroup-root="" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779536 33867 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779545 33867 flags.go:64] FLAG: --client-ca-file="" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779556 33867 flags.go:64] FLAG: --cloud-config="" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779565 33867 flags.go:64] FLAG: --cloud-provider="" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779574 33867 flags.go:64] FLAG: --cluster-dns="[]" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779586 33867 flags.go:64] FLAG: --cluster-domain="" Feb 19 03:23:14.783545 master-0 kubenswrapper[33867]: I0219 03:23:14.779595 33867 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779607 33867 flags.go:64] FLAG: --config-dir="" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779616 33867 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779626 33867 flags.go:64] FLAG: --container-log-max-files="5" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: 
I0219 03:23:14.779638 33867 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779647 33867 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779656 33867 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779666 33867 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779676 33867 flags.go:64] FLAG: --contention-profiling="false" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779684 33867 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779693 33867 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779703 33867 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779711 33867 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779723 33867 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779732 33867 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779741 33867 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779750 33867 flags.go:64] FLAG: --enable-load-reader="false" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779759 33867 flags.go:64] FLAG: --enable-server="true" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779768 33867 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779780 33867 flags.go:64] FLAG: --event-burst="100" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779789 33867 flags.go:64] FLAG: --event-qps="50" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779798 33867 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779807 33867 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779816 33867 flags.go:64] FLAG: --eviction-hard="" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779827 33867 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 19 03:23:14.787789 master-0 kubenswrapper[33867]: I0219 03:23:14.779836 33867 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779845 33867 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779854 33867 flags.go:64] FLAG: --eviction-soft="" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779863 33867 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779872 33867 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779882 33867 flags.go:64] FLAG: 
--experimental-allocatable-ignore-eviction="false" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779891 33867 flags.go:64] FLAG: --experimental-mounter-path="" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779901 33867 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779910 33867 flags.go:64] FLAG: --fail-swap-on="true" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779919 33867 flags.go:64] FLAG: --feature-gates="" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779930 33867 flags.go:64] FLAG: --file-check-frequency="20s" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779939 33867 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779948 33867 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779957 33867 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779967 33867 flags.go:64] FLAG: --healthz-port="10248" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779976 33867 flags.go:64] FLAG: --help="false" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779985 33867 flags.go:64] FLAG: --hostname-override="" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.779994 33867 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.780003 33867 flags.go:64] FLAG: --http-check-frequency="20s" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.780012 33867 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.780021 33867 flags.go:64] FLAG: --image-credential-provider-config="" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.780032 33867 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.780041 33867 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.780050 33867 flags.go:64] FLAG: --image-service-endpoint="" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.780059 33867 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 19 03:23:14.789244 master-0 kubenswrapper[33867]: I0219 03:23:14.780068 33867 flags.go:64] FLAG: --kube-api-burst="100" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780077 33867 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780086 33867 flags.go:64] FLAG: --kube-api-qps="50" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780097 33867 flags.go:64] FLAG: --kube-reserved="" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780106 33867 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780115 33867 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780124 33867 flags.go:64] FLAG: --kubelet-cgroups="" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780133 33867 flags.go:64] FLAG: 
--local-storage-capacity-isolation="true" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780142 33867 flags.go:64] FLAG: --lock-file="" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780151 33867 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780161 33867 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780170 33867 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780184 33867 flags.go:64] FLAG: --log-json-split-stream="false" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780193 33867 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780202 33867 flags.go:64] FLAG: --log-text-split-stream="false" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780211 33867 flags.go:64] FLAG: --logging-format="text" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780220 33867 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780229 33867 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780241 33867 flags.go:64] FLAG: --manifest-url="" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780250 33867 flags.go:64] FLAG: --manifest-url-header="" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780284 33867 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780294 33867 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780305 33867 flags.go:64] FLAG: --max-pods="110" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780314 33867 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780324 33867 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 03:23:14.791852 master-0 kubenswrapper[33867]: I0219 03:23:14.780333 33867 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780343 33867 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780352 33867 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780361 33867 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780371 33867 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780392 33867 flags.go:64] FLAG: --node-status-max-images="50" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780402 33867 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780411 33867 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780420 33867 flags.go:64] FLAG: --pod-cidr="" Feb 19 
03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780429 33867 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780445 33867 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780454 33867 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780463 33867 flags.go:64] FLAG: --pods-per-core="0" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780472 33867 flags.go:64] FLAG: --port="10250" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780481 33867 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780490 33867 flags.go:64] FLAG: --provider-id="" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780499 33867 flags.go:64] FLAG: --qos-reserved="" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780508 33867 flags.go:64] FLAG: --read-only-port="10255" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780517 33867 flags.go:64] FLAG: --register-node="true" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780526 33867 flags.go:64] FLAG: --register-schedulable="true" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780535 33867 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780551 33867 flags.go:64] FLAG: --registry-burst="10" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780560 33867 flags.go:64] FLAG: --registry-qps="5" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780569 33867 flags.go:64] FLAG: --reserved-cpus="" Feb 19 03:23:14.794078 master-0 kubenswrapper[33867]: I0219 03:23:14.780578 33867 flags.go:64] FLAG: --reserved-memory="" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780589 33867 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780600 33867 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780658 33867 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780668 33867 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780681 33867 flags.go:64] FLAG: --runonce="false" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780694 33867 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780707 33867 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780719 33867 flags.go:64] FLAG: --seccomp-default="false" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780731 33867 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780742 33867 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780754 33867 
flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780766 33867 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780778 33867 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780790 33867 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780801 33867 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780810 33867 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780819 33867 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780829 33867 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780838 33867 flags.go:64] FLAG: --system-cgroups="" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780847 33867 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780862 33867 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780871 33867 flags.go:64] FLAG: --tls-cert-file="" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780880 33867 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780933 33867 flags.go:64] FLAG: --tls-min-version="" Feb 19 03:23:14.795823 master-0 kubenswrapper[33867]: I0219 03:23:14.780942 33867 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: I0219 03:23:14.780952 33867 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: I0219 03:23:14.780961 33867 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: I0219 03:23:14.780971 33867 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: I0219 03:23:14.780980 33867 flags.go:64] FLAG: --v="2" Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: I0219 03:23:14.780992 33867 flags.go:64] FLAG: --version="false" Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: I0219 03:23:14.781004 33867 flags.go:64] FLAG: --vmodule="" Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: I0219 03:23:14.781015 33867 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: I0219 03:23:14.781024 33867 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781243 33867 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781281 33867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781292 33867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781301 33867 feature_gate.go:330] unrecognized feature 
gate: VSphereMultiNetworks Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781309 33867 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781318 33867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781326 33867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781335 33867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781343 33867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781351 33867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781359 33867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781367 33867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 03:23:14.797724 master-0 kubenswrapper[33867]: W0219 03:23:14.781376 33867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781386 33867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781396 33867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781404 33867 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781419 33867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781428 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781436 33867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781443 33867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781451 33867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781459 33867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781467 33867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781475 33867 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781482 33867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781490 33867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781498 33867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation 
Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781506 33867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781517 33867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781528 33867 feature_gate.go:330] unrecognized feature gate: Example Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781536 33867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 03:23:14.799213 master-0 kubenswrapper[33867]: W0219 03:23:14.781547 33867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781556 33867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781565 33867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781574 33867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781583 33867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781591 33867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781600 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781608 33867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781616 33867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781625 33867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781633 33867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781642 33867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781651 33867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781659 33867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781667 33867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781675 33867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781683 33867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781694 33867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781702 33867 feature_gate.go:330] unrecognized feature gate: 
ClusterMonitoringConfig Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781710 33867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 03:23:14.801049 master-0 kubenswrapper[33867]: W0219 03:23:14.781718 33867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781726 33867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781733 33867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781741 33867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781749 33867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781757 33867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781765 33867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781773 33867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781780 33867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781788 33867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781796 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781805 33867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781815 33867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781825 33867 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781835 33867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781849 33867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781862 33867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781873 33867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781885 33867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781897 33867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 03:23:14.801867 master-0 kubenswrapper[33867]: W0219 03:23:14.781908 33867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: I0219 03:23:14.781935 33867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: I0219 03:23:14.789529 33867 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: I0219 03:23:14.789582 33867 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789872 33867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789888 33867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789898 33867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789910 33867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789919 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789927 33867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789937 33867 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789946 33867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789956 33867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789966 33867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789975 33867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 03:23:14.803022 master-0 kubenswrapper[33867]: W0219 03:23:14.789993 33867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 
03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790002 33867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790011 33867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790106 33867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790140 33867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790149 33867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790158 33867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790167 33867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790175 33867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790189 33867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790199 33867 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790332 33867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.790344 33867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.791393 33867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.791413 33867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.791422 33867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.791431 33867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.791439 33867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.791448 33867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 03:23:14.804308 master-0 kubenswrapper[33867]: W0219 03:23:14.791457 33867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791470 33867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791481 33867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791490 33867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791499 33867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791507 33867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791519 33867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791529 33867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791537 33867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791545 33867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791553 33867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791561 33867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791569 33867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791577 33867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791585 33867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791594 33867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791602 33867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791610 33867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791618 33867 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791627 33867 feature_gate.go:330] unrecognized feature gate: Example Feb 19 03:23:14.805360 master-0 kubenswrapper[33867]: W0219 03:23:14.791635 33867 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791646 33867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791657 33867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791668 33867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791679 33867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791687 33867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791696 33867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791704 33867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791712 33867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791722 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791731 33867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791739 33867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791747 33867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791755 33867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791763 33867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791771 33867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791779 33867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791787 33867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791795 33867 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 03:23:14.806032 master-0 kubenswrapper[33867]: W0219 03:23:14.791803 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.791811 33867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.791820 33867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: I0219 03:23:14.791835 33867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792087 33867 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792099 33867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792108 33867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792117 33867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792125 33867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792134 33867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792145 33867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792155 33867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792166 33867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792175 33867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792184 33867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 03:23:14.806939 master-0 kubenswrapper[33867]: W0219 03:23:14.792194 33867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792202 33867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792211 33867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792220 33867 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792230 33867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792238 33867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792246 33867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792290 33867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792300 33867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792308 33867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792316 33867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792323 33867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792333 33867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 
03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792341 33867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792349 33867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792357 33867 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792365 33867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792372 33867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792380 33867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792388 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 03:23:14.807837 master-0 kubenswrapper[33867]: W0219 03:23:14.792396 33867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792406 33867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792416 33867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792425 33867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792434 33867 feature_gate.go:330] unrecognized feature gate: Example Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792443 33867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792451 33867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792460 33867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792468 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792478 33867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792488 33867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792497 33867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792507 33867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792515 33867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792523 33867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792531 33867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792539 33867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792549 33867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792559 33867 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 03:23:14.808769 master-0 kubenswrapper[33867]: W0219 03:23:14.792568 33867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792577 33867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792585 33867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792593 33867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792601 33867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792609 33867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792617 33867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792625 33867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792633 33867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792640 33867 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792648 33867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792656 33867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792664 33867 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792672 33867 
feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792680 33867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792688 33867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792696 33867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792704 33867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792712 33867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 03:23:14.809904 master-0 kubenswrapper[33867]: W0219 03:23:14.792720 33867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: W0219 03:23:14.792728 33867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: W0219 03:23:14.792736 33867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: I0219 03:23:14.792750 33867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: I0219 03:23:14.793023 33867 server.go:940] "Client rotation is on, will bootstrap in background" Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: I0219 03:23:14.795884 33867 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: I0219 03:23:14.796023 33867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
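The repeated "feature gates: {map[...]}" entries above record the kubelet's effective feature-gate map once flag and config parsing settles. As a minimal sketch for readers working through this log (not part of the journal itself), the Python snippet below shows one way to pull that map out of a captured line; the regex, the sample string, and the function name are assumptions made purely for illustration.

    # Illustrative only: extract the kubelet's effective feature gates from a
    # journal line of the form "... feature_gate.go:386] feature gates: {map[Name:bool ...]}".
    import re
    from typing import Dict

    GATE_LINE = re.compile(r"feature gates: \{map\[(?P<body>[^\]]*)\]\}")

    def parse_feature_gates(line: str) -> Dict[str, bool]:
        """Return {gate_name: enabled} parsed from a 'feature gates: {map[...]}' log line."""
        m = GATE_LINE.search(line)
        if not m:
            return {}
        gates: Dict[str, bool] = {}
        for pair in m.group("body").split():
            name, _, value = pair.partition(":")
            gates[name] = (value == "true")
        return gates

    if __name__ == "__main__":
        # Sample shaped like the entries above; values here are just an example.
        sample = ("I0219 03:23:14.792750 33867 feature_gate.go:386] feature gates: "
                  "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}")
        print(parse_feature_gates(sample))
        # -> {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}

Comparing the parsed maps from each occurrence is a quick way to confirm that the gate set stays identical across the kubelet's repeated parsing passes, as it does in the entries above.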
Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: I0219 03:23:14.796443 33867 server.go:997] "Starting client certificate rotation" Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: I0219 03:23:14.796510 33867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: I0219 03:23:14.796714 33867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-20 02:55:16 +0000 UTC, rotation deadline is 2026-02-19 21:06:04.507098756 +0000 UTC Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: I0219 03:23:14.796799 33867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h42m49.71030545s for next certificate rotation Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: I0219 03:23:14.797634 33867 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 03:23:14.810952 master-0 kubenswrapper[33867]: I0219 03:23:14.799988 33867 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 03:23:14.811548 master-0 kubenswrapper[33867]: I0219 03:23:14.802713 33867 log.go:25] "Validated CRI v1 runtime API" Feb 19 03:23:14.811548 master-0 kubenswrapper[33867]: I0219 03:23:14.806636 33867 log.go:25] "Validated CRI v1 image API" Feb 19 03:23:14.811548 master-0 kubenswrapper[33867]: I0219 03:23:14.807820 33867 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 19 03:23:14.822053 master-0 kubenswrapper[33867]: I0219 03:23:14.821981 33867 fs.go:135] Filesystem UUIDs: map[4837cee5-4017-4a37-b994-9fb38a99ee26:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 19 03:23:14.823405 master-0 kubenswrapper[33867]: I0219 03:23:14.822030 33867 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/05f5dd54ba8bf6eb7c86554d066ae4a9cf207bcf69ebdccd0c79c526a47c6239/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/05f5dd54ba8bf6eb7c86554d066ae4a9cf207bcf69ebdccd0c79c526a47c6239/userdata/shm major:0 minor:141 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1082261815c7e19c2e96bf70a147ae8ad719192a52e2b659efb185314dc947a8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1082261815c7e19c2e96bf70a147ae8ad719192a52e2b659efb185314dc947a8/userdata/shm major:0 minor:457 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1661a18dd33340919d8a88e5f91b59d5c684dbe01a019f25562e9696f9314f09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1661a18dd33340919d8a88e5f91b59d5c684dbe01a019f25562e9696f9314f09/userdata/shm major:0 minor:297 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1760667bc1ae6e6c0373f38881f9d459051273b2be065a4f5aefaa03ffb1434b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1760667bc1ae6e6c0373f38881f9d459051273b2be065a4f5aefaa03ffb1434b/userdata/shm major:0 minor:586 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1bcf44075958c0ed97fdf56576e694d0a80dc968641ca6c609aa09a703fa5b8a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1bcf44075958c0ed97fdf56576e694d0a80dc968641ca6c609aa09a703fa5b8a/userdata/shm major:0 minor:140 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1be6fbce0be2d2a600566ad7a089efc0d76906ae49f8bc93720c22ae930e1161/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1be6fbce0be2d2a600566ad7a089efc0d76906ae49f8bc93720c22ae930e1161/userdata/shm major:0 minor:1180 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1bf12b7aaff989dde65f3016c4b888d0b3e38d175867b33d7c6f63dd79bf7d2c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1bf12b7aaff989dde65f3016c4b888d0b3e38d175867b33d7c6f63dd79bf7d2c/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/215b1ea5727b014cfc6dc502ee238518328ed6ffbcea54f35ba8164d0dcfcada/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/215b1ea5727b014cfc6dc502ee238518328ed6ffbcea54f35ba8164d0dcfcada/userdata/shm major:0 minor:937 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/270ee55e27188738f11e238739f68e6ee4947520aca0c90df01eaa05dc4ab81c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/270ee55e27188738f11e238739f68e6ee4947520aca0c90df01eaa05dc4ab81c/userdata/shm major:0 minor:105 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/288c3a57623280dd907a240618bbdd493e84db9c6fc6a9b8ebbd7c2959445df1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/288c3a57623280dd907a240618bbdd493e84db9c6fc6a9b8ebbd7c2959445df1/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2bcb98d1b68dc897f73c1a855233e9b02c59d6a1d42e70e57ef6fecb191978ff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2bcb98d1b68dc897f73c1a855233e9b02c59d6a1d42e70e57ef6fecb191978ff/userdata/shm major:0 minor:754 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e210c3c8004e773a0bdb2dc099fdf8b85ea7ff84b49ad9f3a84bc8f3cd8ea30/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e210c3c8004e773a0bdb2dc099fdf8b85ea7ff84b49ad9f3a84bc8f3cd8ea30/userdata/shm major:0 minor:1254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e6d01c66ad4ba09830602801e48d0eb21df8043e491a9222312021d0c71dccd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e6d01c66ad4ba09830602801e48d0eb21df8043e491a9222312021d0c71dccd/userdata/shm major:0 minor:1165 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/31f0caeb4e0573e4a148b9c44d3f2f8155d69135fdefa05921e7738e4aa0f4e6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/31f0caeb4e0573e4a148b9c44d3f2f8155d69135fdefa05921e7738e4aa0f4e6/userdata/shm major:0 minor:767 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/37b14f21eea6ae068c6ab319848a3075fde8aacf4bdcecd0e6ca1c48ebc11e9a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/37b14f21eea6ae068c6ab319848a3075fde8aacf4bdcecd0e6ca1c48ebc11e9a/userdata/shm major:0 minor:475 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/383b491b9f27144fe9b7a96c0308977fdc414552864afb1ce6b22fbacc40b8ac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/383b491b9f27144fe9b7a96c0308977fdc414552864afb1ce6b22fbacc40b8ac/userdata/shm major:0 minor:1238 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3b52f4ccabc096d80ff39ba947c7023e50c18db78664ec7aa1e9ea4675a4b974/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3b52f4ccabc096d80ff39ba947c7023e50c18db78664ec7aa1e9ea4675a4b974/userdata/shm major:0 minor:929 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3d24aaf417d59fb450308aa24f5e0ecd8e28bc338934b0ef78ad3e79bccb9318/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3d24aaf417d59fb450308aa24f5e0ecd8e28bc338934b0ef78ad3e79bccb9318/userdata/shm major:0 minor:174 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/40c5200e9b9335dc4fde8e4b8c2702394db4fe9784008c565be0de314808268d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/40c5200e9b9335dc4fde8e4b8c2702394db4fe9784008c565be0de314808268d/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/45197931f8b0fad8d3f78bcaed3a231713e7d574cb0f64bc503525eeb9919ca8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/45197931f8b0fad8d3f78bcaed3a231713e7d574cb0f64bc503525eeb9919ca8/userdata/shm major:0 minor:491 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/489ce9d0a231fe744fe2609ac45c676f913cd59253cbd1654f71c13c5ab7ceef/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/489ce9d0a231fe744fe2609ac45c676f913cd59253cbd1654f71c13c5ab7ceef/userdata/shm major:0 minor:579 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48d1ac933722c354749db6ab6a42199918879d26d241d24eef57eac8e0adbd70/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48d1ac933722c354749db6ab6a42199918879d26d241d24eef57eac8e0adbd70/userdata/shm major:0 minor:935 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48d4606b470a81b62815d5eff7b40ce10241cd1db0d833c19e9920f2538a3f32/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48d4606b470a81b62815d5eff7b40ce10241cd1db0d833c19e9920f2538a3f32/userdata/shm major:0 minor:1136 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/499dfae4e38579ddc7dbe458f0d782fd925c68bc3e1e204ec2926928e4d6fb86/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/499dfae4e38579ddc7dbe458f0d782fd925c68bc3e1e204ec2926928e4d6fb86/userdata/shm major:0 minor:412 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a4075ac7bf30cf0807cbb607815178772dc5e91f6a2b4d72d3b7f7d98bacf78/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a4075ac7bf30cf0807cbb607815178772dc5e91f6a2b4d72d3b7f7d98bacf78/userdata/shm major:0 minor:1140 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a9aeacf90564eae1348bcdc7f41abed1c44fe0cbc7faf0930e743893a5e4611/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a9aeacf90564eae1348bcdc7f41abed1c44fe0cbc7faf0930e743893a5e4611/userdata/shm major:0 minor:1119 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/4ff0199536e5f54a5bdaa7868fb5ea7e61ffa31ff819b0546dd411cddd134f43/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4ff0199536e5f54a5bdaa7868fb5ea7e61ffa31ff819b0546dd411cddd134f43/userdata/shm major:0 minor:90 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5305f7e6ea5f104f1b4e810f1ceec9db5f5fd632e430c871c365b093c1832c48/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5305f7e6ea5f104f1b4e810f1ceec9db5f5fd632e430c871c365b093c1832c48/userdata/shm major:0 minor:821 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/544bd972dc91af9025a1eea69f42f5c5c42aa6d851bb5566dd4ab554ab92d7e1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/544bd972dc91af9025a1eea69f42f5c5c42aa6d851bb5566dd4ab554ab92d7e1/userdata/shm major:0 minor:587 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5506ac36fbaf2416aa135b7e1945e22b7c62738888b7f9b117791bba76b3408f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5506ac36fbaf2416aa135b7e1945e22b7c62738888b7f9b117791bba76b3408f/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5c820d0ae9471b6671d41e47749616c410e4703c6cd54cc32cf06336c4e2c81b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5c820d0ae9471b6671d41e47749616c410e4703c6cd54cc32cf06336c4e2c81b/userdata/shm major:0 minor:1025 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5e2c5960bcaff754ff10d5f0bd77876e25896beaba961d7afb484f9be25cfe20/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5e2c5960bcaff754ff10d5f0bd77876e25896beaba961d7afb484f9be25cfe20/userdata/shm major:0 minor:582 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5f264243f9d37a0085ae08d6a429bf7d068aa6d2f402d16789c1248a2996b55b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5f264243f9d37a0085ae08d6a429bf7d068aa6d2f402d16789c1248a2996b55b/userdata/shm major:0 minor:574 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6098282b64423ad9dddb84a69efced826ff8c34354a14bb5812b294431de3af7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6098282b64423ad9dddb84a69efced826ff8c34354a14bb5812b294431de3af7/userdata/shm major:0 minor:564 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/61a11a661104fcf20e20292b60baae6791127267c4b1c5fced71911c81734966/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/61a11a661104fcf20e20292b60baae6791127267c4b1c5fced71911c81734966/userdata/shm major:0 minor:386 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/62011c22e1ac970c8b8da7b0bdd419d5d816510d4051805a82fcedbbc65b8c3c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/62011c22e1ac970c8b8da7b0bdd419d5d816510d4051805a82fcedbbc65b8c3c/userdata/shm major:0 minor:279 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/63a61882dcf77787697d30aeb41db64cf3a3a5917a3f53104880927ba62c1424/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/63a61882dcf77787697d30aeb41db64cf3a3a5917a3f53104880927ba62c1424/userdata/shm major:0 minor:1157 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7113d80392d29ba3714ca17e946cc57862288af6721d6bbfe7532c4452680bbe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7113d80392d29ba3714ca17e946cc57862288af6721d6bbfe7532c4452680bbe/userdata/shm major:0 minor:1103 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7201246ec91870addf10a9f35436bf3abda03d1a2eefd6894425648ac015fdbf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7201246ec91870addf10a9f35436bf3abda03d1a2eefd6894425648ac015fdbf/userdata/shm major:0 minor:294 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/75ebc0148d076f2cc0fe06e466687642989770890443a44d9864ba7cf21ec2cd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/75ebc0148d076f2cc0fe06e466687642989770890443a44d9864ba7cf21ec2cd/userdata/shm major:0 minor:395 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7a7a2b85bd49039ea82202ec9093218400fe6ba37620dacb89cb656ef0f6f1e1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7a7a2b85bd49039ea82202ec9093218400fe6ba37620dacb89cb656ef0f6f1e1/userdata/shm major:0 minor:927 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7c18b07966702439a57f42490f57b89c995ec81c7db0d363c2168675a894d498/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7c18b07966702439a57f42490f57b89c995ec81c7db0d363c2168675a894d498/userdata/shm major:0 minor:300 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7e8e2788d3f71b91ae59e0572e5bd8a6d561d26dc7f9a0c7368468679564cddb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7e8e2788d3f71b91ae59e0572e5bd8a6d561d26dc7f9a0c7368468679564cddb/userdata/shm major:0 minor:631 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/81ed4699f10fea30224a5472efb9432589611c0502019a2f9ffb24815fcdafb9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/81ed4699f10fea30224a5472efb9432589611c0502019a2f9ffb24815fcdafb9/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/87e7bba244435f8f2d510f4160bfbce671f2f502e5bbb65c6fef9f33ed868be9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/87e7bba244435f8f2d510f4160bfbce671f2f502e5bbb65c6fef9f33ed868be9/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f207fe64bef8b420052896b2bfb189ccc2b431030abfa5bd7579048d3c21b98/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f207fe64bef8b420052896b2bfb189ccc2b431030abfa5bd7579048d3c21b98/userdata/shm major:0 minor:421 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8fedd22b9da118be6af452faa704499daf6539b968c5fd646de69afe85423626/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8fedd22b9da118be6af452faa704499daf6539b968c5fd646de69afe85423626/userdata/shm major:0 minor:298 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/91f1c7bcd88e0a3be2b4b31028823b921a4268810f70c73edd3e94760f9af545/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/91f1c7bcd88e0a3be2b4b31028823b921a4268810f70c73edd3e94760f9af545/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/98805e3ec9d2d2f3839c03ed948de103105a5f1210afc18e423fd6e7cba8b344/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/98805e3ec9d2d2f3839c03ed948de103105a5f1210afc18e423fd6e7cba8b344/userdata/shm major:0 minor:428 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9e00ccb287dd8b9291c3306328c5788a23d37066197f78308e926a653d3929ef/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9e00ccb287dd8b9291c3306328c5788a23d37066197f78308e926a653d3929ef/userdata/shm major:0 minor:420 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9f34b77802d18424b8b09571a545a52e9fcc1be93f02c10a74325b38bef31cc8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9f34b77802d18424b8b09571a545a52e9fcc1be93f02c10a74325b38bef31cc8/userdata/shm major:0 minor:533 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9fccc7356f4c0fc6ca6003f16e1a3945d087e393bfff22e084766d407a7387c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9fccc7356f4c0fc6ca6003f16e1a3945d087e393bfff22e084766d407a7387c5/userdata/shm major:0 minor:1027 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a28c1fb386c96884c0fa554c8dd9df374181814fab6413b91a2304727463f391/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a28c1fb386c96884c0fa554c8dd9df374181814fab6413b91a2304727463f391/userdata/shm major:0 minor:285 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a2cbe0145530499aa6f2ee8bea7d745549e79916137a2b455baf26f9bb8aca75/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a2cbe0145530499aa6f2ee8bea7d745549e79916137a2b455baf26f9bb8aca75/userdata/shm major:0 minor:1031 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a9581d1c5f8271fb515c6059b20bafd4d644e9f547a789be9ede7138665e2db3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a9581d1c5f8271fb515c6059b20bafd4d644e9f547a789be9ede7138665e2db3/userdata/shm major:0 minor:1069 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a97067053251ed5fdadac8ab4f77e00bdc2868f3bbfa6100d974d3529e1d0acb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a97067053251ed5fdadac8ab4f77e00bdc2868f3bbfa6100d974d3529e1d0acb/userdata/shm major:0 minor:578 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a998a368841f373282c4c48f7a0c3385bacc2f3f776a934e2fcfec35d45e83ad/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a998a368841f373282c4c48f7a0c3385bacc2f3f776a934e2fcfec35d45e83ad/userdata/shm major:0 minor:1283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/acb5de46f3e25ef76d6a8af08f2a213b03e16ebf52f46ac28fa38e4361f6b5d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/acb5de46f3e25ef76d6a8af08f2a213b03e16ebf52f46ac28fa38e4361f6b5d6/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/adefbbde4867112d23ee79a46cdbf443364c4401d65d3a59d065817251804bf8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/adefbbde4867112d23ee79a46cdbf443364c4401d65d3a59d065817251804bf8/userdata/shm major:0 minor:120 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b1a4a1b2ee116e9b33918fc922709316e70b8330853b6fcb741a4accb5e6b8be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b1a4a1b2ee116e9b33918fc922709316e70b8330853b6fcb741a4accb5e6b8be/userdata/shm major:0 minor:164 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b1ed6c4c3d12558a0c8f33c888f0552999de0d4f4d9c1efc8cc0619df634d5b4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b1ed6c4c3d12558a0c8f33c888f0552999de0d4f4d9c1efc8cc0619df634d5b4/userdata/shm major:0 minor:1251 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b7d96d2b840dcb05cea8fd6a137b484ba6109d3fc00e9d95d9aeb1de00554068/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b7d96d2b840dcb05cea8fd6a137b484ba6109d3fc00e9d95d9aeb1de00554068/userdata/shm major:0 minor:576 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ba26fc62b4c67c05d10c1181444ae82a957f739cc50fff1b515c7ee8cf0d6126/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ba26fc62b4c67c05d10c1181444ae82a957f739cc50fff1b515c7ee8cf0d6126/userdata/shm major:0 minor:832 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc3fc06d095cd3d772a346e20eb25cbebb8c5a43f1aa9a2b39dd85c115bbfd06/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc3fc06d095cd3d772a346e20eb25cbebb8c5a43f1aa9a2b39dd85c115bbfd06/userdata/shm major:0 minor:1093 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bfb8eb142f502ea7593a0533e3254ede9b8f9f56754df54ad25f7a0adb710480/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bfb8eb142f502ea7593a0533e3254ede9b8f9f56754df54ad25f7a0adb710480/userdata/shm major:0 minor:309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c20f637b2a13dfb247a3370a860f01309bff13bd9c879b2139d436b648ea6361/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c20f637b2a13dfb247a3370a860f01309bff13bd9c879b2139d436b648ea6361/userdata/shm major:0 minor:838 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c34b9543f3e2068cde8c2b7bd9a04ad41c16f834956cffb18edf070cdda1c25d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c34b9543f3e2068cde8c2b7bd9a04ad41c16f834956cffb18edf070cdda1c25d/userdata/shm major:0 minor:334 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cbe8c564562ad68c8d52a661bafedb53468d82eca60669d5f75aa1269bf0c5a6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cbe8c564562ad68c8d52a661bafedb53468d82eca60669d5f75aa1269bf0c5a6/userdata/shm major:0 minor:182 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cf1ab0e9895c4d3c13750afafa4343da7c7b17306bc49f279de7d38a89a47c8d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf1ab0e9895c4d3c13750afafa4343da7c7b17306bc49f279de7d38a89a47c8d/userdata/shm major:0 minor:764 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d896e197c19c3e11f13f6c1320c71d5019f5e0db2f0e2d3534740ed3aaee68c7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d896e197c19c3e11f13f6c1320c71d5019f5e0db2f0e2d3534740ed3aaee68c7/userdata/shm major:0 minor:567 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d8b8861a29ec4294bd11b25781775394a6ac15d030424306c0b690edecc2b3b2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d8b8861a29ec4294bd11b25781775394a6ac15d030424306c0b690edecc2b3b2/userdata/shm major:0 minor:711 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/da07760d7571f3892e97b1fc3d10821bdf692b5194a6d30a2c724a9ebebef870/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da07760d7571f3892e97b1fc3d10821bdf692b5194a6d30a2c724a9ebebef870/userdata/shm major:0 minor:423 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e2878c5bde889c9b5090839b4189995b59bf2a7eaa7045a344bf1f8020b8727b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e2878c5bde889c9b5090839b4189995b59bf2a7eaa7045a344bf1f8020b8727b/userdata/shm major:0 minor:1252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/eba23b843b06a31c02fbe2e5edf93d18b7d3dc9682c0e2415a4ef18d5dc94d9a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/eba23b843b06a31c02fbe2e5edf93d18b7d3dc9682c0e2415a4ef18d5dc94d9a/userdata/shm major:0 minor:573 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ed8577f4b5f593fdd1508aeb09fd5534fb09a47c902e95af8327061b1713177b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ed8577f4b5f593fdd1508aeb09fd5534fb09a47c902e95af8327061b1713177b/userdata/shm major:0 minor:956 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f203fd813bb9fb33eb11a0b15b04ff2b9379aba784360def5e2df17965add9cd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f203fd813bb9fb33eb11a0b15b04ff2b9379aba784360def5e2df17965add9cd/userdata/shm major:0 minor:823 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f366572292d05f4ad2d57a2dd6026d019460bb016409712b7a89b5deefa6fc1b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f366572292d05f4ad2d57a2dd6026d019460bb016409712b7a89b5deefa6fc1b/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~projected/kube-api-access-qrksf:{mountpoint:/var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~projected/kube-api-access-qrksf major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0664d88f-f697-4182-93cd-f208ff6f3ac2/volumes/kubernetes.io~projected/kube-api-access-99z6r:{mountpoint:/var/lib/kubelet/pods/0664d88f-f697-4182-93cd-f208ff6f3ac2/volumes/kubernetes.io~projected/kube-api-access-99z6r major:0 minor:759 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0664d88f-f697-4182-93cd-f208ff6f3ac2/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/0664d88f-f697-4182-93cd-f208ff6f3ac2/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:753 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06898300-c6e2-4d64-9ebf-d20f4338cccc/volumes/kubernetes.io~projected/kube-api-access-rnq2j:{mountpoint:/var/lib/kubelet/pods/06898300-c6e2-4d64-9ebf-d20f4338cccc/volumes/kubernetes.io~projected/kube-api-access-rnq2j major:0 minor:391 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06898300-c6e2-4d64-9ebf-d20f4338cccc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/06898300-c6e2-4d64-9ebf-d20f4338cccc/volumes/kubernetes.io~secret/serving-cert major:0 minor:347 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~projected/kube-api-access-crz8x:{mountpoint:/var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~projected/kube-api-access-crz8x major:0 minor:137 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:136 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18b29e37-cda9-41a8-a910-3d8f74be3cf3/volumes/kubernetes.io~projected/kube-api-access-bkfcl:{mountpoint:/var/lib/kubelet/pods/18b29e37-cda9-41a8-a910-3d8f74be3cf3/volumes/kubernetes.io~projected/kube-api-access-bkfcl major:0 minor:385 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18b29e37-cda9-41a8-a910-3d8f74be3cf3/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/18b29e37-cda9-41a8-a910-3d8f74be3cf3/volumes/kubernetes.io~secret/signing-key major:0 minor:380 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ba0c261-497c-4236-8f14-98ce5c16af59/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1ba0c261-497c-4236-8f14-98ce5c16af59/volumes/kubernetes.io~projected/kube-api-access major:0 minor:739 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bab5125-f4d7-4940-891f-9bb6a2145fac/volumes/kubernetes.io~projected/kube-api-access-7rhlw:{mountpoint:/var/lib/kubelet/pods/1bab5125-f4d7-4940-891f-9bb6a2145fac/volumes/kubernetes.io~projected/kube-api-access-7rhlw major:0 minor:695 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bab5125-f4d7-4940-891f-9bb6a2145fac/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/1bab5125-f4d7-4940-891f-9bb6a2145fac/volumes/kubernetes.io~secret/proxy-tls major:0 minor:687 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962/volumes/kubernetes.io~projected/kube-api-access-h6zxf:{mountpoint:/var/lib/kubelet/pods/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962/volumes/kubernetes.io~projected/kube-api-access-h6zxf major:0 minor:934 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962/volumes/kubernetes.io~secret/proxy-tls major:0 minor:933 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~projected/kube-api-access-nqt9k:{mountpoint:/var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~projected/kube-api-access-nqt9k major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~projected/kube-api-access-pn4dg:{mountpoint:/var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~projected/kube-api-access-pn4dg major:0 minor:1237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 
minor:1236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/255784ad-b52a-4c5c-ad15-278865ee2ccb/volumes/kubernetes.io~projected/kube-api-access-hxsxw:{mountpoint:/var/lib/kubelet/pods/255784ad-b52a-4c5c-ad15-278865ee2ccb/volumes/kubernetes.io~projected/kube-api-access-hxsxw major:0 minor:955 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/255784ad-b52a-4c5c-ad15-278865ee2ccb/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/255784ad-b52a-4c5c-ad15-278865ee2ccb/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:1280 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2576028c-40d8-4ef4-ba41-a5aff01f2ed3/volumes/kubernetes.io~projected/kube-api-access-tmwjp:{mountpoint:/var/lib/kubelet/pods/2576028c-40d8-4ef4-ba41-a5aff01f2ed3/volumes/kubernetes.io~projected/kube-api-access-tmwjp major:0 minor:547 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2576028c-40d8-4ef4-ba41-a5aff01f2ed3/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/2576028c-40d8-4ef4-ba41-a5aff01f2ed3/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:544 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2576028c-40d8-4ef4-ba41-a5aff01f2ed3/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/2576028c-40d8-4ef4-ba41-a5aff01f2ed3/volumes/kubernetes.io~secret/webhook-cert major:0 minor:471 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~projected/kube-api-access-vdxnk:{mountpoint:/var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~projected/kube-api-access-vdxnk major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:415 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:418 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~projected/kube-api-access-vjwbx:{mountpoint:/var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~projected/kube-api-access-vjwbx major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~secret/serving-cert major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33bb562f-84e7-4fcb-b008-416c09a5ecf0/volumes/kubernetes.io~projected/kube-api-access-5kwbk:{mountpoint:/var/lib/kubelet/pods/33bb562f-84e7-4fcb-b008-416c09a5ecf0/volumes/kubernetes.io~projected/kube-api-access-5kwbk major:0 minor:866 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33bb562f-84e7-4fcb-b008-416c09a5ecf0/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/33bb562f-84e7-4fcb-b008-416c09a5ecf0/volumes/kubernetes.io~secret/cert major:0 
minor:1232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~projected/kube-api-access-vzpth:{mountpoint:/var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~projected/kube-api-access-vzpth major:0 minor:281 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~secret/serving-cert major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3fab5bbd-672c-4e18-9c1e-438e2360bc54/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3fab5bbd-672c-4e18-9c1e-438e2360bc54/volumes/kubernetes.io~projected/kube-api-access major:0 minor:489 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43560ec3-3526-40e1-aeb7-e3137a99171d/volumes/kubernetes.io~projected/kube-api-access-j4z8t:{mountpoint:/var/lib/kubelet/pods/43560ec3-3526-40e1-aeb7-e3137a99171d/volumes/kubernetes.io~projected/kube-api-access-j4z8t major:0 minor:1133 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43560ec3-3526-40e1-aeb7-e3137a99171d/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/43560ec3-3526-40e1-aeb7-e3137a99171d/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43560ec3-3526-40e1-aeb7-e3137a99171d/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/43560ec3-3526-40e1-aeb7-e3137a99171d/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1130 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~projected/kube-api-access major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~secret/serving-cert major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/494087b2-b532-4c62-89d5-b88a152fa5db/volumes/kubernetes.io~projected/kube-api-access-z4hzx:{mountpoint:/var/lib/kubelet/pods/494087b2-b532-4c62-89d5-b88a152fa5db/volumes/kubernetes.io~projected/kube-api-access-z4hzx major:0 minor:932 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/494087b2-b532-4c62-89d5-b88a152fa5db/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/494087b2-b532-4c62-89d5-b88a152fa5db/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:931 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~projected/kube-api-access-k6j8c:{mountpoint:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~projected/kube-api-access-k6j8c major:0 minor:282 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/etcd-client major:0 minor:256 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/serving-cert major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4fd49d14-d513-4f68-8a87-3cef8a033c58/volumes/kubernetes.io~projected/kube-api-access-5q4lp:{mountpoint:/var/lib/kubelet/pods/4fd49d14-d513-4f68-8a87-3cef8a033c58/volumes/kubernetes.io~projected/kube-api-access-5q4lp major:0 minor:329 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~projected/kube-api-access major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/546cf649-8e0d-4c8a-a197-412db42e36b6/volumes/kubernetes.io~projected/kube-api-access-htmbc:{mountpoint:/var/lib/kubelet/pods/546cf649-8e0d-4c8a-a197-412db42e36b6/volumes/kubernetes.io~projected/kube-api-access-htmbc major:0 minor:426 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58c6f5a2-c0a8-4636-a057-cedbe0151579/volumes/kubernetes.io~projected/kube-api-access-grhdv:{mountpoint:/var/lib/kubelet/pods/58c6f5a2-c0a8-4636-a057-cedbe0151579/volumes/kubernetes.io~projected/kube-api-access-grhdv major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58c6f5a2-c0a8-4636-a057-cedbe0151579/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/58c6f5a2-c0a8-4636-a057-cedbe0151579/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:556 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59cea4cb-6374-49b6-97b3-d8a19cc1860f/volumes/kubernetes.io~projected/kube-api-access-tc87d:{mountpoint:/var/lib/kubelet/pods/59cea4cb-6374-49b6-97b3-d8a19cc1860f/volumes/kubernetes.io~projected/kube-api-access-tc87d major:0 minor:925 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59cea4cb-6374-49b6-97b3-d8a19cc1860f/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/59cea4cb-6374-49b6-97b3-d8a19cc1860f/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:1250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4/volumes/kubernetes.io~projected/kube-api-access-bq48l:{mountpoint:/var/lib/kubelet/pods/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4/volumes/kubernetes.io~projected/kube-api-access-bq48l major:0 minor:926 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4/volumes/kubernetes.io~secret/serving-cert major:0 minor:867 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61abb34a-08f0-4438-9a89-c712b2048878/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/61abb34a-08f0-4438-9a89-c712b2048878/volumes/kubernetes.io~projected/kube-api-access major:0 minor:626 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61abb34a-08f0-4438-9a89-c712b2048878/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/61abb34a-08f0-4438-9a89-c712b2048878/volumes/kubernetes.io~secret/serving-cert major:0 minor:703 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/67624ad2-babb-4b0e-9599-99325c286b22/volumes/kubernetes.io~projected/kube-api-access-msl9t:{mountpoint:/var/lib/kubelet/pods/67624ad2-babb-4b0e-9599-99325c286b22/volumes/kubernetes.io~projected/kube-api-access-msl9t major:0 minor:559 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/67f4e002-26fb-41e3-abdb-f4928b6c561f/volumes/kubernetes.io~projected/kube-api-access-wqsbq:{mountpoint:/var/lib/kubelet/pods/67f4e002-26fb-41e3-abdb-f4928b6c561f/volumes/kubernetes.io~projected/kube-api-access-wqsbq major:0 minor:283 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/67f4e002-26fb-41e3-abdb-f4928b6c561f/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/67f4e002-26fb-41e3-abdb-f4928b6c561f/volumes/kubernetes.io~secret/metrics-tls major:0 minor:414 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6acd115e-71e1-4a50-8892-fc6ea2927fec/volumes/kubernetes.io~projected/kube-api-access-dlhnq:{mountpoint:/var/lib/kubelet/pods/6acd115e-71e1-4a50-8892-fc6ea2927fec/volumes/kubernetes.io~projected/kube-api-access-dlhnq major:0 minor:394 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6acd115e-71e1-4a50-8892-fc6ea2927fec/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6acd115e-71e1-4a50-8892-fc6ea2927fec/volumes/kubernetes.io~secret/serving-cert major:0 minor:359 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ae2cbe0-aa0a-4f26-994b-660fb962d995/volumes/kubernetes.io~projected/kube-api-access-46zzd:{mountpoint:/var/lib/kubelet/pods/6ae2cbe0-aa0a-4f26-994b-660fb962d995/volumes/kubernetes.io~projected/kube-api-access-46zzd major:0 minor:133 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ae2cbe0-aa0a-4f26-994b-660fb962d995/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/6ae2cbe0-aa0a-4f26-994b-660fb962d995/volumes/kubernetes.io~secret/metrics-certs major:0 minor:560 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7012676e-f35d-46e5-83e8-a63172dd076e/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/7012676e-f35d-46e5-83e8-a63172dd076e/volumes/kubernetes.io~projected/ca-certs major:0 minor:504 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7012676e-f35d-46e5-83e8-a63172dd076e/volumes/kubernetes.io~projected/kube-api-access-lm2wm:{mountpoint:/var/lib/kubelet/pods/7012676e-f35d-46e5-83e8-a63172dd076e/volumes/kubernetes.io~projected/kube-api-access-lm2wm major:0 minor:505 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7012676e-f35d-46e5-83e8-a63172dd076e/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/7012676e-f35d-46e5-83e8-a63172dd076e/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:431 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/75c58162-a0ba-40f4-8894-38f17dc2fb6d/volumes/kubernetes.io~projected/kube-api-access-gkz72:{mountpoint:/var/lib/kubelet/pods/75c58162-a0ba-40f4-8894-38f17dc2fb6d/volumes/kubernetes.io~projected/kube-api-access-gkz72 major:0 minor:550 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/75c58162-a0ba-40f4-8894-38f17dc2fb6d/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/75c58162-a0ba-40f4-8894-38f17dc2fb6d/volumes/kubernetes.io~secret/metrics-tls major:0 minor:563 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~projected/kube-api-access-bj9hn:{mountpoint:/var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~projected/kube-api-access-bj9hn major:0 minor:1023 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~secret/default-certificate major:0 minor:1022 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1020 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~secret/stats-auth major:0 minor:1016 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76529f4c-70b1-4fcb-ba48-ae929228f9fc/volumes/kubernetes.io~projected/kube-api-access-wfd6c:{mountpoint:/var/lib/kubelet/pods/76529f4c-70b1-4fcb-ba48-ae929228f9fc/volumes/kubernetes.io~projected/kube-api-access-wfd6c major:0 minor:827 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78702d1c-b5ab-4e00-92da-cb2513a72024/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/78702d1c-b5ab-4e00-92da-cb2513a72024/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:535 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78702d1c-b5ab-4e00-92da-cb2513a72024/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/78702d1c-b5ab-4e00-92da-cb2513a72024/volumes/kubernetes.io~empty-dir/tmp major:0 minor:472 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78702d1c-b5ab-4e00-92da-cb2513a72024/volumes/kubernetes.io~projected/kube-api-access-5pwp5:{mountpoint:/var/lib/kubelet/pods/78702d1c-b5ab-4e00-92da-cb2513a72024/volumes/kubernetes.io~projected/kube-api-access-5pwp5 major:0 minor:543 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~projected/kube-api-access-rn9d8:{mountpoint:/var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~projected/kube-api-access-rn9d8 major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~secret/serving-cert major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b137033-0db2-46c9-a526-f8234345e883/volumes/kubernetes.io~projected/kube-api-access-clddw:{mountpoint:/var/lib/kubelet/pods/7b137033-0db2-46c9-a526-f8234345e883/volumes/kubernetes.io~projected/kube-api-access-clddw major:0 minor:774 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b137033-0db2-46c9-a526-f8234345e883/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/7b137033-0db2-46c9-a526-f8234345e883/volumes/kubernetes.io~secret/proxy-tls major:0 minor:773 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7be6f9b5-fe27-4df5-b933-63bbb12f680c/volumes/kubernetes.io~projected/kube-api-access-mk722:{mountpoint:/var/lib/kubelet/pods/7be6f9b5-fe27-4df5-b933-63bbb12f680c/volumes/kubernetes.io~projected/kube-api-access-mk722 major:0 minor:1166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7be6f9b5-fe27-4df5-b933-63bbb12f680c/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/7be6f9b5-fe27-4df5-b933-63bbb12f680c/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1156 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d/volumes/kubernetes.io~projected/kube-api-access-qxfd9:{mountpoint:/var/lib/kubelet/pods/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d/volumes/kubernetes.io~projected/kube-api-access-qxfd9 major:0 minor:1068 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d/volumes/kubernetes.io~secret/certs major:0 minor:1059 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1060 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fde19c2-64b1-409c-ad9c-2bb213a1cc74/volumes/kubernetes.io~projected/kube-api-access-64lwt:{mountpoint:/var/lib/kubelet/pods/7fde19c2-64b1-409c-ad9c-2bb213a1cc74/volumes/kubernetes.io~projected/kube-api-access-64lwt major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80c48134-cb22-4cf9-b076-ce39af2f4113/volumes/kubernetes.io~projected/kube-api-access-2dlvj:{mountpoint:/var/lib/kubelet/pods/80c48134-cb22-4cf9-b076-ce39af2f4113/volumes/kubernetes.io~projected/kube-api-access-2dlvj major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80c48134-cb22-4cf9-b076-ce39af2f4113/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/80c48134-cb22-4cf9-b076-ce39af2f4113/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:557 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/858a717b-a44e-4b8d-9974-7451a89cf104/volumes/kubernetes.io~projected/kube-api-access-qghmn:{mountpoint:/var/lib/kubelet/pods/858a717b-a44e-4b8d-9974-7451a89cf104/volumes/kubernetes.io~projected/kube-api-access-qghmn major:0 minor:915 fsType:tmpfs blockSize:0} Feb 19 03:23:14.823788 master-0 kubenswrapper[33867]: /var/lib/kubelet/pods/858a717b-a44e-4b8d-9974-7451a89cf104/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/858a717b-a44e-4b8d-9974-7451a89cf104/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:1249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/volumes/kubernetes.io~projected/kube-api-access-jzxmv:{mountpoint:/var/lib/kubelet/pods/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/volumes/kubernetes.io~projected/kube-api-access-jzxmv major:0 minor:1139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1137 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1152 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f7d8fc8-c313-416f-b62b-b54db9944066/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/8f7d8fc8-c313-416f-b62b-b54db9944066/volumes/kubernetes.io~projected/ca-certs major:0 minor:763 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f7d8fc8-c313-416f-b62b-b54db9944066/volumes/kubernetes.io~projected/kube-api-access-9dkxh:{mountpoint:/var/lib/kubelet/pods/8f7d8fc8-c313-416f-b62b-b54db9944066/volumes/kubernetes.io~projected/kube-api-access-9dkxh major:0 minor:507 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92804daf-1fd0-4008-afff-4f9bc362990b/volumes/kubernetes.io~projected/kube-api-access-78j6f:{mountpoint:/var/lib/kubelet/pods/92804daf-1fd0-4008-afff-4f9bc362990b/volumes/kubernetes.io~projected/kube-api-access-78j6f major:0 minor:916 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92804daf-1fd0-4008-afff-4f9bc362990b/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/92804daf-1fd0-4008-afff-4f9bc362990b/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:1115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/98ac5423-b231-44e5-9545-424d635ed6ee/volumes/kubernetes.io~projected/kube-api-access-bq27v:{mountpoint:/var/lib/kubelet/pods/98ac5423-b231-44e5-9545-424d635ed6ee/volumes/kubernetes.io~projected/kube-api-access-bq27v major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/98ac5423-b231-44e5-9545-424d635ed6ee/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/98ac5423-b231-44e5-9545-424d635ed6ee/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:554 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/kube-api-access-cpdqx:{mountpoint:/var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/kube-api-access-cpdqx major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~secret/metrics-tls major:0 minor:416 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~projected/kube-api-access-8cm45:{mountpoint:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~projected/kube-api-access-8cm45 major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 
minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~projected/kube-api-access-kv24m:{mountpoint:/var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~projected/kube-api-access-kv24m major:0 minor:162 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~secret/webhook-cert major:0 minor:163 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/kube-api-access-txq5k:{mountpoint:/var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/kube-api-access-txq5k major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:417 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a676c43c-4e0a-4826-86c1-288260611b09/volumes/kubernetes.io~projected/kube-api-access-p9zww:{mountpoint:/var/lib/kubelet/pods/a676c43c-4e0a-4826-86c1-288260611b09/volumes/kubernetes.io~projected/kube-api-access-p9zww major:0 minor:990 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a676c43c-4e0a-4826-86c1-288260611b09/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/a676c43c-4e0a-4826-86c1-288260611b09/volumes/kubernetes.io~secret/cert major:0 minor:1175 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a71c6d42-5ff9-4e96-900c-6e2166bbc9e3/volumes/kubernetes.io~projected/kube-api-access-zrfgk:{mountpoint:/var/lib/kubelet/pods/a71c6d42-5ff9-4e96-900c-6e2166bbc9e3/volumes/kubernetes.io~projected/kube-api-access-zrfgk major:0 minor:1024 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~projected/kube-api-access-rrz8r:{mountpoint:/var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~projected/kube-api-access-rrz8r major:0 minor:744 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~secret/encryption-config major:0 minor:741 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~secret/etcd-client major:0 minor:743 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~secret/serving-cert major:0 minor:742 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af2be4f9-f632-4a72-8f39-c96954403edc/volumes/kubernetes.io~projected/kube-api-access-rhhg6:{mountpoint:/var/lib/kubelet/pods/af2be4f9-f632-4a72-8f39-c96954403edc/volumes/kubernetes.io~projected/kube-api-access-rhhg6 major:0 minor:366 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/af2be4f9-f632-4a72-8f39-c96954403edc/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/af2be4f9-f632-4a72-8f39-c96954403edc/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:89 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af5828ea-090f-4c8f-90e6-c4e405e69ec5/volumes/kubernetes.io~projected/kube-api-access-tb2v2:{mountpoint:/var/lib/kubelet/pods/af5828ea-090f-4c8f-90e6-c4e405e69ec5/volumes/kubernetes.io~projected/kube-api-access-tb2v2 major:0 minor:865 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af5828ea-090f-4c8f-90e6-c4e405e69ec5/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/af5828ea-090f-4c8f-90e6-c4e405e69ec5/volumes/kubernetes.io~secret/cert major:0 minor:857 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af5828ea-090f-4c8f-90e6-c4e405e69ec5/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/af5828ea-090f-4c8f-90e6-c4e405e69ec5/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:864 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~projected/kube-api-access-76css:{mountpoint:/var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~projected/kube-api-access-76css major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~secret/srv-cert major:0 minor:558 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~projected/kube-api-access-mj4rq:{mountpoint:/var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~projected/kube-api-access-mj4rq major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~secret/serving-cert major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c4ed0c32-c13f-4c72-b83f-9af19b2950a3/volumes/kubernetes.io~projected/kube-api-access-rkm2l:{mountpoint:/var/lib/kubelet/pods/c4ed0c32-c13f-4c72-b83f-9af19b2950a3/volumes/kubernetes.io~projected/kube-api-access-rkm2l major:0 minor:407 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~projected/kube-api-access-7n9vm:{mountpoint:/var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~projected/kube-api-access-7n9vm major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~secret/srv-cert major:0 minor:555 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~projected/kube-api-access-894cz:{mountpoint:/var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~projected/kube-api-access-894cz major:0 minor:566 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~secret/encryption-config major:0 minor:562 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~secret/etcd-client major:0 minor:561 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~secret/serving-cert major:0 minor:565 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~projected/kube-api-access-gbffz:{mountpoint:/var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~projected/kube-api-access-gbffz major:0 minor:66 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8f325fb-0075-4a18-ba7e-669ab19bc91a/volumes/kubernetes.io~projected/kube-api-access-jxvxh:{mountpoint:/var/lib/kubelet/pods/c8f325fb-0075-4a18-ba7e-669ab19bc91a/volumes/kubernetes.io~projected/kube-api-access-jxvxh major:0 minor:466 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca82f2e9-884e-49d1-9863-e87212d01edc/volumes/kubernetes.io~projected/kube-api-access-2btm8:{mountpoint:/var/lib/kubelet/pods/ca82f2e9-884e-49d1-9863-e87212d01edc/volumes/kubernetes.io~projected/kube-api-access-2btm8 major:0 minor:756 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7/volumes/kubernetes.io~projected/kube-api-access-r5wsp:{mountpoint:/var/lib/kubelet/pods/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7/volumes/kubernetes.io~projected/kube-api-access-r5wsp major:0 minor:128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2/volumes/kubernetes.io~projected/kube-api-access-dhmpd:{mountpoint:/var/lib/kubelet/pods/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2/volumes/kubernetes.io~projected/kube-api-access-dhmpd major:0 minor:284 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dabc3c9b-ed58-4fd4-8735-65d504fa299a/volumes/kubernetes.io~projected/kube-api-access-vw2vc:{mountpoint:/var/lib/kubelet/pods/dabc3c9b-ed58-4fd4-8735-65d504fa299a/volumes/kubernetes.io~projected/kube-api-access-vw2vc major:0 minor:815 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/decd8c56-e0f0-4119-917f-56652c8f8372/volumes/kubernetes.io~projected/kube-api-access-8tqm5:{mountpoint:/var/lib/kubelet/pods/decd8c56-e0f0-4119-917f-56652c8f8372/volumes/kubernetes.io~projected/kube-api-access-8tqm5 major:0 minor:263 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e2e81865-21fa-4e35-a870-738c13ac5b70/volumes/kubernetes.io~projected/kube-api-access-5tgff:{mountpoint:/var/lib/kubelet/pods/e2e81865-21fa-4e35-a870-738c13ac5b70/volumes/kubernetes.io~projected/kube-api-access-5tgff major:0 minor:1079 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2e81865-21fa-4e35-a870-738c13ac5b70/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/e2e81865-21fa-4e35-a870-738c13ac5b70/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1075 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2e81865-21fa-4e35-a870-738c13ac5b70/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/e2e81865-21fa-4e35-a870-738c13ac5b70/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1098 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec677f3d-06c4-4cf4-9f24-69894b9a9118/volumes/kubernetes.io~projected/kube-api-access-vh4lz:{mountpoint:/var/lib/kubelet/pods/ec677f3d-06c4-4cf4-9f24-69894b9a9118/volumes/kubernetes.io~projected/kube-api-access-vh4lz major:0 minor:1132 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec677f3d-06c4-4cf4-9f24-69894b9a9118/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/ec677f3d-06c4-4cf4-9f24-69894b9a9118/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1131 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec677f3d-06c4-4cf4-9f24-69894b9a9118/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/ec677f3d-06c4-4cf4-9f24-69894b9a9118/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ed2b5ced-d986-4622-9e0a-d39363629408/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/ed2b5ced-d986-4622-9e0a-d39363629408/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1021 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~projected/kube-api-access-8p8qd:{mountpoint:/var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~projected/kube-api-access-8p8qd major:0 minor:277 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~secret/serving-cert major:0 minor:258 fsType:tmpfs blockSize:0} overlay_0-1009:{mountpoint:/var/lib/containers/storage/overlay/956a599034fb9b67d407041c22fac8896638fe3282c252c90d2b485d926d213c/merged major:0 minor:1009 fsType:overlay blockSize:0} overlay_0-101:{mountpoint:/var/lib/containers/storage/overlay/8dcd55779507f7884fabeb812f5762d191a521501fd1231656e9819e7dc4fc02/merged major:0 minor:101 fsType:overlay blockSize:0} overlay_0-1011:{mountpoint:/var/lib/containers/storage/overlay/48abd2bab0f62a734e4ff046d14d0588d43838b789e9c859447d500843de7f24/merged major:0 minor:1011 fsType:overlay blockSize:0} overlay_0-1029:{mountpoint:/var/lib/containers/storage/overlay/7f030586603fb0e23b655e2cc50a33b000f598d27ec3c31f47b436fc9acdf354/merged major:0 minor:1029 fsType:overlay blockSize:0} overlay_0-1033:{mountpoint:/var/lib/containers/storage/overlay/943b6cf3dee271b40749d178c3f84981756be29478265ca34a4189ccd3d1eff4/merged major:0 minor:1033 fsType:overlay blockSize:0} 
overlay_0-1035:{mountpoint:/var/lib/containers/storage/overlay/1d1398a3346fc04cb14a146d09724f55733627d554b829dfad3d8e9615a296b4/merged major:0 minor:1035 fsType:overlay blockSize:0} overlay_0-1037:{mountpoint:/var/lib/containers/storage/overlay/0440184acb3d504b69bdcc8d2485929523646d2bcc26609393c7160972ae07eb/merged major:0 minor:1037 fsType:overlay blockSize:0} overlay_0-1038:{mountpoint:/var/lib/containers/storage/overlay/391a55c64f58a9359043c652bcb289f77042ddef51db2b9737bbe52c29306672/merged major:0 minor:1038 fsType:overlay blockSize:0} overlay_0-1042:{mountpoint:/var/lib/containers/storage/overlay/e91283c58b83a987ab33167b485beb0d892f9843b5f34c171eeae8637e9647bc/merged major:0 minor:1042 fsType:overlay blockSize:0} overlay_0-1046:{mountpoint:/var/lib/containers/storage/overlay/af3543c1e402faf11b736f31b0582b0c01ed95fe55755dc1b3bd2afc44e50302/merged major:0 minor:1046 fsType:overlay blockSize:0} overlay_0-1058:{mountpoint:/var/lib/containers/storage/overlay/4f2d86832de43122a37a3ddbe9853f4607729a2cc8892ca7830907f4f843a2be/merged major:0 minor:1058 fsType:overlay blockSize:0} overlay_0-107:{mountpoint:/var/lib/containers/storage/overlay/d51e405c52fda80fa839e713ee4a506d190140436239aea63864a21585834dfa/merged major:0 minor:107 fsType:overlay blockSize:0} overlay_0-1071:{mountpoint:/var/lib/containers/storage/overlay/f7860de27663061a4f1a7a717e36012a4079faa538bec3ce8160d753b40dccb5/merged major:0 minor:1071 fsType:overlay blockSize:0} overlay_0-1073:{mountpoint:/var/lib/containers/storage/overlay/65b62a3a953e50e6c88e468bfa47cc8aa3a92a61ccc1d4822fdb896713c716be/merged major:0 minor:1073 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/1a5f87a11979657d5dbb321ce7506e26975080f267f14426a1b0087806c80230/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1085:{mountpoint:/var/lib/containers/storage/overlay/3fccd7616d8d4fe1cfee719bb0360e0af0779339db73fc8b051387b536d7b3bc/merged major:0 minor:1085 fsType:overlay blockSize:0} overlay_0-109:{mountpoint:/var/lib/containers/storage/overlay/333695d44fb4dea66d3838323b6bae6e6e7cb9b63c79baabfc468291ab337fbc/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/e6c9164b4126fafe56ed0ec58d4415ca63f4576aa99cdbf510551d2cab646728/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1096:{mountpoint:/var/lib/containers/storage/overlay/3201db55357ee1a19fd8b08aa788fe45df136ea91f2f0ab85fbb568510f63f00/merged major:0 minor:1096 fsType:overlay blockSize:0} overlay_0-1105:{mountpoint:/var/lib/containers/storage/overlay/2954fe307cbcde81cd0d893e9411b9994e5d62b8a073fd6d96f3b42333b2e1ab/merged major:0 minor:1105 fsType:overlay blockSize:0} overlay_0-1107:{mountpoint:/var/lib/containers/storage/overlay/d451c63933997284a19eb33b04037e7682c52f092befea87fa361b663d28abd4/merged major:0 minor:1107 fsType:overlay blockSize:0} overlay_0-1109:{mountpoint:/var/lib/containers/storage/overlay/045302cd9b40a0166be70f7391168a9e5bc9a2df4aa7f7a2c7c0b542be6811cc/merged major:0 minor:1109 fsType:overlay blockSize:0} overlay_0-1121:{mountpoint:/var/lib/containers/storage/overlay/0f4fc86e581fb40e7e02b7de307ad82b709b8fc9f17a2e34bc6f8f91929ff131/merged major:0 minor:1121 fsType:overlay blockSize:0} overlay_0-1123:{mountpoint:/var/lib/containers/storage/overlay/afd002649292045aae373836cc24086babb3e8cb0a11d77fa58c6c7f7bdb31c4/merged major:0 minor:1123 fsType:overlay blockSize:0} 
overlay_0-1134:{mountpoint:/var/lib/containers/storage/overlay/0781f86f81a20ab90780dbb6d223b84c3295e2c802430309791fa5088998da97/merged major:0 minor:1134 fsType:overlay blockSize:0} overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/767652b538ee5e8bdd0ba6f69f9a2258bfe5bf1273f7397b706eebe2d5bba866/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-1143:{mountpoint:/var/lib/containers/storage/overlay/4c46c1f42e86cee87aafe0f7726edf0d81cb6e994b969ecaf928d3c497272d2d/merged major:0 minor:1143 fsType:overlay blockSize:0} overlay_0-1145:{mountpoint:/var/lib/containers/storage/overlay/626c61405381ec2755e7e8cb318f118a61c3897d33f25daea7e94a2638fb1c7b/merged major:0 minor:1145 fsType:overlay blockSize:0} overlay_0-1147:{mountpoint:/var/lib/containers/storage/overlay/4bd4b6747a769e192764b754d767ca101e3469c618ab04f2cb04dcdd273f9c20/merged major:0 minor:1147 fsType:overlay blockSize:0} overlay_0-1154:{mountpoint:/var/lib/containers/storage/overlay/2356eb192e4f71e3c20119dc5bf5875f1faf2cff4bd65a01fe998d9503615e72/merged major:0 minor:1154 fsType:overlay blockSize:0} overlay_0-1168:{mountpoint:/var/lib/containers/storage/overlay/c9a37367b38ea3c073b9cfdf2455212bd8bb2c0c3fc2c2dfdf443ca2cdbaa23d/merged major:0 minor:1168 fsType:overlay blockSize:0} overlay_0-1171:{mountpoint:/var/lib/containers/storage/overlay/28e5625afa0c1587e9e141e9dc1ba5f8104282cc3b2eedb1c944714c055793d2/merged major:0 minor:1171 fsType:overlay blockSize:0} overlay_0-1173:{mountpoint:/var/lib/containers/storage/overlay/a608de2c9dbaabb2864132e0fc683efd9f9c1b07a762e1b149592912b1afaa56/merged major:0 minor:1173 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/978e6c3e01019c2c8bd9be469a5264af0eb0e1443f9de24fa0a1891420bba874/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-1182:{mountpoint:/var/lib/containers/storage/overlay/7b35f75b75be652374fbf43e47f764a174d36f7a5b41916df716ced072739eb4/merged major:0 minor:1182 fsType:overlay blockSize:0} overlay_0-1184:{mountpoint:/var/lib/containers/storage/overlay/1d96f78fe9c4f25c6c6031a20f702d8d3cfc7a4efed091d70994c231a77f77b3/merged major:0 minor:1184 fsType:overlay blockSize:0} overlay_0-1186:{mountpoint:/var/lib/containers/storage/overlay/007a8145d349d6d9c65b4a93295d0d949a0db64b76d7ec893b82a56dc82efef9/merged major:0 minor:1186 fsType:overlay blockSize:0} overlay_0-1190:{mountpoint:/var/lib/containers/storage/overlay/104bfb267f89abfc87f480668fb400e8e99ca89f20f6b3b7a5e9838de557d813/merged major:0 minor:1190 fsType:overlay blockSize:0} overlay_0-1192:{mountpoint:/var/lib/containers/storage/overlay/216f39a2c7acbc3ad4729a13708f298ceaf4788a89c9670e88b96289c73fe3f1/merged major:0 minor:1192 fsType:overlay blockSize:0} overlay_0-1210:{mountpoint:/var/lib/containers/storage/overlay/c45970626dc15bd0365a2cf6a15b0b059be2b07655c0df3cb1302dc528f58eff/merged major:0 minor:1210 fsType:overlay blockSize:0} overlay_0-1219:{mountpoint:/var/lib/containers/storage/overlay/ce4e22a7be216ca95bb1b5eb267971944da7643e6737292bf71a17d10e98f35e/merged major:0 minor:1219 fsType:overlay blockSize:0} overlay_0-122:{mountpoint:/var/lib/containers/storage/overlay/5c14756094a9fcd34e518f5182be622d4358fafe9a27c0c9212fa7b950cc98cb/merged major:0 minor:122 fsType:overlay blockSize:0} overlay_0-1224:{mountpoint:/var/lib/containers/storage/overlay/6a77d9011a5592bf0037803c872ee6bc2fad463060b13778554de61f7a4679d2/merged major:0 minor:1224 fsType:overlay blockSize:0} 
overlay_0-1228:{mountpoint:/var/lib/containers/storage/overlay/bc758ac66633fb57b91c192b3bd7ea7c716aeb80848c8b0b7ac6dc2d7eb5f3de/merged major:0 minor:1228 fsType:overlay blockSize:0} overlay_0-1240:{mountpoint:/var/lib/containers/storage/overlay/0824c3ebf5430a02c133db9bb163aa0ee4fe84044c589f00635fb98fac2f6cb1/merged major:0 minor:1240 fsType:overlay blockSize:0} overlay_0-1242:{mountpoint:/var/lib/containers/storage/overlay/72effe25c98e7dcbfe37025c23a5f63fa8ba1428102e48e083065e121372f603/merged major:0 minor:1242 fsType:overlay blockSize:0} overlay_0-125:{mountpoint:/var/lib/containers/storage/overlay/b4209a3fcf20eefc9eb87b1aa62c84ad8174584b05c549e0225250de2fa2219e/merged major:0 minor:125 fsType:overlay blockSize:0} overlay_0-1257:{mountpoint:/var/lib/containers/storage/overlay/748b3969dbebceff4aab82f681e4692b09cc641166824e6e074dcccc0b917973/merged major:0 minor:1257 fsType:overlay blockSize:0} overlay_0-1259:{mountpoint:/var/lib/containers/storage/overlay/8666f09d54eb32efb758a8d5459e9ad896d58136976d716ed5c164ea31f2497c/merged major:0 minor:1259 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/a22bb754ad083376920d52e7cfb71c6523cd50760666b877e7d8e5b609e766e4/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-1261:{mountpoint:/var/lib/containers/storage/overlay/c2c430bf918dfd5e0a91c3c9040e5721d976096d21db8744408924ffbb0d2cc9/merged major:0 minor:1261 fsType:overlay blockSize:0} overlay_0-1263:{mountpoint:/var/lib/containers/storage/overlay/54a255d381f6214a6c5d8dd26378bad8763c0520c38ddca9cfd4649fd4eb27b7/merged major:0 minor:1263 fsType:overlay blockSize:0} overlay_0-1271:{mountpoint:/var/lib/containers/storage/overlay/5ae1326caa3d5685fa12ad3537bae766257c797b2b74983b0275bdb30e85eaba/merged major:0 minor:1271 fsType:overlay blockSize:0} overlay_0-1287:{mountpoint:/var/lib/containers/storage/overlay/6c2ccd46437dd0cc51243d597448c5b9a0027f9691a04f453e06e329ef00ad03/merged major:0 minor:1287 fsType:overlay blockSize:0} overlay_0-1289:{mountpoint:/var/lib/containers/storage/overlay/2753be44e8ee9163d9482e74b7702e05943bba3b32993b6442b3ba6628687079/merged major:0 minor:1289 fsType:overlay blockSize:0} overlay_0-1291:{mountpoint:/var/lib/containers/storage/overlay/04e301f38d44b5f624f884483b68003751365f296e92cca8b8b7718d1314a34a/merged major:0 minor:1291 fsType:overlay blockSize:0} overlay_0-1293:{mountpoint:/var/lib/containers/storage/overlay/121257553b8cf59a4b41f9586fa960b8b01aa7af26650f782ed9977687ea9543/merged major:0 minor:1293 fsType:overlay blockSize:0} overlay_0-1295:{mountpoint:/var/lib/containers/storage/overlay/4cfd87537f74cc525625312b4cd36f5c2e5d22e523bf9b7542a1e65b926397b0/merged major:0 minor:1295 fsType:overlay blockSize:0} overlay_0-1297:{mountpoint:/var/lib/containers/storage/overlay/1c4091242e9fcb3f396b33fd245b52c92db812c2200305dcd830836bfce62c4e/merged major:0 minor:1297 fsType:overlay blockSize:0} overlay_0-1300:{mountpoint:/var/lib/containers/storage/overlay/a5c2357285a5535a44ecaadd98eaa92d65bafa3f66f1ad719d229763ba5d7ee3/merged major:0 minor:1300 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/8bbcf8e9747e07601fac3a0d8577b6ed7d47292febe7713bd79539434f4ced4b/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-1324:{mountpoint:/var/lib/containers/storage/overlay/fc15920bc9e72f5cf83a848afb410d6cc0ebfb173c5a5b09d550519e89db789c/merged major:0 minor:1324 fsType:overlay blockSize:0} 
overlay_0-1331:{mountpoint:/var/lib/containers/storage/overlay/68b79abe1836e0caa60da90acf769af8cf84ddbb51ffff6e1247a9c9c4e2b091/merged major:0 minor:1331 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/424f116489c83431edbedcfeb227c73c73f9e0d1802e9d31a8b70525073f031b/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-144:{mountpoint:/var/lib/containers/storage/overlay/0161c8486bcf80360be5c9bef902213dee26f63bb4b1282030a2a34f3f103d1b/merged major:0 minor:144 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/f39a2df8a371a21d08fe8e36c1a250ab97280e7f79f00dd6e561cb756a113f1d/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/a8f50e24c5a448a8e996ea5fe9835b10b5c751113d6d989006c34d69654f08ac/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/3744f306e8217d38c58f8e2c6b3ad9d021ed687465465de8b9a91964a44c3f4f/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/c3c39c03fad679de58eea0e8e9004a2b9c6993349b0f794fab42c634ec7b031a/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/27393a1f62b351a1716f03ea8e1d5489d5660cc4ca9510dfbcdd0f7696168cc2/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-166:{mountpoint:/var/lib/containers/storage/overlay/1c7c0186b40fd534d46822d59bc963d3d262811bd124127e46e699ade23a213f/merged major:0 minor:166 fsType:overlay blockSize:0} overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/e57148e467ed26266d6e7a03aec4f08b79edb6f36460be6207eb0a1d66b7147d/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/e20a4ceecaafbc2c52109b905036c56a05efcf38eb7048fd1d3d59469cf849ff/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/dd7fa5f8104a8ade24da4f55de24f51b7ce145b31487caabf9a5f541b5dbe866/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/547b0a722317c5a7ebb20b72765471dbd064e60fafc4ed8df70f4cc1cbddaba8/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/57fd9daa22bccce96d15e2d8a7c6c647d29d1745672b159b7ac21fbb4bf6ce06/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-180:{mountpoint:/var/lib/containers/storage/overlay/f67cca23c24a684ea473fe7bad1dd1dbe8cad4793bf76cddde6dee1e2e221122/merged major:0 minor:180 fsType:overlay blockSize:0} overlay_0-188:{mountpoint:/var/lib/containers/storage/overlay/5c59acc8bc36ed95743d4a9fc0f8eae2eef13225dadf9161b8edae6f1beea5ae/merged major:0 minor:188 fsType:overlay blockSize:0} overlay_0-193:{mountpoint:/var/lib/containers/storage/overlay/e1d0ba90d3cbe5db051ffa4140b4e4ff8d72842942664b936bae4c040ee62bd9/merged major:0 minor:193 fsType:overlay blockSize:0} overlay_0-198:{mountpoint:/var/lib/containers/storage/overlay/08632b9b2de39cb0d6c6d5b04de38fafe7d2d85af0cb5514c6f81162ab7622ba/merged major:0 minor:198 fsType:overlay blockSize:0} overlay_0-203:{mountpoint:/var/lib/containers/storage/overlay/802c6b1270fd0d0d60536752380f155d53c0e5dd99196b11bc876e825ed1bc94/merged major:0 minor:203 fsType:overlay blockSize:0} 
overlay_0-208:{mountpoint:/var/lib/containers/storage/overlay/71a9428dfe3ed20faf3ce8680ddddc859960c7c2da5ec527406c069288dfba89/merged major:0 minor:208 fsType:overlay blockSize:0} overlay_0-209:{mountpoint:/var/lib/containers/storage/overlay/fb0540f32ec7a8e62b5b595cca457f43ab2564f41670b29fc397d9be259f27c4/merged major:0 minor:209 fsType:overlay blockSize:0} overlay_0-213:{mountpoint:/var/lib/containers/storage/overlay/b2962973fe30936b48678cd2cf74ef628bd2ad129c1cc495776d11e7b76874e2/merged major:0 minor:213 fsType:overlay blockSize:0} overlay_0-225:{mountpoint:/var/lib/containers/storage/overlay/1612221f6961c6c990811dc50b4554ec30aefcb2a9833342ea61f9682ff87bcd/merged major:0 minor:225 fsType:overlay blockSize:0} overlay_0-227:{mountpoint:/var/lib/containers/storage/overlay/3d533454f5a7c69767427bd68d8186c5a375fe906f52a18128c5037413fd3164/merged major:0 minor:227 fsType:overlay blockSize:0} overlay_0-228:{mountpoint:/var/lib/containers/storage/overlay/1eec9bf698a1e4dc4171735de620ca5080aeff80cae15d022cfcb06364b45d42/merged major:0 minor:228 fsType:overlay blockSize:0} overlay_0-268:{mountpoint:/var/lib/containers/storage/overlay/77639de0896225d15b5b2cf472788d23eb641e9d4d7f6d00cad8c9323c3ddcab/merged major:0 minor:268 fsType:overlay blockSize:0} overlay_0-290:{mountpoint:/var/lib/containers/storage/overlay/7cce98422b9a31b3b93e51923970dd311e508c79195482d50db6fb13d13dc3c1/merged major:0 minor:290 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/384c79a742d91789f396741662f60f1579fa2580a59900eb0911fe0dd9b5b443/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/a594d7c5b03b8a24089c94896c3c19d5d26e4f949089abc07229d61a031d22bd/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/27ca8881c84efa411cb0045c4a948e7cb4b319a2ac5acda856536f8573a60114/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/2afd8eda6f14788fe4612f60ad4b8ddcbc91131bde772d26dd81cb56b8196574/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/8bacb1e59aec5e75fd655f6c8009faf3ba6c76de8e00232981402235f1d9e933/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/87e5a674a40721deca15b380102ef1e2b44694f94991bbcbfd6d0a841f7fb957/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/c99743a0ebcab41173f178800f0e8e3031df46fe499698b6eefb9e8aff1349a0/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/0ca12716289d5b12ebfea77d3accd1e123bda28c8e9ff3280b1b56aca13a67df/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/93acb8635a6310c14288290ad109f1a41cc9c151eec738b22ffbb9bfe12dcb09/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/9e94a49f76447bcb7c379f9d946434bc34ffce1666acf18bae6bda545e3cdb2a/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/b941aa6c3f10a913e54aa4ea12b57b60a69b84653d6eb1d9897a0221c63ea3d4/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/73b9c492423db7f25e4b2a7d59aaa8758ac02e81c589cc8a3c115425fedc0646/merged 
major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/570772c1606371631af1e0310108c542e4dc81bad91afb50360e2d59b7f0c1e8/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-331:{mountpoint:/var/lib/containers/storage/overlay/3db13b2cf6547e9e0c0b65f861a85c905fdbd1599bca494e6460a219bfa3d4e3/merged major:0 minor:331 fsType:overlay blockSize:0} overlay_0-333:{mountpoint:/var/lib/containers/storage/overlay/c52413261effcb3d0d8a545f7dee902d995dc18aa228a41e97c4a858d6c84f27/merged major:0 minor:333 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/3d9836d48a07582e7e9f0e1b18ea460b05e02227a9ae4f894ac2829b68476a1c/merged major:0 minor:336 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/ec1e8661acffe6392d9410a05f7baf1ff63a2d8e2736798cd4eb0dbaee7cb226/merged major:0 minor:339 fsType:overlay blockSize:0} overlay_0-340:{mountpoint:/var/lib/containers/storage/overlay/cffbe037accdb53e5e7bc055f815068ea221983fc177a80694bef9dfc61d2a6f/merged major:0 minor:340 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/b6b6938e6b1636b870bd77209fea7ddb575a3ec0f1ecf0fc02821c49f844e896/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/8bf071df25b8ba8566a08eb14dfe12d9d279af24d4181b970c769506b42b033a/merged major:0 minor:348 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/de041a65acec6b87458ffebad8109d7377832d0fe95a6b9329761022890708a3/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/b52a08b15d680741ec8c93398dbe30f804415199aa80f2e77ca2229875596438/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/41038c54ad65fbeaee043618e21208feaed69231397eaf422862e27ca4f31604/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/3de44549af19c903717f6d1bcf8784ff6de7c770d2ad18009c0d60a9d5774d1a/merged major:0 minor:360 fsType:overlay blockSize:0} overlay_0-363:{mountpoint:/var/lib/containers/storage/overlay/402ca05205296b017509a196f63c7f425ab6b88a8a0be951a1cf7b6e0ebcdf61/merged major:0 minor:363 fsType:overlay blockSize:0} overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/5064b1d59d5ae63fee9b5d0aee22d9e2863d2dea6aaaa3c0fb9e16d32be5cb6a/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-369:{mountpoint:/var/lib/containers/storage/overlay/6442e1d5a5e225508b0a3acd6a41682f779f459809f23930aeb81e6c9903eade/merged major:0 minor:369 fsType:overlay blockSize:0} overlay_0-371:{mountpoint:/var/lib/containers/storage/overlay/b1dec51304e2e5121992b5c4b717f747b910888456b8520172083805adcd607f/merged major:0 minor:371 fsType:overlay blockSize:0} overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/eed8fd2deef7e007ad861ab4d376c1cba4044838179cafdc1809796af78a60ac/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/4831e759967aefff26aa56dcfdaa2c78017305454453f3b225a18d267c30606b/merged major:0 minor:378 fsType:overlay blockSize:0} overlay_0-382:{mountpoint:/var/lib/containers/storage/overlay/f5b34aabff6959346cdbac99d83b97fb5c64e3a6b95138aa335113b150aa4b20/merged major:0 minor:382 fsType:overlay blockSize:0} 
overlay_0-384:{mountpoint:/var/lib/containers/storage/overlay/455e4417ae8cc063aa1b7649550941d965f92b648358b566756d5ff75f57872c/merged major:0 minor:384 fsType:overlay blockSize:0} overlay_0-388:{mountpoint:/var/lib/containers/storage/overlay/099e6bc8154caf1a06c24fa5f57a9e7318340c88279a43b9328972d8c5ecfed6/merged major:0 minor:388 fsType:overlay blockSize:0} overlay_0-397:{mountpoint:/var/lib/containers/storage/overlay/36992ae48117d4e829933a552a0b905378a5a43851ebca1f89fe44fbd948e717/merged major:0 minor:397 fsType:overlay blockSize:0} overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/e3cd187f2e19e87f2c1a1a6c3c98d3deff428dbf9f94151dc3bfee112505937e/merged major:0 minor:400 fsType:overlay blockSize:0} overlay_0-402:{mountpoint:/var/lib/containers/storage/overlay/b54bb6fa5b1090faabb3899fa74df574e113fdff6beea2483272fe69971c2284/merged major:0 minor:402 fsType:overlay blockSize:0} overlay_0-406:{mountpoint:/var/lib/containers/storage/overlay/bc3fe345fb4a136c3a752912e4b84da58e3703af65c07c5dacd6a956439df8be/merged major:0 minor:406 fsType:overlay blockSize:0} overlay_0-408:{mountpoint:/var/lib/containers/storage/overlay/43b8866090a06945e2e8f9b0e36b89565d3519e73753f0088926c35ca4ac7caa/merged major:0 minor:408 fsType:overlay blockSize:0} overlay_0-427:{mountpoint:/var/lib/containers/storage/overlay/e78ccf73ce0636e1a4d169aa16531815d2b01d71ff0dab6b2a0a2a7951decc67/merged major:0 minor:427 fsType:overlay blockSize:0} overlay_0-429:{mountpoint:/var/lib/containers/storage/overlay/ae10493b4c25b4701f736666efdd1a7dc2034b1486e1266ac42422bd0325732b/merged major:0 minor:429 fsType:overlay blockSize:0} overlay_0-432:{mountpoint:/var/lib/containers/storage/overlay/45a9004870f1e13e4d80e1e853fcbce3cfea73f717de478484c82f626a292817/merged major:0 minor:432 fsType:overlay blockSize:0} overlay_0-434:{mountpoint:/var/lib/containers/storage/overlay/2338bd322c0231ed00e22ffa307e43b730e5f067b5e97d47dd7d514a59cba36d/merged major:0 minor:434 fsType:overlay blockSize:0} overlay_0-436:{mountpoint:/var/lib/containers/storage/overlay/85fa42e028fcf2a5b814f98a1c33c76d0fd10f067423efe0fc5fd839ed89a8c7/merged major:0 minor:436 fsType:overlay blockSize:0} overlay_0-438:{mountpoint:/var/lib/containers/storage/overlay/b21d9b1146a23cd87d7ec2830dc7a6c6dcdfd98e992e72bf2a42a73d0d900cf3/merged major:0 minor:438 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/883c1be74bbcc214ea80340c32d0f6a9e07ca07ced318966d9fdc3679b84688f/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-440:{mountpoint:/var/lib/containers/storage/overlay/3ed097f12976e58f66c5872c20711a74ad689b7cabce794f780a34251bed9979/merged major:0 minor:440 fsType:overlay blockSize:0} overlay_0-442:{mountpoint:/var/lib/containers/storage/overlay/e1906756ee1813d2d06cb5485358d5688858de12a80e8136617ee82997599762/merged major:0 minor:442 fsType:overlay blockSize:0} overlay_0-444:{mountpoint:/var/lib/containers/storage/overlay/926fe6d26d6a0bd98def3f0be3796896d9a97407700216aa8712dbffe6ab9ab3/merged major:0 minor:444 fsType:overlay blockSize:0} overlay_0-450:{mountpoint:/var/lib/containers/storage/overlay/38ff47913b6728cc2f2b1dff8caa62b3215f82999876ef2a736bc2b672e0ea42/merged major:0 minor:450 fsType:overlay blockSize:0} overlay_0-451:{mountpoint:/var/lib/containers/storage/overlay/d14883f2730f9a2b54734ed766533f3b155e7a04b642e897b75eabacf13f1e03/merged major:0 minor:451 fsType:overlay blockSize:0} overlay_0-459:{mountpoint:/var/lib/containers/storage/overlay/243d182050c11b1d677c7177ff7dbe663cc407c9d7047f7026684de190ccbd55/merged 
major:0 minor:459 fsType:overlay blockSize:0} overlay_0-478:{mountpoint:/var/lib/containers/storage/overlay/ff8c633fce7cea0e599e56e33ca0e9eacd0ae2fb10ecfa27035503b1a11f057e/merged major:0 minor:478 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/437432c5b10ad22fb509123ba8db691e2af2cd91dd35c117d81fbe9ac3faa177/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-481:{mountpoint:/var/lib/containers/storage/overlay/6385fcfea89e23e96a417b83a21a0afbeb569af65ddee6f5c1a8349e27323ba7/merged major:0 minor:481 fsType:overlay blockSize:0} overlay_0-483:{mountpoint:/var/lib/containers/storage/overlay/329b7e6b797a8208d5dca51d43bc16ffffebe14f32ded101b2d029a50cde6aca/merged major:0 minor:483 fsType:overlay blockSize:0} overlay_0-485:{mountpoint:/var/lib/containers/storage/overlay/721df91bc23a7337e00ff52daa7e166a9c0395eecc655958f02701288570fe54/merged major:0 minor:485 fsType:overlay blockSize:0} overlay_0-488:{mountpoint:/var/lib/containers/storage/overlay/44a7063e861be6b4a5113d6785819526569e4f2af0e643fc7ca8ea5bcf3c55ba/merged major:0 minor:488 fsType:overlay blockSize:0} overlay_0-490:{mountpoint:/var/lib/containers/storage/overlay/3a3de562ff52caa50366861f7ab67133bc981948c17672bace032442cbe01ee5/merged major:0 minor:490 fsType:overlay blockSize:0} overlay_0-493:{mountpoint:/var/lib/containers/storage/overlay/f870753a7e24a6dce8f9f3dc7429b19f1643ca6b3e13f63ac21fc6863000a224/merged major:0 minor:493 fsType:overlay blockSize:0} overlay_0-494:{mountpoint:/var/lib/containers/storage/overlay/24029da1fe25ce552c0b1808a55e26dfc47860c86441304f41b76277626ef8a3/merged major:0 minor:494 fsType:overlay blockSize:0} overlay_0-499:{mountpoint:/var/lib/containers/storage/overlay/7b87501c7d1198884b68dbf69c3c25e85ac7b93cf4e7c0f40585c97c08ba5b78/merged major:0 minor:499 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/e352abfc0ffa6c60a247ae295b490900ed682cb41afeb7418cf7dd276bfddf15/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-508:{mountpoint:/var/lib/containers/storage/overlay/10470fb5f501caa192df1adb505b8bfd3c08f28e5f19ccfb35481e6bfec4da6a/merged major:0 minor:508 fsType:overlay blockSize:0} overlay_0-512:{mountpoint:/var/lib/containers/storage/overlay/0c55d135bf7fc315377f3ce1e74959b8e66882e91738d54d8807c4e9870a79da/merged major:0 minor:512 fsType:overlay blockSize:0} overlay_0-517:{mountpoint:/var/lib/containers/storage/overlay/1f9160d5ae2ef58292f332adeda9edfe1f20a47fe1e5de4fe1be5a41bde773e8/merged major:0 minor:517 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/1befce5ee9bcaaf2acfc8e37f913d6894eb3218b0f7ef1fe477a7bc67cfceb1c/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-520:{mountpoint:/var/lib/containers/storage/overlay/4c817ff76be0820b048552e1aa457ecab433239f55281c35143ac4640d2db294/merged major:0 minor:520 fsType:overlay blockSize:0} overlay_0-524:{mountpoint:/var/lib/containers/storage/overlay/f9e93c52e0a9c9448b29937c8b042eb6f092807033ab14858c182782e4acea5a/merged major:0 minor:524 fsType:overlay blockSize:0} overlay_0-525:{mountpoint:/var/lib/containers/storage/overlay/e71578a337892d601794e84092bf1a88fc5d6287678f6a35918a38eead0d6ec7/merged major:0 minor:525 fsType:overlay blockSize:0} overlay_0-527:{mountpoint:/var/lib/containers/storage/overlay/12d587b4c6767a894093e0e5c8ae01ba5d3c30b6597c9bfa98fa2f881c79d79f/merged major:0 minor:527 fsType:overlay blockSize:0} 
overlay_0-529:{mountpoint:/var/lib/containers/storage/overlay/42fabad61b23ab82a8016fc3553b2ab5bc9f5d41d1a1f8cf57b0d8be54c9ea03/merged major:0 minor:529 fsType:overlay blockSize:0} overlay_0-532:{mountpoint:/var/lib/containers/storage/overlay/0cebf07d372d7217d8f6d1de0072585d30d75481a71480c3d39d6912df87ed81/merged major:0 minor:532 fsType:overlay blockSize:0} overlay_0-542:{mountpoint:/var/lib/containers/storage/overlay/6c99f83bee005f35cf0e3db9fa980d94d112791acdfa1b6f273845535d56b2e8/merged major:0 minor:542 fsType:overlay blockSize:0} overlay_0-545:{mountpoint:/var/lib/containers/storage/overlay/c6bb1e2721f6deca6c8608a04db5753c31725bbb35f1e6f009c5c4832b3368b0/merged major:0 minor:545 fsType:overlay blockSize:0} overlay_0-548:{mountpoint:/var/lib/containers/storage/overlay/69d04d1cb5e1c67830a8f714e0e38068c5c7e687aeff4a4b342c0c37eda1213b/merged major:0 minor:548 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/f543c7e3e0f95c9dcaa9e343d036482a1e6d2465f0e4583c34b8d9292c696471/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-569:{mountpoint:/var/lib/containers/storage/overlay/44f72163691fdec9b920537c23da42064f5b44b0c96b11281bdca89c9d2c1b66/merged major:0 minor:569 fsType:overlay blockSize:0} overlay_0-571:{mountpoint:/var/lib/containers/storage/overlay/af736f8a6191ed5d7af17a8481d100ce5d596dac452ad5fadf973987a5b9d441/merged major:0 minor:571 fsType:overlay blockSize:0} overlay_0-591:{mountpoint:/var/lib/containers/storage/overlay/04904bcd690348878fcb9b007a80d87644013d64fa5699a3731ed73711d73a9a/merged major:0 minor:591 fsType:overlay blockSize:0} overlay_0-595:{mountpoint:/var/lib/containers/storage/overlay/a9e6b2ab3cd8831f6ccefa24736e968da87674d50f051ccc8bbeb2ed8e9c1122/merged major:0 minor:595 fsType:overlay blockSize:0} overlay_0-598:{mountpoint:/var/lib/containers/storage/overlay/e7cd4d6c5152aa08627feafa6675b6757c5e4ecc6b888d44656e9f4bc4568f66/merged major:0 minor:598 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/41466bab5b4b028d35d92d7bb27b9957a86abd046ca29e3fec626725f3a83b84/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-600:{mountpoint:/var/lib/containers/storage/overlay/0a197ce848f48916430bb38fcd4c9ef2a34aebca835ba08124cd8bd9b207dc0d/merged major:0 minor:600 fsType:overlay blockSize:0} overlay_0-602:{mountpoint:/var/lib/containers/storage/overlay/c8663a9a192ca5e8d8b323baa39451a9790d39c3a86051b0788e3ac92306e3d5/merged major:0 minor:602 fsType:overlay blockSize:0} overlay_0-604:{mountpoint:/var/lib/containers/storage/overlay/5e3ef314b25c3c4c3e6babde246fa4bbc8a778b0446adddce2579cc0d0c1575e/merged major:0 minor:604 fsType:overlay blockSize:0} overlay_0-606:{mountpoint:/var/lib/containers/storage/overlay/25ddf8af3a44792a4c1a3dcda4863717da6444b555ca95179078b288d452538e/merged major:0 minor:606 fsType:overlay blockSize:0} overlay_0-608:{mountpoint:/var/lib/containers/storage/overlay/c4fcce9f027d278fbfaba141188be1c8853b0232b59ddafab361547b0f7bf9e3/merged major:0 minor:608 fsType:overlay blockSize:0} overlay_0-611:{mountpoint:/var/lib/containers/storage/overlay/d2621597a488b943261a8b7b1453c1593e35ecd93ef70fefd3921e6af11f1fc2/merged major:0 minor:611 fsType:overlay blockSize:0} overlay_0-612:{mountpoint:/var/lib/containers/storage/overlay/eb5ef5b72e9de3c83e24d77d72ef0afd26fdc481e7244665dbd372d03415fd89/merged major:0 minor:612 fsType:overlay blockSize:0} 
overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/d8f9c2cf5f633ced78931b09a42698f6f7c3526d67202ceaba78ffc67105edf8/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-621:{mountpoint:/var/lib/containers/storage/overlay/e51cef811e1c52ecdf4c68f9e0e724946334d6c04d905fdd90e8c1dd2ebdfeb1/merged major:0 minor:621 fsType:overlay blockSize:0} overlay_0-623:{mountpoint:/var/lib/containers/storage/overlay/4459ae15f5f2b988b9aae5a40026c5d392745beeaea097bc515df6326b82ddfb/merged major:0 minor:623 fsType:overlay blockSize:0} overlay_0-624:{mountpoint:/var/lib/containers/storage/overlay/560bd735b8e07c40c8ad0af9cc726b0734466ee318df7819511202fc9aa0cbe9/merged major:0 minor:624 fsType:overlay blockSize:0} overlay_0-627:{mountpoint:/var/lib/containers/storage/overlay/6a9ec64f2ee7e58c017de93c1e56761265da2ac72992d7f91bc173e597a40742/merged major:0 minor:627 fsType:overlay blockSize:0} overlay_0-634:{mountpoint:/var/lib/containers/storage/overlay/bfb7810ae0926799f137f02218bef9761197815f7a72f0f1378ad097360e876c/merged major:0 minor:634 fsType:overlay blockSize:0} overlay_0-638:{mountpoint:/var/lib/containers/storage/overlay/25aa8144b73ffa1bea768dfadccbb20b5866813b47e2a59e060abbbedc3dfe19/merged major:0 minor:638 fsType:overlay blockSize:0} overlay_0-640:{mountpoint:/var/lib/containers/storage/overlay/cc1ed7801816e3080b8413ea0908407ebd125c91e3a4fc5305446edf1393d7a5/merged major:0 minor:640 fsType:overlay blockSize:0} overlay_0-642:{mountpoint:/var/lib/containers/storage/overlay/641166fd8f317c9a42492ecf8aecf3b29544bc97665d6210d57c0f1b5c84c4c1/merged major:0 minor:642 fsType:overlay blockSize:0} overlay_0-646:{mountpoint:/var/lib/containers/storage/overlay/cfad5f930847258f642419a05a6c83d35b08ad651d105e1773c0044b45a7d845/merged major:0 minor:646 fsType:overlay blockSize:0} overlay_0-648:{mountpoint:/var/lib/containers/storage/overlay/41076fd2752dd177972b6e7335f0ceeb39a571d59cc7006556c8a63f8003a214/merged major:0 minor:648 fsType:overlay blockSize:0} overlay_0-650:{mountpoint:/var/lib/containers/storage/overlay/3f42bcef126d2d756d12d35163125d4200c364a4025a68bf5fec1ebea5398b9c/merged major:0 minor:650 fsType:overlay blockSize:0} overlay_0-658:{mountpoint:/var/lib/containers/storage/overlay/96c5607511adc378fea22734c0f9da17f86f9e581d8a335faa7a759b6c2feb3d/merged major:0 minor:658 fsType:overlay blockSize:0} overlay_0-662:{mountpoint:/var/lib/containers/storage/overlay/4614a03ff50729e2056ffc1bfc7e67e67a003bc5d8da29dfb0eff7922ac215b7/merged major:0 minor:662 fsType:overlay blockSize:0} overlay_0-666:{mountpoint:/var/lib/containers/storage/overlay/7b8e5fd8da2a599211638f68791bc21fe9c05c38d52344b0520ccb9db1376d21/merged major:0 minor:666 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/e158c227fedb5b9834a5343684be66ce854fd69fd94d1c9dc18ad6008ef92305/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-672:{mountpoint:/var/lib/containers/storage/overlay/e9846b02c4f708841cf3a14cde305556a1820d658bc0ae10632e2a83846001e7/merged major:0 minor:672 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/39d985535e1b5bab1f4d089e104d6c3ea5a1ba1791a9c6081c4d4db8cc1d8150/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-683:{mountpoint:/var/lib/containers/storage/overlay/162bef320c61f0433a859c657b29fbf02a267a7e4361af3e2ce9e492881fab3f/merged major:0 minor:683 fsType:overlay blockSize:0} overlay_0-688:{mountpoint:/var/lib/containers/storage/overlay/f654f26f5c9643d081f7cbdb8b53bdd7322dea3417e74a13a9d0b3b198e3cf40/merged 
major:0 minor:688 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/5eae5fcdc72085dc232ca8ec0bbce6108e9bd605e185b2164fd0d0991cf42d4b/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-693:{mountpoint:/var/lib/containers/storage/overlay/44605292b60ff55171d61b2625abe65fc28637269bcdf6e59b436604286f2c82/merged major:0 minor:693 fsType:overlay blockSize:0} overlay_0-696:{mountpoint:/var/lib/containers/storage/overlay/f8c6821c3bd60ad27b377a2d277ae777f936907b87b5ae00ebbb938e0ea1ca8f/merged major:0 minor:696 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/d306ea2cb8f1e11e5c3b322ea22ba55904aa11983d5ed2e7679da8be82605b78/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-700:{mountpoint:/var/lib/containers/storage/overlay/1ad506d3fd63082b7a3052986da46f619334e580ec8321946b5434b49f9de896/merged major:0 minor:700 fsType:overlay blockSize:0} overlay_0-702:{mountpoint:/var/lib/containers/storage/overlay/5191fb96c969dd7b8e4c894252b19c74bd801dda5248d1ec29e1a7fd565c815d/merged major:0 minor:702 fsType:overlay blockSize:0} overlay_0-706:{mountpoint:/var/lib/containers/storage/overlay/553b09f2d6b6933c5e68a7db55b6b9e561a85bd6329a86b2856a23d2cab6b13c/merged major:0 minor:706 fsType:overlay blockSize:0} overlay_0-713:{mountpoint:/var/lib/containers/storage/overlay/4dd7ad276be271a2a6439f81a045cbdb6602489678323eebb909009b93ec792c/merged major:0 minor:713 fsType:overlay blockSize:0} overlay_0-718:{mountpoint:/var/lib/containers/storage/overlay/6ade49cbb561ed44d9d58a93c8709479bc49c828ad54863fba9c3adbc1765df1/merged major:0 minor:718 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/97c356120e47d6a05edb1c8b7be4fe06d50f52a38632174fd8913920fe3286fc/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-725:{mountpoint:/var/lib/containers/storage/overlay/fc50af0760586c16d55f04316ac0aeb7f10284f48020fc720769834a5160384d/merged major:0 minor:725 fsType:overlay blockSize:0} overlay_0-731:{mountpoint:/var/lib/containers/storage/overlay/ac7c192405dec8a222d968fa52a8de79186dc80473162ee1d4b0f418621cde4f/merged major:0 minor:731 fsType:overlay blockSize:0} overlay_0-735:{mountpoint:/var/lib/containers/storage/overlay/954cdf411e6df39977eec76e0fd393072827901163dafbf80f79513f31b25241/merged major:0 minor:735 fsType:overlay blockSize:0} overlay_0-738:{mountpoint:/var/lib/containers/storage/overlay/ea73b97ce3bcade4449ce0aeaa0bfa83d097edb3d93b54de70fb4cbfe930e49a/merged major:0 minor:738 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/1a6c39ab6b5185504f37bdaf05cc84e7de929d96bb32704a75d3eda77f8dd1ef/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-757:{mountpoint:/var/lib/containers/storage/overlay/68acc2134b0a587f0ff74eb40b13af56e7d9d0a72eab6610da5ac68f7a171025/merged major:0 minor:757 fsType:overlay blockSize:0} overlay_0-760:{mountpoint:/var/lib/containers/storage/overlay/29740968b6a350c61b3a05fb991cc944ef581e1a29e3513825ffbabe40384c0e/merged major:0 minor:760 fsType:overlay blockSize:0} overlay_0-761:{mountpoint:/var/lib/containers/storage/overlay/58dfa0bd007c7ed03741e8687080450246e4a17bfe0366a51d188b0631fcd1d0/merged major:0 minor:761 fsType:overlay blockSize:0} overlay_0-766:{mountpoint:/var/lib/containers/storage/overlay/8b5252ff2224171585fbe2af05787dd184484aaddbff5e11117cda1bb7d3eed1/merged major:0 minor:766 fsType:overlay blockSize:0} 
overlay_0-769:{mountpoint:/var/lib/containers/storage/overlay/48df8b6920f1aece8f2716a053544f9905be15ff7a22c34c644834bf42a98f29/merged major:0 minor:769 fsType:overlay blockSize:0} overlay_0-779:{mountpoint:/var/lib/containers/storage/overlay/fe592dd7a76ac0a1b03052c70b8aa5ab5debf736095d034c392bd618300b8461/merged major:0 minor:779 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/30a01832ba8392cfb5408692c2eb175842208742daf6f97a819db4d77b88db68/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/7b1d2bf8dac4cac1315ef68733cbbbda827f6b2f5d0c9f68c2ea42cec5e36d51/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-792:{mountpoint:/var/lib/containers/storage/overlay/be45a1b049dc9c5a7b8409d26c6c6369a8c045dfa96ca25d22292e5289233cff/merged major:0 minor:792 fsType:overlay blockSize:0} overlay_0-794:{mountpoint:/var/lib/containers/storage/overlay/6d2630dba199bc7161081ade703bc3032c457034380ed9ce6ef4d8833362b453/merged major:0 minor:794 fsType:overlay blockSize:0} overlay_0-796:{mountpoint:/var/lib/containers/storage/overlay/95941604a4f2cd2218b57a8b6401943cf0a639fd20d68a9e25af681d4c44d6d9/merged major:0 minor:796 fsType:overlay blockSize:0} overlay_0-798:{mountpoint:/var/lib/containers/storage/overlay/5450dfcba81046f9628eab878ff765cc7a37af294658016d4ac2e853f45b052b/merged major:0 minor:798 fsType:overlay blockSize:0} overlay_0-799:{mountpoint:/var/lib/containers/storage/overlay/84470b8cf2b2b7d5bed81c8c89328a78d8dd253c3c3c8c11b20405bf5028525a/merged major:0 minor:799 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/795b8149eea5821e8b945cc47d7ce0bf21abbf6915e00da512738c90541fc4e1/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-802:{mountpoint:/var/lib/containers/storage/overlay/31ab9e1b67dee20b2d111d6837b2ee7a67cffcc2c00551fccb3e9a713ac8c1b8/merged major:0 minor:802 fsType:overlay blockSize:0} overlay_0-808:{mountpoint:/var/lib/containers/storage/overlay/db4ec3a9eaa598476ad43663b3e9042e65c12923cc77ea325772634d55efcb92/merged major:0 minor:808 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/4cdc46a52b24b5e6668b4a4a54812adcf2db6affb51fe69f5c3d201eb19d0b03/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-825:{mountpoint:/var/lib/containers/storage/overlay/26794907951d2a48befd606fa00bccd7145ababb844282f71182fa76b2ddfe67/merged major:0 minor:825 fsType:overlay blockSize:0} overlay_0-828:{mountpoint:/var/lib/containers/storage/overlay/f5d96cc79e84537e4b1787835be6d4a0d55ce364844c232f3cb9c28b14938ace/merged major:0 minor:828 fsType:overlay blockSize:0} overlay_0-829:{mountpoint:/var/lib/containers/storage/overlay/6f15e004f31512c44c85892ad46abfd7da531a198ca20130ab04a44286eae726/merged major:0 minor:829 fsType:overlay blockSize:0} overlay_0-835:{mountpoint:/var/lib/containers/storage/overlay/3b57366bcbbdd1b819c24f03c45bf0d6961513079a24592bbf0558d4ec559d86/merged major:0 minor:835 fsType:overlay blockSize:0} overlay_0-837:{mountpoint:/var/lib/containers/storage/overlay/43c38c9ae086ac8c6d0a50cae18b7ad5e32cd44071f9054accfe315a521173ac/merged major:0 minor:837 fsType:overlay blockSize:0} overlay_0-842:{mountpoint:/var/lib/containers/storage/overlay/40f54fb575e1bf560f484b1d7df561798ac020734e6c15f0b02ad71c8e372fec/merged major:0 minor:842 fsType:overlay blockSize:0} overlay_0-846:{mountpoint:/var/lib/containers/storage/overlay/8c28167a978e5dfc7d20c42865751181912b4ccfc9cbdcd70118d0ddfb06dcf9/merged major:0 
minor:846 fsType:overlay blockSize:0} overlay_0-848:{mountpoint:/var/lib/containers/storage/overlay/913e2b7231623e55ee76010d33e0a21f4fc33e1774f96bfcda022c5a035492a7/merged major:0 minor:848 fsType:overlay blockSize:0} overlay_0-849:{mountpoint:/var/lib/containers/storage/overlay/e1e5d03f9063428bc88e9c2cc516a9182ab344aeb286c666203a9e973a009548/merged major:0 minor:849 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/120eebbbe9db09ce9424cb01f3d719f2ba514fd2fa1b4f6adfdcf9a5453dcbe9/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/92ec9e2f2d99df7deb43effe4cb5f0bf455411bd16a54e123b3b5fcdd36a87d0/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-870:{mountpoint:/var/lib/containers/storage/overlay/2b6d073249d5333f41111b854997e1e57dfdd73d44e96267ade6c1d9321e5207/merged major:0 minor:870 fsType:overlay blockSize:0} overlay_0-874:{mountpoint:/var/lib/containers/storage/overlay/f4d588e3b5b73fea775a68b5b0e5109089372be92caddf00601205ccf7547b3d/merged major:0 minor:874 fsType:overlay blockSize:0} overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/d9953b0d66d5be4e4b62401538386ab431b68cbe31e26a28ed1998399fb13057/merged major:0 minor:880 fsType:overlay blockSize:0} overlay_0-883:{mountpoint:/var/lib/containers/storage/overlay/74cc13db931febb1fcdb6ccd147d414d257c37f5362a8c688400a97ab52a116d/merged major:0 minor:883 fsType:overlay blockSize:0} overlay_0-894:{mountpoint:/var/lib/containers/storage/overlay/4833b05dbe940dca4c39b88e4e916cfebc2ef704ef7b0da3d6842c705d9286ec/merged major:0 minor:894 fsType:overlay blockSize:0} overlay_0-896:{mountpoint:/var/lib/containers/storage/overlay/0e248d8312ce822b8d840616a269847f6728d283fe14080bbd574f1c5515b91e/merged major:0 minor:896 fsType:overlay blockSize:0} overlay_0-897:{mountpoint:/var/lib/containers/storage/overlay/1654862246a2c188854204b12444898e0ad05ca147f848781d1ff0be79f2a91b/merged major:0 minor:897 fsType:overlay blockSize:0} overlay_0-898:{mountpoint:/var/lib/containers/storage/overlay/25a7a87747c636b8132b73ad27ba8b7f97af3b55df56dcd61378a7c42d9d0aeb/merged major:0 minor:898 fsType:overlay blockSize:0} overlay_0-902:{mountpoint:/var/lib/containers/storage/overlay/6ef2b18b15e91b8dcc4fad24eb7529457301c51f42d7ea515b0b8be22e0034f2/merged major:0 minor:902 fsType:overlay blockSize:0} overlay_0-904:{mountpoint:/var/lib/containers/storage/overlay/5cf8e765b0178a1ca996746e4fe2b1f01f563e79d9ec84655f3e791ed6111c78/merged major:0 minor:904 fsType:overlay blockSize:0} overlay_0-91:{mountpoint:/var/lib/containers/storage/overlay/ae5a68763ce9ec46253260846fb13f5a4d62b7e7c4b1b70cb0c8c595e39541bc/merged major:0 minor:91 fsType:overlay blockSize:0} overlay_0-939:{mountpoint:/var/lib/containers/storage/overlay/829ff81337583bcd485ac086cd59fac7119677312ff6246de0dcfbc6dbd61dc2/merged major:0 minor:939 fsType:overlay blockSize:0} overlay_0-941:{mountpoint:/var/lib/containers/storage/overlay/70dae39325cf5ad8370f08d0c174e49a4d252594cd1805df6fb03babd101501e/merged major:0 minor:941 fsType:overlay blockSize:0} overlay_0-943:{mountpoint:/var/lib/containers/storage/overlay/81a02bf5e97aad1aab3edb08561bedd6d8b340580715bcd946b4848ba5364f11/merged major:0 minor:943 fsType:overlay blockSize:0} overlay_0-945:{mountpoint:/var/lib/containers/storage/overlay/4f5d76fddb4637b17123f1e57411b98e18103621d36a605886bd9e2c157f8316/merged major:0 minor:945 fsType:overlay blockSize:0} 
overlay_0-947:{mountpoint:/var/lib/containers/storage/overlay/794847dcf0a2ccf815b3133e8f7ed38ef845e6e13f2e8418761c51a674b608f7/merged major:0 minor:947 fsType:overlay blockSize:0} overlay_0-949:{mountpoint:/var/lib/containers/storage/overlay/b1a60a92316f39cb13122a0dd36266c53cb181433b85b94844e57ffeb1510213/merged major:0 minor:949 fsType:overlay blockSize:0} overlay_0-958:{mountpoint:/var/lib/containers/storage/overlay/e634442c6ac6e15743fe6972f461f665d84f081589ee8918f21c1381c592bb0a/merged major:0 minor:958 fsType:overlay blockSize:0} overlay_0-960:{mountpoint:/var/lib/containers/storage/overlay/35dac259303192ec93e1e2fb5ac8003bba8d0addff9c591700430bc92d00f5c5/merged major:0 minor:960 fsType:overlay blockSize:0} overlay_0-965:{mountpoint:/var/lib/containers/storage/overlay/3878198712bfb37aa32069916fd47a30befe28ccb85594a55849341bc59ca3a1/merged major:0 minor:965 fsType:overlay blockSize:0} overlay_0-968:{mountpoint:/var/lib/containers/storage/overlay/300ab4e7ab9a91b4c25f7ef61a5199c7520b64a87d5e020605691acb2a084ad2/merged major:0 minor:968 fsType:overlay blockSize:0} overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/6dc5251161bfa92666263f79f834da5b5f78367057b6878dc3b323ee6704e4e6/merged major:0 minor:969 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/37c7fd2e7eb33762e9003752554ff97da37c30f8650e34494ce0121b99169feb/merged major:0 minor:97 fsType:overlay blockSize:0} overlay_0-971:{mountpoint:/var/lib/containers/storage/overlay/47f89b141d477895ac44db1e92287dd4de446de165a0b8fa44255582ba911265/merged major:0 minor:971 fsType:overlay blockSize:0} overlay_0-976:{mountpoint:/var/lib/containers/storage/overlay/efd68397874a424c309f1c02208b19c69d3536ac647ca1c4a904589d122aee75/merged major:0 minor:976 fsType:overlay blockSize:0} overlay_0-980:{mountpoint:/var/lib/containers/storage/overlay/9ad3ca4678ed9ae7408866bd7e3cbc0d170260627deb76f7b45c473c17fcbaea/merged major:0 minor:980 fsType:overlay blockSize:0} overlay_0-99:{mountpoint:/var/lib/containers/storage/overlay/89cf8fa8225b4a5e97a629f6d0e3bc3d901c4c7854877e1d3b1c88d7cf775fa0/merged major:0 minor:99 fsType:overlay blockSize:0} overlay_0-991:{mountpoint:/var/lib/containers/storage/overlay/acc397ab263d66e89734b2c8d489315f935074f79323d816c1fa8ca0da8078c7/merged major:0 minor:991 fsType:overlay blockSize:0} overlay_0-992:{mountpoint:/var/lib/containers/storage/overlay/2142a3ae91b44efdb880dce952a29ce3bca1ab2cd0de2d2920b75afb629638d7/merged major:0 minor:992 fsType:overlay blockSize:0}] Feb 19 03:23:14.870780 master-0 kubenswrapper[33867]: I0219 03:23:14.868451 33867 manager.go:217] Machine: {Timestamp:2026-02-19 03:23:14.867125726 +0000 UTC m=+0.163796377 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:e4d28ab4c6c14d45b3b826d1d7d6a246 SystemUUID:e4d28ab4-c6c1-4d45-b3b8-26d1d7d6a246 BootID:81756ef7-a125-45a3-9659-4adc79f47dc0 Filesystems:[{Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0664d88f-f697-4182-93cd-f208ff6f3ac2/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:753 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1009 DeviceMajor:0 DeviceMinor:1009 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/43560ec3-3526-40e1-aeb7-e3137a99171d/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1125 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-602 DeviceMajor:0 DeviceMinor:602 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1219 DeviceMajor:0 DeviceMinor:1219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:743 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-949 DeviceMajor:0 DeviceMinor:949 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1033 DeviceMajor:0 DeviceMinor:1033 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1105 DeviceMajor:0 DeviceMinor:1105 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/92804daf-1fd0-4008-afff-4f9bc362990b/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:1115 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1123 DeviceMajor:0 DeviceMinor:1123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5e2c5960bcaff754ff10d5f0bd77876e25896beaba961d7afb484f9be25cfe20/userdata/shm DeviceMajor:0 DeviceMinor:582 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6098282b64423ad9dddb84a69efced826ff8c34354a14bb5812b294431de3af7/userdata/shm DeviceMajor:0 DeviceMinor:564 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/kube-api-access-cpdqx DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~projected/kube-api-access-k6j8c DeviceMajor:0 DeviceMinor:282 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:565 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1107 DeviceMajor:0 DeviceMinor:1107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-634 DeviceMajor:0 DeviceMinor:634 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-870 DeviceMajor:0 DeviceMinor:870 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-846 DeviceMajor:0 DeviceMinor:846 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1022 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~projected/kube-api-access-bj9hn DeviceMajor:0 DeviceMinor:1023 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1184 DeviceMajor:0 DeviceMinor:1184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~projected/kube-api-access-pn4dg DeviceMajor:0 DeviceMinor:1237 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9e00ccb287dd8b9291c3306328c5788a23d37066197f78308e926a653d3929ef/userdata/shm DeviceMajor:0 DeviceMinor:420 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-612 DeviceMajor:0 DeviceMinor:612 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1073 DeviceMajor:0 DeviceMinor:1073 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1240 DeviceMajor:0 DeviceMinor:1240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/80c48134-cb22-4cf9-b076-ce39af2f4113/volumes/kubernetes.io~projected/kube-api-access-2dlvj DeviceMajor:0 DeviceMinor:272 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6acd115e-71e1-4a50-8892-fc6ea2927fec/volumes/kubernetes.io~projected/kube-api-access-dlhnq DeviceMajor:0 DeviceMinor:394 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a4075ac7bf30cf0807cbb607815178772dc5e91f6a2b4d72d3b7f7d98bacf78/userdata/shm DeviceMajor:0 DeviceMinor:1140 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~projected/kube-api-access-vjwbx DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c4ed0c32-c13f-4c72-b83f-9af19b2950a3/volumes/kubernetes.io~projected/kube-api-access-rkm2l DeviceMajor:0 DeviceMinor:407 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-718 DeviceMajor:0 DeviceMinor:718 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1bcf44075958c0ed97fdf56576e694d0a80dc968641ca6c609aa09a703fa5b8a/userdata/shm DeviceMajor:0 DeviceMinor:140 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~projected/kube-api-access-8p8qd DeviceMajor:0 DeviceMinor:277 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-662 DeviceMajor:0 DeviceMinor:662 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7e8e2788d3f71b91ae59e0572e5bd8a6d561d26dc7f9a0c7368468679564cddb/userdata/shm DeviceMajor:0 DeviceMinor:631 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1228 DeviceMajor:0 
DeviceMinor:1228 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1085 DeviceMajor:0 DeviceMinor:1085 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1020 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a998a368841f373282c4c48f7a0c3385bacc2f3f776a934e2fcfec35d45e83ad/userdata/shm DeviceMajor:0 DeviceMinor:1283 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1661a18dd33340919d8a88e5f91b59d5c684dbe01a019f25562e9696f9314f09/userdata/shm DeviceMajor:0 DeviceMinor:297 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1210 DeviceMajor:0 DeviceMinor:1210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/61abb34a-08f0-4438-9a89-c712b2048878/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:626 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-894 DeviceMajor:0 DeviceMinor:894 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-696 DeviceMajor:0 DeviceMinor:696 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1173 DeviceMajor:0 DeviceMinor:1173 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~projected/kube-api-access-crz8x DeviceMajor:0 DeviceMinor:137 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/91f1c7bcd88e0a3be2b4b31028823b921a4268810f70c73edd3e94760f9af545/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/18b29e37-cda9-41a8-a910-3d8f74be3cf3/volumes/kubernetes.io~projected/kube-api-access-bkfcl DeviceMajor:0 DeviceMinor:385 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/98ac5423-b231-44e5-9545-424d635ed6ee/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:554 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-598 DeviceMajor:0 DeviceMinor:598 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-621 DeviceMajor:0 DeviceMinor:621 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/06898300-c6e2-4d64-9ebf-d20f4338cccc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:347 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-883 DeviceMajor:0 DeviceMinor:883 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-99 DeviceMajor:0 DeviceMinor:99 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9f34b77802d18424b8b09571a545a52e9fcc1be93f02c10a74325b38bef31cc8/userdata/shm DeviceMajor:0 DeviceMinor:533 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1271 DeviceMajor:0 DeviceMinor:1271 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:561 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-837 DeviceMajor:0 DeviceMinor:837 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-958 DeviceMajor:0 DeviceMinor:958 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/62011c22e1ac970c8b8da7b0bdd419d5d816510d4051805a82fcedbbc65b8c3c/userdata/shm DeviceMajor:0 DeviceMinor:279 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-442 DeviceMajor:0 DeviceMinor:442 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-529 DeviceMajor:0 DeviceMinor:529 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-802 DeviceMajor:0 DeviceMinor:802 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5506ac36fbaf2416aa135b7e1945e22b7c62738888b7f9b117791bba76b3408f/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3fab5bbd-672c-4e18-9c1e-438e2360bc54/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:489 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-193 DeviceMajor:0 DeviceMinor:193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-485 DeviceMajor:0 DeviceMinor:485 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-527 DeviceMajor:0 DeviceMinor:527 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-600 DeviceMajor:0 DeviceMinor:600 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e210c3c8004e773a0bdb2dc099fdf8b85ea7ff84b49ad9f3a84bc8f3cd8ea30/userdata/shm DeviceMajor:0 DeviceMinor:1254 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:238 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/18b29e37-cda9-41a8-a910-3d8f74be3cf3/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:380 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-481 DeviceMajor:0 DeviceMinor:481 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-702 DeviceMajor:0 DeviceMinor:702 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-683 DeviceMajor:0 DeviceMinor:683 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/270ee55e27188738f11e238739f68e6ee4947520aca0c90df01eaa05dc4ab81c/userdata/shm DeviceMajor:0 DeviceMinor:105 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:163 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e2e81865-21fa-4e35-a870-738c13ac5b70/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1098 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7b137033-0db2-46c9-a526-f8234345e883/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:773 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/92804daf-1fd0-4008-afff-4f9bc362990b/volumes/kubernetes.io~projected/kube-api-access-78j6f DeviceMajor:0 DeviceMinor:916 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:416 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-897 DeviceMajor:0 DeviceMinor:897 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~projected/kube-api-access-894cz DeviceMajor:0 DeviceMinor:566 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-508 DeviceMajor:0 DeviceMinor:508 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc3fc06d095cd3d772a346e20eb25cbebb8c5a43f1aa9a2b39dd85c115bbfd06/userdata/shm DeviceMajor:0 DeviceMinor:1093 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/61a11a661104fcf20e20292b60baae6791127267c4b1c5fced71911c81734966/userdata/shm DeviceMajor:0 DeviceMinor:386 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/af5828ea-090f-4c8f-90e6-c4e405e69ec5/volumes/kubernetes.io~projected/kube-api-access-tb2v2 DeviceMajor:0 DeviceMinor:865 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1147 DeviceMajor:0 DeviceMinor:1147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-640 DeviceMajor:0 DeviceMinor:640 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5c820d0ae9471b6671d41e47749616c410e4703c6cd54cc32cf06336c4e2c81b/userdata/shm DeviceMajor:0 DeviceMinor:1025 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-548 DeviceMajor:0 DeviceMinor:548 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-434 DeviceMajor:0 DeviceMinor:434 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-650 DeviceMajor:0 DeviceMinor:650 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-713 DeviceMajor:0 DeviceMinor:713 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/546cf649-8e0d-4c8a-a197-412db42e36b6/volumes/kubernetes.io~projected/kube-api-access-htmbc DeviceMajor:0 DeviceMinor:426 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a9581d1c5f8271fb515c6059b20bafd4d644e9f547a789be9ede7138665e2db3/userdata/shm DeviceMajor:0 DeviceMinor:1069 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1190 DeviceMajor:0 DeviceMinor:1190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/78702d1c-b5ab-4e00-92da-cb2513a72024/volumes/kubernetes.io~projected/kube-api-access-5pwp5 DeviceMajor:0 DeviceMinor:543 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2576028c-40d8-4ef4-ba41-a5aff01f2ed3/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:471 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-842 DeviceMajor:0 DeviceMinor:842 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:555 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-608 DeviceMajor:0 DeviceMinor:608 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-896 DeviceMajor:0 DeviceMinor:896 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-796 DeviceMajor:0 DeviceMinor:796 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-490 DeviceMajor:0 DeviceMinor:490 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~projected/kube-api-access-gbffz DeviceMajor:0 DeviceMinor:66 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-646 DeviceMajor:0 DeviceMinor:646 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1182 DeviceMajor:0 DeviceMinor:1182 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~projected/kube-api-access-7n9vm DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-459 DeviceMajor:0 DeviceMinor:459 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-166 DeviceMajor:0 DeviceMinor:166 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-766 DeviceMajor:0 DeviceMinor:766 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-623 DeviceMajor:0 DeviceMinor:623 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:933 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-642 DeviceMajor:0 DeviceMinor:642 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cbe8c564562ad68c8d52a661bafedb53468d82eca60669d5f75aa1269bf0c5a6/userdata/shm DeviceMajor:0 DeviceMinor:182 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/06898300-c6e2-4d64-9ebf-d20f4338cccc/volumes/kubernetes.io~projected/kube-api-access-rnq2j DeviceMajor:0 DeviceMinor:391 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/volumes/kubernetes.io~projected/kube-api-access-jzxmv DeviceMajor:0 DeviceMinor:1139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b137033-0db2-46c9-a526-f8234345e883/volumes/kubernetes.io~projected/kube-api-access-clddw DeviceMajor:0 DeviceMinor:774 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-432 DeviceMajor:0 DeviceMinor:432 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/48d1ac933722c354749db6ab6a42199918879d26d241d24eef57eac8e0adbd70/userdata/shm DeviceMajor:0 DeviceMinor:935 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/215b1ea5727b014cfc6dc502ee238518328ed6ffbcea54f35ba8164d0dcfcada/userdata/shm DeviceMajor:0 DeviceMinor:937 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-947 DeviceMajor:0 DeviceMinor:947 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1bf12b7aaff989dde65f3016c4b888d0b3e38d175867b33d7c6f63dd79bf7d2c/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b7d96d2b840dcb05cea8fd6a137b484ba6109d3fc00e9d95d9aeb1de00554068/userdata/shm DeviceMajor:0 DeviceMinor:576 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a97067053251ed5fdadac8ab4f77e00bdc2868f3bbfa6100d974d3529e1d0acb/userdata/shm DeviceMajor:0 DeviceMinor:578 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-499 DeviceMajor:0 DeviceMinor:499 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1143 DeviceMajor:0 DeviceMinor:1143 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-829 DeviceMajor:0 DeviceMinor:829 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~projected/kube-api-access-8cm45 DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-638 DeviceMajor:0 DeviceMinor:638 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1059 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1291 DeviceMajor:0 DeviceMinor:1291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/98ac5423-b231-44e5-9545-424d635ed6ee/volumes/kubernetes.io~projected/kube-api-access-bq27v DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-595 DeviceMajor:0 DeviceMinor:595 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-848 DeviceMajor:0 DeviceMinor:848 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/75ebc0148d076f2cc0fe06e466687642989770890443a44d9864ba7cf21ec2cd/userdata/shm DeviceMajor:0 DeviceMinor:395 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1145 DeviceMajor:0 DeviceMinor:1145 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-402 DeviceMajor:0 DeviceMinor:402 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b1a4a1b2ee116e9b33918fc922709316e70b8330853b6fcb741a4accb5e6b8be/userdata/shm DeviceMajor:0 DeviceMinor:164 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/58c6f5a2-c0a8-4636-a057-cedbe0151579/volumes/kubernetes.io~projected/kube-api-access-grhdv DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/decd8c56-e0f0-4119-917f-56652c8f8372/volumes/kubernetes.io~projected/kube-api-access-8tqm5 DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-532 DeviceMajor:0 DeviceMinor:532 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-438 DeviceMajor:0 DeviceMinor:438 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1071 DeviceMajor:0 DeviceMinor:1071 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1171 DeviceMajor:0 DeviceMinor:1171 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-512 DeviceMajor:0 DeviceMinor:512 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a28c1fb386c96884c0fa554c8dd9df374181814fab6413b91a2304727463f391/userdata/shm DeviceMajor:0 DeviceMinor:285 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-436 DeviceMajor:0 DeviceMinor:436 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7be6f9b5-fe27-4df5-b933-63bbb12f680c/volumes/kubernetes.io~projected/kube-api-access-mk722 DeviceMajor:0 DeviceMinor:1166 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c8f325fb-0075-4a18-ba7e-669ab19bc91a/volumes/kubernetes.io~projected/kube-api-access-jxvxh DeviceMajor:0 DeviceMinor:466 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ec677f3d-06c4-4cf4-9f24-69894b9a9118/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1131 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1289 DeviceMajor:0 DeviceMinor:1289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dabc3c9b-ed58-4fd4-8735-65d504fa299a/volumes/kubernetes.io~projected/kube-api-access-vw2vc DeviceMajor:0 DeviceMinor:815 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ec677f3d-06c4-4cf4-9f24-69894b9a9118/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1129 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1300 DeviceMajor:0 DeviceMinor:1300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-107 DeviceMajor:0 DeviceMinor:107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3edc7410-417a-4e55-9276-ac271fd52297/volumes/kubernetes.io~projected/kube-api-access-vzpth DeviceMajor:0 DeviceMinor:281 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-735 DeviceMajor:0 DeviceMinor:735 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1324 DeviceMajor:0 DeviceMinor:1324 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-672 DeviceMajor:0 DeviceMinor:672 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-483 DeviceMajor:0 DeviceMinor:483 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-333 DeviceMajor:0 DeviceMinor:333 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1096 DeviceMajor:0 DeviceMinor:1096 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7be6f9b5-fe27-4df5-b933-63bbb12f680c/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1156 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-494 DeviceMajor:0 DeviceMinor:494 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1192 DeviceMajor:0 DeviceMinor:1192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-91 DeviceMajor:0 DeviceMinor:91 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-488 DeviceMajor:0 DeviceMinor:488 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-569 DeviceMajor:0 DeviceMinor:569 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-943 DeviceMajor:0 DeviceMinor:943 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7/volumes/kubernetes.io~projected/kube-api-access-r5wsp DeviceMajor:0 DeviceMinor:128 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/af5828ea-090f-4c8f-90e6-c4e405e69ec5/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:857 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7a7a2b85bd49039ea82202ec9093218400fe6ba37620dacb89cb656ef0f6f1e1/userdata/shm DeviceMajor:0 DeviceMinor:927 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/45197931f8b0fad8d3f78bcaed3a231713e7d574cb0f64bc503525eeb9919ca8/userdata/shm DeviceMajor:0 DeviceMinor:491 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1109 DeviceMajor:0 DeviceMinor:1109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/78702d1c-b5ab-4e00-92da-cb2513a72024/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:472 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-648 DeviceMajor:0 DeviceMinor:648 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-363 DeviceMajor:0 DeviceMinor:363 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-976 DeviceMajor:0 DeviceMinor:976 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-971 DeviceMajor:0 DeviceMinor:971 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a676c43c-4e0a-4826-86c1-288260611b09/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:1175 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4ff0199536e5f54a5bdaa7868fb5ea7e61ffa31ff819b0546dd411cddd134f43/userdata/shm DeviceMajor:0 DeviceMinor:90 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8f7d8fc8-c313-416f-b62b-b54db9944066/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:763 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-371 DeviceMajor:0 DeviceMinor:371 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-941 DeviceMajor:0 DeviceMinor:941 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7012676e-f35d-46e5-83e8-a63172dd076e/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:431 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1082261815c7e19c2e96bf70a147ae8ad719192a52e2b659efb185314dc947a8/userdata/shm DeviceMajor:0 DeviceMinor:457 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-542 DeviceMajor:0 DeviceMinor:542 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/858a717b-a44e-4b8d-9974-7451a89cf104/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:1249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3d24aaf417d59fb450308aa24f5e0ecd8e28bc338934b0ef78ad3e79bccb9318/userdata/shm DeviceMajor:0 DeviceMinor:174 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/75c58162-a0ba-40f4-8894-38f17dc2fb6d/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:563 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d8b8861a29ec4294bd11b25781775394a6ac15d030424306c0b690edecc2b3b2/userdata/shm DeviceMajor:0 DeviceMinor:711 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3b52f4ccabc096d80ff39ba947c7023e50c18db78664ec7aa1e9ea4675a4b974/userdata/shm DeviceMajor:0 DeviceMinor:929 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1038 DeviceMajor:0 DeviceMinor:1038 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1058 DeviceMajor:0 DeviceMinor:1058 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~projected/kube-api-access-rn9d8 DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f366572292d05f4ad2d57a2dd6026d019460bb016409712b7a89b5deefa6fc1b/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-571 DeviceMajor:0 DeviceMinor:571 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/af2be4f9-f632-4a72-8f39-c96954403edc/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:89 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1295 DeviceMajor:0 DeviceMinor:1295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-451 DeviceMajor:0 DeviceMinor:451 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/48d4606b470a81b62815d5eff7b40ce10241cd1db0d833c19e9920f2538a3f32/userdata/shm DeviceMajor:0 DeviceMinor:1136 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~projected/kube-api-access-rrz8r DeviceMajor:0 DeviceMinor:744 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-980 DeviceMajor:0 DeviceMinor:980 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2bcb98d1b68dc897f73c1a855233e9b02c59d6a1d42e70e57ef6fecb191978ff/userdata/shm DeviceMajor:0 DeviceMinor:754 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1046 DeviceMajor:0 DeviceMinor:1046 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d/volumes/kubernetes.io~projected/kube-api-access-qxfd9 DeviceMajor:0 DeviceMinor:1068 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-604 DeviceMajor:0 DeviceMinor:604 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-828 DeviceMajor:0 DeviceMinor:828 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:275 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-440 DeviceMajor:0 DeviceMinor:440 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e2e81865-21fa-4e35-a870-738c13ac5b70/volumes/kubernetes.io~projected/kube-api-access-5tgff DeviceMajor:0 DeviceMinor:1079 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a52be87c-e707-4269-96da-537708d52b64/volumes/kubernetes.io~projected/kube-api-access-kv24m DeviceMajor:0 DeviceMinor:162 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-382 DeviceMajor:0 DeviceMinor:382 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7113d80392d29ba3714ca17e946cc57862288af6721d6bbfe7532c4452680bbe/userdata/shm DeviceMajor:0 DeviceMinor:1103 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/4fd49d14-d513-4f68-8a87-3cef8a033c58/volumes/kubernetes.io~projected/kube-api-access-5q4lp DeviceMajor:0 DeviceMinor:329 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7201246ec91870addf10a9f35436bf3abda03d1a2eefd6894425648ac015fdbf/userdata/shm DeviceMajor:0 DeviceMinor:294 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-336 DeviceMajor:0 DeviceMinor:336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-545 DeviceMajor:0 DeviceMinor:545 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-808 DeviceMajor:0 DeviceMinor:808 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-627 DeviceMajor:0 DeviceMinor:627 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9fccc7356f4c0fc6ca6003f16e1a3945d087e393bfff22e084766d407a7387c5/userdata/shm DeviceMajor:0 DeviceMinor:1027 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1042 DeviceMajor:0 DeviceMinor:1042 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e2e81865-21fa-4e35-a870-738c13ac5b70/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1075 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5f264243f9d37a0085ae08d6a429bf7d068aa6d2f402d16789c1248a2996b55b/userdata/shm DeviceMajor:0 DeviceMinor:574 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/40c5200e9b9335dc4fde8e4b8c2702394db4fe9784008c565be0de314808268d/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ed2b5ced-d986-4622-9e0a-d39363629408/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1021 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/eba23b843b06a31c02fbe2e5edf93d18b7d3dc9682c0e2415a4ef18d5dc94d9a/userdata/shm DeviceMajor:0 DeviceMinor:573 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/288c3a57623280dd907a240618bbdd493e84db9c6fc6a9b8ebbd7c2959445df1/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-960 DeviceMajor:0 DeviceMinor:960 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e6d01c66ad4ba09830602801e48d0eb21df8043e491a9222312021d0c71dccd/userdata/shm DeviceMajor:0 DeviceMinor:1165 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/33bb562f-84e7-4fcb-b008-416c09a5ecf0/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:1232 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-225 DeviceMajor:0 DeviceMinor:225 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/81ed4699f10fea30224a5472efb9432589611c0502019a2f9ffb24815fcdafb9/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c569676a-51dd-418c-87a5-719c18fe4c95/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:562 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1760667bc1ae6e6c0373f38881f9d459051273b2be065a4f5aefaa03ffb1434b/userdata/shm DeviceMajor:0 DeviceMinor:586 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/61abb34a-08f0-4438-9a89-c712b2048878/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:703 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a2cbe0145530499aa6f2ee8bea7d745549e79916137a2b455baf26f9bb8aca75/userdata/shm DeviceMajor:0 DeviceMinor:1031 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1035 DeviceMajor:0 DeviceMinor:1035 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1154 DeviceMajor:0 DeviceMinor:1154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4714ef51-2d24-4938-8c58-80c1485a368b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-290 DeviceMajor:0 DeviceMinor:290 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7c18b07966702439a57f42490f57b89c995ec81c7db0d363c2168675a894d498/userdata/shm DeviceMajor:0 DeviceMinor:300 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:867 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1011 DeviceMajor:0 DeviceMinor:1011 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec677f3d-06c4-4cf4-9f24-69894b9a9118/volumes/kubernetes.io~projected/kube-api-access-vh4lz DeviceMajor:0 DeviceMinor:1132 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1be6fbce0be2d2a600566ad7a089efc0d76906ae49f8bc93720c22ae930e1161/userdata/shm DeviceMajor:0 DeviceMinor:1180 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1259 DeviceMajor:0 DeviceMinor:1259 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/489ce9d0a231fe744fe2609ac45c676f913cd59253cbd1654f71c13c5ab7ceef/userdata/shm DeviceMajor:0 DeviceMinor:579 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-624 DeviceMajor:0 DeviceMinor:624 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/af5828ea-090f-4c8f-90e6-c4e405e69ec5/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:864 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1121 DeviceMajor:0 DeviceMinor:1121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-388 DeviceMajor:0 DeviceMinor:388 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/858a717b-a44e-4b8d-9974-7451a89cf104/volumes/kubernetes.io~projected/kube-api-access-qghmn DeviceMajor:0 DeviceMinor:915 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~projected/kube-api-access-76css DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-757 DeviceMajor:0 DeviceMinor:757 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a676c43c-4e0a-4826-86c1-288260611b09/volumes/kubernetes.io~projected/kube-api-access-p9zww DeviceMajor:0 DeviceMinor:990 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-725 DeviceMajor:0 DeviceMinor:725 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-799 DeviceMajor:0 DeviceMinor:799 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-700 DeviceMajor:0 DeviceMinor:700 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-144 DeviceMajor:0 DeviceMinor:144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:417 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1242 DeviceMajor:0 DeviceMinor:1242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1224 DeviceMajor:0 DeviceMinor:1224 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1261 DeviceMajor:0 DeviceMinor:1261 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/63a61882dcf77787697d30aeb41db64cf3a3a5917a3f53104880927ba62c1424/userdata/shm DeviceMajor:0 DeviceMinor:1157 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e2878c5bde889c9b5090839b4189995b59bf2a7eaa7045a344bf1f8020b8727b/userdata/shm DeviceMajor:0 DeviceMinor:1252 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/59cea4cb-6374-49b6-97b3-d8a19cc1860f/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:1250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6ae2cbe0-aa0a-4f26-994b-660fb962d995/volumes/kubernetes.io~projected/kube-api-access-46zzd DeviceMajor:0 DeviceMinor:133 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/87e7bba244435f8f2d510f4160bfbce671f2f502e5bbb65c6fef9f33ed868be9/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-450 DeviceMajor:0 DeviceMinor:450 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7012676e-f35d-46e5-83e8-a63172dd076e/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:504 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7012676e-f35d-46e5-83e8-a63172dd076e/volumes/kubernetes.io~projected/kube-api-access-lm2wm DeviceMajor:0 DeviceMinor:505 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-658 DeviceMajor:0 DeviceMinor:658 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-208 DeviceMajor:0 DeviceMinor:208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/37b14f21eea6ae068c6ab319848a3075fde8aacf4bdcecd0e6ca1c48ebc11e9a/userdata/shm DeviceMajor:0 DeviceMinor:475 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ca82f2e9-884e-49d1-9863-e87212d01edc/volumes/kubernetes.io~projected/kube-api-access-2btm8 DeviceMajor:0 DeviceMinor:756 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/383b491b9f27144fe9b7a96c0308977fdc414552864afb1ce6b22fbacc40b8ac/userdata/shm DeviceMajor:0 DeviceMinor:1238 Capacity:67108864 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:overlay_0-968 DeviceMajor:0 DeviceMinor:968 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:415 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/544bd972dc91af9025a1eea69f42f5c5c42aa6d851bb5566dd4ab554ab92d7e1/userdata/shm DeviceMajor:0 DeviceMinor:587 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-384 DeviceMajor:0 DeviceMinor:384 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9ff96ce8-6427-4a42-afa6-8b8bc778f094/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-945 DeviceMajor:0 DeviceMinor:945 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-798 DeviceMajor:0 DeviceMinor:798 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-991 DeviceMajor:0 DeviceMinor:991 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/499dfae4e38579ddc7dbe458f0d782fd925c68bc3e1e204ec2926928e4d6fb86/userdata/shm DeviceMajor:0 DeviceMinor:412 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:418 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da07760d7571f3892e97b1fc3d10821bdf692b5194a6d30a2c724a9ebebef870/userdata/shm DeviceMajor:0 DeviceMinor:423 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/494087b2-b532-4c62-89d5-b88a152fa5db/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:931 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5305f7e6ea5f104f1b4e810f1ceec9db5f5fd632e430c871c365b093c1832c48/userdata/shm DeviceMajor:0 DeviceMinor:821 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/255784ad-b52a-4c5c-ad15-278865ee2ccb/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:1280 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1297 DeviceMajor:0 DeviceMinor:1297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/67f4e002-26fb-41e3-abdb-f4928b6c561f/volumes/kubernetes.io~projected/kube-api-access-wqsbq DeviceMajor:0 DeviceMinor:283 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-444 DeviceMajor:0 DeviceMinor:444 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cf1ab0e9895c4d3c13750afafa4343da7c7b17306bc49f279de7d38a89a47c8d/userdata/shm DeviceMajor:0 DeviceMinor:764 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-406 DeviceMajor:0 DeviceMinor:406 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-939 DeviceMajor:0 DeviceMinor:939 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-992 DeviceMajor:0 DeviceMinor:992 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ba26fc62b4c67c05d10c1181444ae82a957f739cc50fff1b515c7ee8cf0d6126/userdata/shm DeviceMajor:0 DeviceMinor:832 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/adefbbde4867112d23ee79a46cdbf443364c4401d65d3a59d065817251804bf8/userdata/shm DeviceMajor:0 DeviceMinor:120 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2/volumes/kubernetes.io~projected/kube-api-access-dhmpd DeviceMajor:0 DeviceMinor:284 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-517 DeviceMajor:0 DeviceMinor:517 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f203fd813bb9fb33eb11a0b15b04ff2b9379aba784360def5e2df17965add9cd/userdata/shm DeviceMajor:0 DeviceMinor:823 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1287 DeviceMajor:0 DeviceMinor:1287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a71c6d42-5ff9-4e96-900c-6e2166bbc9e3/volumes/kubernetes.io~projected/kube-api-access-zrfgk DeviceMajor:0 DeviceMinor:1024 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f207fe64bef8b420052896b2bfb189ccc2b431030abfa5bd7579048d3c21b98/userdata/shm DeviceMajor:0 DeviceMinor:421 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/98805e3ec9d2d2f3839c03ed948de103105a5f1210afc18e423fd6e7cba8b344/userdata/shm DeviceMajor:0 DeviceMinor:428 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d896e197c19c3e11f13f6c1320c71d5019f5e0db2f0e2d3534740ed3aaee68c7/userdata/shm DeviceMajor:0 DeviceMinor:567 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-213 DeviceMajor:0 DeviceMinor:213 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8f7d8fc8-c313-416f-b62b-b54db9944066/volumes/kubernetes.io~projected/kube-api-access-9dkxh DeviceMajor:0 DeviceMinor:507 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1235 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-874 DeviceMajor:0 DeviceMinor:874 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-125 DeviceMajor:0 DeviceMinor:125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/acb5de46f3e25ef76d6a8af08f2a213b03e16ebf52f46ac28fa38e4361f6b5d6/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1037 DeviceMajor:0 DeviceMinor:1037 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-693 DeviceMajor:0 DeviceMinor:693 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2576028c-40d8-4ef4-ba41-a5aff01f2ed3/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:544 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-760 DeviceMajor:0 DeviceMinor:760 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b1ed6c4c3d12558a0c8f33c888f0552999de0d4f4d9c1efc8cc0619df634d5b4/userdata/shm DeviceMajor:0 DeviceMinor:1251 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1293 DeviceMajor:0 DeviceMinor:1293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1bab5125-f4d7-4940-891f-9bb6a2145fac/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:687 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/af2be4f9-f632-4a72-8f39-c96954403edc/volumes/kubernetes.io~projected/kube-api-access-rhhg6 DeviceMajor:0 DeviceMinor:366 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1186 DeviceMajor:0 DeviceMinor:1186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bfb8eb142f502ea7593a0533e3254ede9b8f9f56754df54ad25f7a0adb710480/userdata/shm DeviceMajor:0 DeviceMinor:309 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ed8577f4b5f593fdd1508aeb09fd5534fb09a47c902e95af8327061b1713177b/userdata/shm DeviceMajor:0 DeviceMinor:956 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1137 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/var/lib/kubelet/pods/58c6f5a2-c0a8-4636-a057-cedbe0151579/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:556 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/67624ad2-babb-4b0e-9599-99325c286b22/volumes/kubernetes.io~projected/kube-api-access-msl9t DeviceMajor:0 DeviceMinor:559 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/78702d1c-b5ab-4e00-92da-cb2513a72024/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:535 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-606 DeviceMajor:0 DeviceMinor:606 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-268 DeviceMajor:0 DeviceMinor:268 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6ae2cbe0-aa0a-4f26-994b-660fb962d995/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:560 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-898 DeviceMajor:0 DeviceMinor:898 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8fedd22b9da118be6af452faa704499daf6539b968c5fd646de69afe85423626/userdata/shm DeviceMajor:0 DeviceMinor:298 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-520 DeviceMajor:0 DeviceMinor:520 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-738 DeviceMajor:0 DeviceMinor:738 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/33bb562f-84e7-4fcb-b008-416c09a5ecf0/volumes/kubernetes.io~projected/kube-api-access-5kwbk DeviceMajor:0 DeviceMinor:866 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-611 DeviceMajor:0 DeviceMinor:611 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-688 DeviceMajor:0 DeviceMinor:688 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/59cea4cb-6374-49b6-97b3-d8a19cc1860f/volumes/kubernetes.io~projected/kube-api-access-tc87d DeviceMajor:0 DeviceMinor:925 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1331 DeviceMajor:0 DeviceMinor:1331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1ba0c261-497c-4236-8f14-98ce5c16af59/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:739 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0664d88f-f697-4182-93cd-f208ff6f3ac2/volumes/kubernetes.io~projected/kube-api-access-99z6r DeviceMajor:0 DeviceMinor:759 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1029 DeviceMajor:0 DeviceMinor:1029 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c34b9543f3e2068cde8c2b7bd9a04ad41c16f834956cffb18edf070cdda1c25d/userdata/shm DeviceMajor:0 DeviceMinor:334 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/255784ad-b52a-4c5c-ad15-278865ee2ccb/volumes/kubernetes.io~projected/kube-api-access-hxsxw DeviceMajor:0 DeviceMinor:955 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/67f4e002-26fb-41e3-abdb-f4928b6c561f/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:414 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-731 DeviceMajor:0 DeviceMinor:731 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1152 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-835 DeviceMajor:0 DeviceMinor:835 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a9aeacf90564eae1348bcdc7f41abed1c44fe0cbc7faf0930e743893a5e4611/userdata/shm DeviceMajor:0 DeviceMinor:1119 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1230 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-228 DeviceMajor:0 DeviceMinor:228 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-794 DeviceMajor:0 DeviceMinor:794 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/43560ec3-3526-40e1-aeb7-e3137a99171d/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1130 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:741 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-478 DeviceMajor:0 DeviceMinor:478 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-825 DeviceMajor:0 DeviceMinor:825 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c20f637b2a13dfb247a3370a860f01309bff13bd9c879b2139d436b648ea6361/userdata/shm DeviceMajor:0 DeviceMinor:838 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-227 DeviceMajor:0 DeviceMinor:227 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-525 DeviceMajor:0 DeviceMinor:525 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962/volumes/kubernetes.io~projected/kube-api-access-h6zxf DeviceMajor:0 DeviceMinor:934 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-761 DeviceMajor:0 DeviceMinor:761 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/80c48134-cb22-4cf9-b076-ce39af2f4113/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:557 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-591 DeviceMajor:0 DeviceMinor:591 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-198 DeviceMajor:0 DeviceMinor:198 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/volumes/kubernetes.io~projected/kube-api-access-vdxnk DeviceMajor:0 DeviceMinor:271 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-779 DeviceMajor:0 DeviceMinor:779 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-180 DeviceMajor:0 DeviceMinor:180 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-706 DeviceMajor:0 DeviceMinor:706 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-666 DeviceMajor:0 DeviceMinor:666 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-849 DeviceMajor:0 DeviceMinor:849 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1bab5125-f4d7-4940-891f-9bb6a2145fac/volumes/kubernetes.io~projected/kube-api-access-7rhlw DeviceMajor:0 DeviceMinor:695 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2576028c-40d8-4ef4-ba41-a5aff01f2ed3/volumes/kubernetes.io~projected/kube-api-access-tmwjp DeviceMajor:0 DeviceMinor:547 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/76470062-ab83-47ed-a669-deeb71996548/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1016 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7fde19c2-64b1-409c-ad9c-2bb213a1cc74/volumes/kubernetes.io~projected/kube-api-access-64lwt DeviceMajor:0 DeviceMinor:111 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c9ed390-3b62-4b81-8c03-0c579a4a686a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1f9e07d3-d157-4948-84a6-04b8aa7eef4c/volumes/kubernetes.io~projected/kube-api-access-nqt9k DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4c3267e5-390a-40a3-bff8-1d1d81fb9a17/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76529f4c-70b1-4fcb-ba48-ae929228f9fc/volumes/kubernetes.io~projected/kube-api-access-wfd6c DeviceMajor:0 DeviceMinor:827 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1263 DeviceMajor:0 DeviceMinor:1263 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/volumes/kubernetes.io~projected/kube-api-access-mj4rq DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-340 DeviceMajor:0 DeviceMinor:340 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-965 DeviceMajor:0 DeviceMinor:965 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1134 DeviceMajor:0 DeviceMinor:1134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-203 DeviceMajor:0 DeviceMinor:203 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ace60ebd-e405-4fd2-96fe-7b16a9e11a07/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:742 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-524 DeviceMajor:0 DeviceMinor:524 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-769 DeviceMajor:0 DeviceMinor:769 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-792 DeviceMajor:0 DeviceMinor:792 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-429 DeviceMajor:0 DeviceMinor:429 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a59746bb-7d76-4fd7-8323-5b92be63afb9/volumes/kubernetes.io~projected/kube-api-access-txq5k DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-369 DeviceMajor:0 DeviceMinor:369 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1060 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/05f5dd54ba8bf6eb7c86554d066ae4a9cf207bcf69ebdccd0c79c526a47c6239/userdata/shm DeviceMajor:0 DeviceMinor:141 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c50a2aec-7ed0-4114-8b25-19579fe931cb/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:233 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-188 DeviceMajor:0 DeviceMinor:188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b283bd8e-3339-4701-ae3c-f009e498b7d4/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:558 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/31f0caeb4e0573e4a148b9c44d3f2f8155d69135fdefa05921e7738e4aa0f4e6/userdata/shm DeviceMajor:0 DeviceMinor:767 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-904 DeviceMajor:0 DeviceMinor:904 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4/volumes/kubernetes.io~projected/kube-api-access-bq48l DeviceMajor:0 DeviceMinor:926 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1168 DeviceMajor:0 DeviceMinor:1168 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-397 DeviceMajor:0 DeviceMinor:397 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15a571c6-7c47-4b57-bc5b-e46544a114c8/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:136 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2b9d54aa-5f71-4a82-8e71-401ed3083a13/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:237 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-408 DeviceMajor:0 DeviceMinor:408 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 DeviceMinor:101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-122 DeviceMajor:0 DeviceMinor:122 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-209 DeviceMajor:0 DeviceMinor:209 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/75c58162-a0ba-40f4-8894-38f17dc2fb6d/volumes/kubernetes.io~projected/kube-api-access-gkz72 DeviceMajor:0 DeviceMinor:550 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-427 DeviceMajor:0 DeviceMinor:427 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1257 DeviceMajor:0 DeviceMinor:1257 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5301cbc9-b3f3-4b2d-a114-1ba0752462f1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/494087b2-b532-4c62-89d5-b88a152fa5db/volumes/kubernetes.io~projected/kube-api-access-z4hzx DeviceMajor:0 DeviceMinor:932 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-902 DeviceMajor:0 DeviceMinor:902 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6acd115e-71e1-4a50-8892-fc6ea2927fec/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:359 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-331 DeviceMajor:0 DeviceMinor:331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-493 DeviceMajor:0 DeviceMinor:493 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/05c9cb4a-5249-4116-a2e5-caa7859e2075/volumes/kubernetes.io~projected/kube-api-access-qrksf DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/43560ec3-3526-40e1-aeb7-e3137a99171d/volumes/kubernetes.io~projected/kube-api-access-j4z8t DeviceMajor:0 DeviceMinor:1133 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1236 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 
Scheduler:none}] NetworkDevices:[{Name:1082261815c7e19 MacAddress:7a:30:c2:3a:96:bb Speed:10000 Mtu:8900} {Name:1661a18dd333409 MacAddress:aa:0c:6b:07:20:fe Speed:10000 Mtu:8900} {Name:1760667bc1ae6e6 MacAddress:e6:0f:58:9c:60:1e Speed:10000 Mtu:8900} {Name:1be6fbce0be2d2a MacAddress:7e:7d:17:24:36:4f Speed:10000 Mtu:8900} {Name:1bf12b7aaff989d MacAddress:52:00:bb:23:e4:69 Speed:10000 Mtu:8900} {Name:215b1ea5727b014 MacAddress:8e:e5:46:71:34:ec Speed:10000 Mtu:8900} {Name:2bcb98d1b68dc89 MacAddress:a2:0b:3f:e6:b3:4a Speed:10000 Mtu:8900} {Name:2e210c3c8004e77 MacAddress:5a:0f:1b:2c:10:c8 Speed:10000 Mtu:8900} {Name:31f0caeb4e0573e MacAddress:da:15:9c:82:35:a9 Speed:10000 Mtu:8900} {Name:37b14f21eea6ae0 MacAddress:6a:39:eb:58:03:d6 Speed:10000 Mtu:8900} {Name:383b491b9f27144 MacAddress:fe:12:ac:45:fa:e4 Speed:10000 Mtu:8900} {Name:3b52f4ccabc096d MacAddress:56:95:dd:f9:91:bf Speed:10000 Mtu:8900} {Name:3d24aaf417d59fb MacAddress:72:85:ee:ac:00:84 Speed:10000 Mtu:8900} {Name:40c5200e9b9335d MacAddress:ca:ed:fc:15:d7:01 Speed:10000 Mtu:8900} {Name:45197931f8b0fad MacAddress:16:8f:d2:b2:4f:6b Speed:10000 Mtu:8900} {Name:489ce9d0a231fe7 MacAddress:a6:d9:aa:01:fc:16 Speed:10000 Mtu:8900} {Name:48d1ac933722c35 MacAddress:de:a4:9d:ad:6e:b3 Speed:10000 Mtu:8900} {Name:48d4606b470a81b MacAddress:36:5b:51:99:25:8c Speed:10000 Mtu:8900} {Name:499dfae4e38579d MacAddress:b6:80:a8:b2:97:16 Speed:10000 Mtu:8900} {Name:4a4075ac7bf30cf MacAddress:7e:f0:72:db:73:0d Speed:10000 Mtu:8900} {Name:544bd972dc91af9 MacAddress:7e:33:81:d8:94:2c Speed:10000 Mtu:8900} {Name:5c820d0ae9471b6 MacAddress:7a:25:0e:3e:19:65 Speed:10000 Mtu:8900} {Name:5e2c5960bcaff75 MacAddress:9e:0d:ec:33:dc:e3 Speed:10000 Mtu:8900} {Name:5f264243f9d37a0 MacAddress:ea:f9:a6:56:a3:a8 Speed:10000 Mtu:8900} {Name:61a11a661104fcf MacAddress:aa:0a:de:c2:6e:b2 Speed:10000 Mtu:8900} {Name:62011c22e1ac970 MacAddress:72:aa:fa:ed:5b:3e Speed:10000 Mtu:8900} {Name:63a61882dcf7778 MacAddress:4a:45:92:69:bf:5d Speed:10000 Mtu:8900} {Name:7113d80392d29ba MacAddress:86:ed:28:29:83:b6 Speed:10000 Mtu:8900} {Name:7201246ec91870a MacAddress:6e:eb:94:de:7e:1d Speed:10000 Mtu:8900} {Name:75ebc0148d076f2 MacAddress:02:cb:17:b2:a8:ce Speed:10000 Mtu:8900} {Name:7a7a2b85bd49039 MacAddress:be:2e:dc:17:62:77 Speed:10000 Mtu:8900} {Name:7e8e2788d3f71b9 MacAddress:6a:cf:29:4f:e3:76 Speed:10000 Mtu:8900} {Name:81ed4699f10fea3 MacAddress:b2:9b:5f:0b:ff:55 Speed:10000 Mtu:8900} {Name:87e7bba244435f8 MacAddress:c6:b7:00:98:51:84 Speed:10000 Mtu:8900} {Name:8f207fe64bef8b4 MacAddress:d6:9a:4c:b0:59:bd Speed:10000 Mtu:8900} {Name:8fedd22b9da118b MacAddress:e2:ac:3e:a8:51:38 Speed:10000 Mtu:8900} {Name:91f1c7bcd88e0a3 MacAddress:1e:bb:d7:f9:6c:47 Speed:10000 Mtu:8900} {Name:98805e3ec9d2d2f MacAddress:f2:38:d3:f0:b9:14 Speed:10000 Mtu:8900} {Name:9e00ccb287dd8b9 MacAddress:42:bd:69:68:4e:50 Speed:10000 Mtu:8900} {Name:9f34b77802d1842 MacAddress:aa:29:7e:9b:64:43 Speed:10000 Mtu:8900} {Name:a28c1fb386c9688 MacAddress:32:6f:89:6a:a8:eb Speed:10000 Mtu:8900} {Name:a2cbe0145530499 MacAddress:d2:9b:82:1c:ac:75 Speed:10000 Mtu:8900} {Name:a97067053251ed5 MacAddress:66:ea:fc:a1:75:88 Speed:10000 Mtu:8900} {Name:a998a368841f373 MacAddress:2a:9b:a0:77:51:ff Speed:10000 Mtu:8900} {Name:b1ed6c4c3d12558 MacAddress:5e:18:3d:70:77:a8 Speed:10000 Mtu:8900} {Name:b7d96d2b840dcb0 MacAddress:a2:1c:ce:2f:d1:43 Speed:10000 Mtu:8900} {Name:ba26fc62b4c67c0 MacAddress:96:37:93:41:bc:e4 Speed:10000 Mtu:8900} {Name:bc3fc06d095cd3d MacAddress:da:b8:0a:87:18:ea Speed:10000 Mtu:8900} 
{Name:bfb8eb142f502ea MacAddress:ee:42:e1:fc:3a:4e Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:7e:a3:96:7e:42:f6 Speed:0 Mtu:8900} {Name:c20f637b2a13dfb MacAddress:ba:bc:2a:87:60:86 Speed:10000 Mtu:8900} {Name:c34b9543f3e2068 MacAddress:aa:86:4f:85:eb:e7 Speed:10000 Mtu:8900} {Name:cbe8c564562ad68 MacAddress:86:5c:09:41:74:7f Speed:10000 Mtu:8900} {Name:cf1ab0e9895c4d3 MacAddress:92:f8:66:e7:d0:fe Speed:10000 Mtu:8900} {Name:da07760d7571f38 MacAddress:2e:13:a7:00:cb:66 Speed:10000 Mtu:8900} {Name:e2878c5bde889c9 MacAddress:86:e2:d0:98:63:a9 Speed:10000 Mtu:8900} {Name:eba23b843b06a31 MacAddress:ee:33:89:21:d4:7f Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:80:8b:c0 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:bd:d1:82 Speed:-1 Mtu:9000} {Name:f203fd813bb9fb3 MacAddress:82:c2:25:e6:65:c4 Speed:10000 Mtu:8900} {Name:f366572292d05f4 MacAddress:02:38:28:0e:56:28 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:7e:bd:f6:a4:63:b0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 19 03:23:14.871733 master-0 kubenswrapper[33867]: I0219 03:23:14.870781 33867 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 19 03:23:14.871733 master-0 kubenswrapper[33867]: I0219 03:23:14.870869 33867 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 19 03:23:14.871733 master-0 kubenswrapper[33867]: I0219 03:23:14.871152 33867 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 19 03:23:14.871733 master-0 kubenswrapper[33867]: I0219 03:23:14.871378 33867 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 19 03:23:14.871733 master-0 kubenswrapper[33867]: I0219 03:23:14.871418 33867 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
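The Capacity and Inodes figures in the MachineInfo dump above are raw counts. Purely as an illustration (the numbers are copied from the dump; the labels are descriptive and do not appear in the log), a minimal Go sketch converts the three sizes that recur throughout the filesystem list into GiB:

```go
package main

import "fmt"

func main() {
	// Byte capacities copied verbatim from the MachineInfo dump above;
	// the labels are illustrative only.
	sizes := []struct {
		label string
		bytes uint64
	}{
		{"overlay_* root filesystem", 214143315968},
		{"secret/projected tmpfs mounts", 49335554048},
		{"per-container /dev/shm", 67108864},
	}
	for _, s := range sizes {
		fmt.Printf("%-30s %12d bytes = %8.2f GiB\n",
			s.label, s.bytes, float64(s.bytes)/(1<<30))
	}
}
```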
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 19 03:23:14.871733 master-0 kubenswrapper[33867]: I0219 03:23:14.871718 33867 topology_manager.go:138] "Creating topology manager with none policy" Feb 19 03:23:14.871733 master-0 kubenswrapper[33867]: I0219 03:23:14.871730 33867 container_manager_linux.go:303] "Creating device plugin manager" Feb 19 03:23:14.871733 master-0 kubenswrapper[33867]: I0219 03:23:14.871741 33867 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 03:23:14.872916 master-0 kubenswrapper[33867]: I0219 03:23:14.871771 33867 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 03:23:14.872916 master-0 kubenswrapper[33867]: I0219 03:23:14.871815 33867 state_mem.go:36] "Initialized new in-memory state store" Feb 19 03:23:14.872916 master-0 kubenswrapper[33867]: I0219 03:23:14.871922 33867 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 19 03:23:14.872916 master-0 kubenswrapper[33867]: I0219 03:23:14.872006 33867 kubelet.go:418] "Attempting to sync node with API server" Feb 19 03:23:14.872916 master-0 kubenswrapper[33867]: I0219 03:23:14.872021 33867 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 19 03:23:14.872916 master-0 kubenswrapper[33867]: I0219 03:23:14.872037 33867 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 19 03:23:14.872916 master-0 kubenswrapper[33867]: I0219 03:23:14.872053 33867 kubelet.go:324] "Adding apiserver pod source" Feb 19 03:23:14.872916 master-0 kubenswrapper[33867]: I0219 03:23:14.872076 33867 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 19 03:23:14.874911 master-0 kubenswrapper[33867]: I0219 03:23:14.874501 33867 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 19 
03:23:14.874911 master-0 kubenswrapper[33867]: I0219 03:23:14.874782 33867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 19 03:23:14.876823 master-0 kubenswrapper[33867]: I0219 03:23:14.876757 33867 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 19 03:23:14.877139 master-0 kubenswrapper[33867]: I0219 03:23:14.877080 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 19 03:23:14.877139 master-0 kubenswrapper[33867]: I0219 03:23:14.877129 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 19 03:23:14.877139 master-0 kubenswrapper[33867]: I0219 03:23:14.877141 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 19 03:23:14.877559 master-0 kubenswrapper[33867]: I0219 03:23:14.877152 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 19 03:23:14.877559 master-0 kubenswrapper[33867]: I0219 03:23:14.877163 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 19 03:23:14.877559 master-0 kubenswrapper[33867]: I0219 03:23:14.877181 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 19 03:23:14.877559 master-0 kubenswrapper[33867]: I0219 03:23:14.877192 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 19 03:23:14.877559 master-0 kubenswrapper[33867]: I0219 03:23:14.877202 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 19 03:23:14.877559 master-0 kubenswrapper[33867]: I0219 03:23:14.877214 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 19 03:23:14.877559 master-0 kubenswrapper[33867]: I0219 03:23:14.877223 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 19 03:23:14.877559 master-0 kubenswrapper[33867]: I0219 03:23:14.877239 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 19 03:23:14.877559 master-0 kubenswrapper[33867]: I0219 03:23:14.877277 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 19 03:23:14.877559 master-0 kubenswrapper[33867]: I0219 03:23:14.877329 33867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 19 03:23:14.878206 master-0 kubenswrapper[33867]: I0219 03:23:14.878159 33867 server.go:1280] "Started kubelet" Feb 19 03:23:14.878534 master-0 kubenswrapper[33867]: I0219 03:23:14.878400 33867 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 19 03:23:14.882675 master-0 kubenswrapper[33867]: I0219 03:23:14.878561 33867 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 19 03:23:14.882675 master-0 kubenswrapper[33867]: I0219 03:23:14.878693 33867 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 19 03:23:14.879088 master-0 systemd[1]: Started Kubernetes Kubelet. Feb 19 03:23:14.890301 master-0 kubenswrapper[33867]: I0219 03:23:14.888547 33867 server.go:449] "Adding debug handlers to kubelet server" Feb 19 03:23:14.890301 master-0 kubenswrapper[33867]: I0219 03:23:14.888721 33867 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 19 03:23:14.908218 master-0 kubenswrapper[33867]: E0219 03:23:14.908157 33867 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 19 03:23:14.916656 master-0 kubenswrapper[33867]: I0219 03:23:14.916391 33867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 19 03:23:14.916811 master-0 kubenswrapper[33867]: I0219 03:23:14.916664 33867 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 19 03:23:14.916811 master-0 kubenswrapper[33867]: I0219 03:23:14.916746 33867 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 19 03:23:14.916811 master-0 kubenswrapper[33867]: I0219 03:23:14.916753 33867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-20 02:55:16 +0000 UTC, rotation deadline is 2026-02-19 22:59:32.529414482 +0000 UTC Feb 19 03:23:14.916811 master-0 kubenswrapper[33867]: I0219 03:23:14.916800 33867 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 19 03:23:14.916811 master-0 kubenswrapper[33867]: I0219 03:23:14.916807 33867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h36m17.612610599s for next certificate rotation Feb 19 03:23:14.916811 master-0 kubenswrapper[33867]: I0219 03:23:14.916793 33867 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 19 03:23:14.917056 master-0 kubenswrapper[33867]: E0219 03:23:14.916844 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:14.918000 master-0 kubenswrapper[33867]: I0219 03:23:14.917961 33867 factory.go:55] Registering systemd factory Feb 19 03:23:14.918000 master-0 kubenswrapper[33867]: I0219 03:23:14.917989 33867 factory.go:221] Registration of the systemd container factory successfully Feb 19 03:23:14.918554 master-0 kubenswrapper[33867]: I0219 03:23:14.918497 33867 factory.go:153] Registering CRI-O factory Feb 19 03:23:14.918624 master-0 kubenswrapper[33867]: I0219 03:23:14.918548 33867 factory.go:221] Registration of the crio container factory successfully Feb 19 03:23:14.918720 master-0 kubenswrapper[33867]: I0219 03:23:14.918708 33867 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 19 03:23:14.918764 master-0 kubenswrapper[33867]: I0219 03:23:14.918743 33867 factory.go:103] Registering Raw factory Feb 19 03:23:14.918803 master-0 kubenswrapper[33867]: I0219 03:23:14.918766 33867 manager.go:1196] Started watching for new ooms in manager Feb 19 03:23:14.919542 master-0 kubenswrapper[33867]: I0219 03:23:14.919506 33867 manager.go:319] Starting recovery of all containers Feb 19 03:23:14.934965 master-0 kubenswrapper[33867]: I0219 03:23:14.934869 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2576028c-40d8-4ef4-ba41-a5aff01f2ed3" volumeName="kubernetes.io/projected/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-kube-api-access-tmwjp" seLinuxMountContext="" Feb 19 03:23:14.934965 master-0 kubenswrapper[33867]: I0219 03:23:14.934953 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67624ad2-babb-4b0e-9599-99325c286b22" volumeName="kubernetes.io/projected/67624ad2-babb-4b0e-9599-99325c286b22-kube-api-access-msl9t" seLinuxMountContext="" Feb 19 03:23:14.935192 master-0 kubenswrapper[33867]: I0219 
03:23:14.934979 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dabc3c9b-ed58-4fd4-8735-65d504fa299a" volumeName="kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-utilities" seLinuxMountContext="" Feb 19 03:23:14.935192 master-0 kubenswrapper[33867]: I0219 03:23:14.934998 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbc2f7d0-4bae-4d4a-b041-a624ec2b9333" volumeName="kubernetes.io/projected/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-kube-api-access-8p8qd" seLinuxMountContext="" Feb 19 03:23:14.935192 master-0 kubenswrapper[33867]: I0219 03:23:14.935017 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ace60ebd-e405-4fd2-96fe-7b16a9e11a07" volumeName="kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-serving-ca" seLinuxMountContext="" Feb 19 03:23:14.935192 master-0 kubenswrapper[33867]: I0219 03:23:14.935069 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" volumeName="kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs" seLinuxMountContext="" Feb 19 03:23:14.935192 master-0 kubenswrapper[33867]: I0219 03:23:14.935088 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59cea4cb-6374-49b6-97b3-d8a19cc1860f" volumeName="kubernetes.io/projected/59cea4cb-6374-49b6-97b3-d8a19cc1860f-kube-api-access-tc87d" seLinuxMountContext="" Feb 19 03:23:14.935192 master-0 kubenswrapper[33867]: I0219 03:23:14.935105 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="decd8c56-e0f0-4119-917f-56652c8f8372" volumeName="kubernetes.io/configmap/decd8c56-e0f0-4119-917f-56652c8f8372-iptables-alerter-script" seLinuxMountContext="" Feb 19 03:23:14.935192 master-0 kubenswrapper[33867]: I0219 03:23:14.935128 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e81865-21fa-4e35-a870-738c13ac5b70" volumeName="kubernetes.io/projected/e2e81865-21fa-4e35-a870-738c13ac5b70-kube-api-access-5tgff" seLinuxMountContext="" Feb 19 03:23:14.935192 master-0 kubenswrapper[33867]: I0219 03:23:14.935149 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52be87c-e707-4269-96da-537708d52b64" volumeName="kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-env-overrides" seLinuxMountContext="" Feb 19 03:23:14.935192 master-0 kubenswrapper[33867]: I0219 03:23:14.935168 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0664d88f-f697-4182-93cd-f208ff6f3ac2" volumeName="kubernetes.io/projected/0664d88f-f697-4182-93cd-f208ff6f3ac2-kube-api-access-99z6r" seLinuxMountContext="" Feb 19 03:23:14.935192 master-0 kubenswrapper[33867]: I0219 03:23:14.935186 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06898300-c6e2-4d64-9ebf-d20f4338cccc" volumeName="kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935203 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5" 
volumeName="kubernetes.io/configmap/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-trusted-ca" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935224 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76529f4c-70b1-4fcb-ba48-ae929228f9fc" volumeName="kubernetes.io/projected/76529f4c-70b1-4fcb-ba48-ae929228f9fc-kube-api-access-wfd6c" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935243 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06898300-c6e2-4d64-9ebf-d20f4338cccc" volumeName="kubernetes.io/projected/06898300-c6e2-4d64-9ebf-d20f4338cccc-kube-api-access-rnq2j" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935291 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ec16b3a-5d5c-46fe-87f0-89f93a2775ed" volumeName="kubernetes.io/projected/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-kube-api-access-jzxmv" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935317 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c569676a-51dd-418c-87a5-719c18fe4c95" volumeName="kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-encryption-config" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935346 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="05c9cb4a-5249-4116-a2e5-caa7859e2075" volumeName="kubernetes.io/secret/05c9cb4a-5249-4116-a2e5-caa7859e2075-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935364 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61abb34a-08f0-4438-9a89-c712b2048878" volumeName="kubernetes.io/projected/61abb34a-08f0-4438-9a89-c712b2048878-kube-api-access" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935381 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c569676a-51dd-418c-87a5-719c18fe4c95" volumeName="kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935398 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c569676a-51dd-418c-87a5-719c18fe4c95" volumeName="kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-client" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935417 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76470062-ab83-47ed-a669-deeb71996548" volumeName="kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-stats-auth" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935435 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92804daf-1fd0-4008-afff-4f9bc362990b" volumeName="kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-config" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935452 33867 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="af5828ea-090f-4c8f-90e6-c4e405e69ec5" volumeName="kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935470 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" volumeName="kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-trusted-ca-bundle" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935491 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4714ef51-2d24-4938-8c58-80c1485a368b" volumeName="kubernetes.io/configmap/4714ef51-2d24-4938-8c58-80c1485a368b-config" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935518 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4714ef51-2d24-4938-8c58-80c1485a368b" volumeName="kubernetes.io/secret/4714ef51-2d24-4938-8c58-80c1485a368b-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935543 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7ca08cc0-cc64-4e13-9465-c9b0bfacb60d" volumeName="kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-node-bootstrap-token" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935569 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61abb34a-08f0-4438-9a89-c712b2048878" volumeName="kubernetes.io/secret/61abb34a-08f0-4438-9a89-c712b2048878-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.935589 master-0 kubenswrapper[33867]: I0219 03:23:14.935593 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7012676e-f35d-46e5-83e8-a63172dd076e" volumeName="kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-kube-api-access-lm2wm" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935611 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6fae256-6a2e-45e7-8f2f-d471f46ad3b2" volumeName="kubernetes.io/projected/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2-kube-api-access-dhmpd" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935628 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75c58162-a0ba-40f4-8894-38f17dc2fb6d" volumeName="kubernetes.io/secret/75c58162-a0ba-40f4-8894-38f17dc2fb6d-metrics-tls" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935647 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7ca08cc0-cc64-4e13-9465-c9b0bfacb60d" volumeName="kubernetes.io/projected/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-kube-api-access-qxfd9" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935666 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af5828ea-090f-4c8f-90e6-c4e405e69ec5" volumeName="kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-config" seLinuxMountContext="" Feb 19 03:23:14.936147 
master-0 kubenswrapper[33867]: I0219 03:23:14.935683 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" volumeName="kubernetes.io/projected/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-kube-api-access-r5wsp" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935700 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b137033-0db2-46c9-a526-f8234345e883" volumeName="kubernetes.io/secret/7b137033-0db2-46c9-a526-f8234345e883-proxy-tls" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935717 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f7d8fc8-c313-416f-b62b-b54db9944066" volumeName="kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-kube-api-access-9dkxh" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935737 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" volumeName="kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-script-lib" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935754 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4ed0c32-c13f-4c72-b83f-9af19b2950a3" volumeName="kubernetes.io/projected/c4ed0c32-c13f-4c72-b83f-9af19b2950a3-kube-api-access-rkm2l" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935772 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" volumeName="kubernetes.io/projected/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-kube-api-access-rn9d8" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935792 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ff96ce8-6427-4a42-afa6-8b8bc778f094" volumeName="kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935809 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a59746bb-7d76-4fd7-8323-5b92be63afb9" volumeName="kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-kube-api-access-txq5k" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935828 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af2be4f9-f632-4a72-8f39-c96954403edc" volumeName="kubernetes.io/secret/af2be4f9-f632-4a72-8f39-c96954403edc-cloud-controller-manager-operator-tls" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935847 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" volumeName="kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935868 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-service-ca" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935886 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6acd115e-71e1-4a50-8892-fc6ea2927fec" volumeName="kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935903 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76470062-ab83-47ed-a669-deeb71996548" volumeName="kubernetes.io/projected/76470062-ab83-47ed-a669-deeb71996548-kube-api-access-bj9hn" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935922 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ff96ce8-6427-4a42-afa6-8b8bc778f094" volumeName="kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-kube-api-access-cpdqx" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935940 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" volumeName="kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-config" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935962 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af2be4f9-f632-4a72-8f39-c96954403edc" volumeName="kubernetes.io/projected/af2be4f9-f632-4a72-8f39-c96954403edc-kube-api-access-rhhg6" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935980 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b283bd8e-3339-4701-ae3c-f009e498b7d4" volumeName="kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-profile-collector-cert" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.935997 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" volumeName="kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.936023 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" volumeName="kubernetes.io/secret/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.936044 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b137033-0db2-46c9-a526-f8234345e883" volumeName="kubernetes.io/projected/7b137033-0db2-46c9-a526-f8234345e883-kube-api-access-clddw" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.936063 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ec16b3a-5d5c-46fe-87f0-89f93a2775ed" volumeName="kubernetes.io/empty-dir/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-textfile" seLinuxMountContext="" Feb 19 03:23:14.936147 
master-0 kubenswrapper[33867]: I0219 03:23:14.936083 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52be87c-e707-4269-96da-537708d52b64" volumeName="kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-ovnkube-identity-cm" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.936102 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bab5125-f4d7-4940-891f-9bb6a2145fac" volumeName="kubernetes.io/projected/1bab5125-f4d7-4940-891f-9bb6a2145fac-kube-api-access-7rhlw" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.936121 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="494087b2-b532-4c62-89d5-b88a152fa5db" volumeName="kubernetes.io/secret/494087b2-b532-4c62-89d5-b88a152fa5db-cluster-storage-operator-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.936143 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67f4e002-26fb-41e3-abdb-f4928b6c561f" volumeName="kubernetes.io/projected/67f4e002-26fb-41e3-abdb-f4928b6c561f-kube-api-access-wqsbq" seLinuxMountContext="" Feb 19 03:23:14.936147 master-0 kubenswrapper[33867]: I0219 03:23:14.936163 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" volumeName="kubernetes.io/secret/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936181 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a676c43c-4e0a-4826-86c1-288260611b09" volumeName="kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936202 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ace60ebd-e405-4fd2-96fe-7b16a9e11a07" volumeName="kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-trusted-ca-bundle" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936222 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" volumeName="kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936241 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" volumeName="kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936291 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75c58162-a0ba-40f4-8894-38f17dc2fb6d" volumeName="kubernetes.io/configmap/75c58162-a0ba-40f4-8894-38f17dc2fb6d-config-volume" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936318 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f7d8fc8-c313-416f-b62b-b54db9944066" volumeName="kubernetes.io/empty-dir/8f7d8fc8-c313-416f-b62b-b54db9944066-cache" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936338 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43560ec3-3526-40e1-aeb7-e3137a99171d" volumeName="kubernetes.io/projected/43560ec3-3526-40e1-aeb7-e3137a99171d-kube-api-access-j4z8t" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936356 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76470062-ab83-47ed-a669-deeb71996548" volumeName="kubernetes.io/configmap/76470062-ab83-47ed-a669-deeb71996548-service-ca-bundle" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936376 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a676c43c-4e0a-4826-86c1-288260611b09" volumeName="kubernetes.io/projected/a676c43c-4e0a-4826-86c1-288260611b09-kube-api-access-p9zww" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936395 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca82f2e9-884e-49d1-9863-e87212d01edc" volumeName="kubernetes.io/projected/ca82f2e9-884e-49d1-9863-e87212d01edc-kube-api-access-2btm8" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936413 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec677f3d-06c4-4cf4-9f24-69894b9a9118" volumeName="kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936432 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cf1a1c6-f858-4f89-ac8c-97d13ed8a962" volumeName="kubernetes.io/projected/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-kube-api-access-h6zxf" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936452 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92804daf-1fd0-4008-afff-4f9bc362990b" volumeName="kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-auth-proxy-config" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936469 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca82f2e9-884e-49d1-9863-e87212d01edc" volumeName="kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-catalog-content" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936488 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec677f3d-06c4-4cf4-9f24-69894b9a9118" volumeName="kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-metrics-client-ca" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936506 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ace60ebd-e405-4fd2-96fe-7b16a9e11a07" volumeName="kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-encryption-config" seLinuxMountContext="" Feb 19 
03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936524 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af2be4f9-f632-4a72-8f39-c96954403edc" volumeName="kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-auth-proxy-config" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936542 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec677f3d-06c4-4cf4-9f24-69894b9a9118" volumeName="kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936562 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" volumeName="kubernetes.io/projected/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-kube-api-access-nqt9k" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936580 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-config" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936599 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4" volumeName="kubernetes.io/projected/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-kube-api-access-bq48l" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936624 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c9ed390-3b62-4b81-8c03-0c579a4a686a" volumeName="kubernetes.io/configmap/6c9ed390-3b62-4b81-8c03-0c579a4a686a-config" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936649 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7be6f9b5-fe27-4df5-b933-63bbb12f680c" volumeName="kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936673 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7012676e-f35d-46e5-83e8-a63172dd076e" volumeName="kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-ca-certs" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936694 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="858a717b-a44e-4b8d-9974-7451a89cf104" volumeName="kubernetes.io/projected/858a717b-a44e-4b8d-9974-7451a89cf104-kube-api-access-qghmn" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936714 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a59746bb-7d76-4fd7-8323-5b92be63afb9" volumeName="kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936733 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" volumeName="kubernetes.io/secret/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936751 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78702d1c-b5ab-4e00-92da-cb2513a72024" volumeName="kubernetes.io/projected/78702d1c-b5ab-4e00-92da-cb2513a72024-kube-api-access-5pwp5" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936769 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="255784ad-b52a-4c5c-ad15-278865ee2ccb" volumeName="kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936787 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43560ec3-3526-40e1-aeb7-e3137a99171d" volumeName="kubernetes.io/configmap/43560ec3-3526-40e1-aeb7-e3137a99171d-metrics-client-ca" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936805 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4714ef51-2d24-4938-8c58-80c1485a368b" volumeName="kubernetes.io/projected/4714ef51-2d24-4938-8c58-80c1485a368b-kube-api-access" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936825 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/projected/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-kube-api-access-k6j8c" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936845 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c9ed390-3b62-4b81-8c03-0c579a4a686a" volumeName="kubernetes.io/projected/6c9ed390-3b62-4b81-8c03-0c579a4a686a-kube-api-access" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936864 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80c48134-cb22-4cf9-b076-ce39af2f4113" volumeName="kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936882 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ace60ebd-e405-4fd2-96fe-7b16a9e11a07" volumeName="kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-client" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936901 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="255784ad-b52a-4c5c-ad15-278865ee2ccb" volumeName="kubernetes.io/projected/255784ad-b52a-4c5c-ad15-278865ee2ccb-kube-api-access-hxsxw" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936918 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="546cf649-8e0d-4c8a-a197-412db42e36b6" volumeName="kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-utilities" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 
kubenswrapper[33867]: I0219 03:23:14.936938 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61abb34a-08f0-4438-9a89-c712b2048878" volumeName="kubernetes.io/configmap/61abb34a-08f0-4438-9a89-c712b2048878-service-ca" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936957 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6acd115e-71e1-4a50-8892-fc6ea2927fec" volumeName="kubernetes.io/projected/6acd115e-71e1-4a50-8892-fc6ea2927fec-kube-api-access-dlhnq" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936975 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-ca" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.936994 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec677f3d-06c4-4cf4-9f24-69894b9a9118" volumeName="kubernetes.io/projected/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-api-access-vh4lz" seLinuxMountContext="" Feb 19 03:23:14.937001 master-0 kubenswrapper[33867]: I0219 03:23:14.937011 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed2b5ced-d986-4622-9e0a-d39363629408" volumeName="kubernetes.io/secret/ed2b5ced-d986-4622-9e0a-d39363629408-tls-certificates" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937045 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7ca08cc0-cc64-4e13-9465-c9b0bfacb60d" volumeName="kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-certs" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937064 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ec16b3a-5d5c-46fe-87f0-89f93a2775ed" volumeName="kubernetes.io/configmap/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-metrics-client-ca" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937093 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2576028c-40d8-4ef4-ba41-a5aff01f2ed3" volumeName="kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-apiservice-cert" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937115 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b9d54aa-5f71-4a82-8e71-401ed3083a13" volumeName="kubernetes.io/configmap/2b9d54aa-5f71-4a82-8e71-401ed3083a13-config" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937134 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" volumeName="kubernetes.io/projected/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-kube-api-access" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937154 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="546cf649-8e0d-4c8a-a197-412db42e36b6" 
volumeName="kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-catalog-content" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937173 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ff96ce8-6427-4a42-afa6-8b8bc778f094" volumeName="kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-bound-sa-token" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937192 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a59746bb-7d76-4fd7-8323-5b92be63afb9" volumeName="kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-bound-sa-token" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937213 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c50a2aec-7ed0-4114-8b25-19579fe931cb" volumeName="kubernetes.io/projected/c50a2aec-7ed0-4114-8b25-19579fe931cb-kube-api-access-7n9vm" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937232 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0664d88f-f697-4182-93cd-f208ff6f3ac2" volumeName="kubernetes.io/secret/0664d88f-f697-4182-93cd-f208ff6f3ac2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937282 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b9d54aa-5f71-4a82-8e71-401ed3083a13" volumeName="kubernetes.io/projected/2b9d54aa-5f71-4a82-8e71-401ed3083a13-kube-api-access-vjwbx" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937314 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33bb562f-84e7-4fcb-b008-416c09a5ecf0" volumeName="kubernetes.io/projected/33bb562f-84e7-4fcb-b008-416c09a5ecf0-kube-api-access-5kwbk" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937342 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78702d1c-b5ab-4e00-92da-cb2513a72024" volumeName="kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-tuned" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937365 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c50a2aec-7ed0-4114-8b25-19579fe931cb" volumeName="kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937384 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" volumeName="kubernetes.io/secret/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937402 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6acd115e-71e1-4a50-8892-fc6ea2927fec" volumeName="kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937421 33867 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" volumeName="kubernetes.io/projected/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-kube-api-access-8cm45" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937440 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58c6f5a2-c0a8-4636-a057-cedbe0151579" volumeName="kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937459 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c9ed390-3b62-4b81-8c03-0c579a4a686a" volumeName="kubernetes.io/secret/6c9ed390-3b62-4b81-8c03-0c579a4a686a-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937477 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76529f4c-70b1-4fcb-ba48-ae929228f9fc" volumeName="kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-catalog-content" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937497 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7012676e-f35d-46e5-83e8-a63172dd076e" volumeName="kubernetes.io/secret/7012676e-f35d-46e5-83e8-a63172dd076e-catalogserver-certs" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937516 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fde19c2-64b1-409c-ad9c-2bb213a1cc74" volumeName="kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cni-binary-copy" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937536 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af5828ea-090f-4c8f-90e6-c4e405e69ec5" volumeName="kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cert" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937556 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="05c9cb4a-5249-4116-a2e5-caa7859e2075" volumeName="kubernetes.io/projected/05c9cb4a-5249-4116-a2e5-caa7859e2075-kube-api-access-qrksf" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937576 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3fab5bbd-672c-4e18-9c1e-438e2360bc54" volumeName="kubernetes.io/projected/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kube-api-access" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937594 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4" volumeName="kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-trusted-ca-bundle" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937613 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4" 
volumeName="kubernetes.io/empty-dir/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-snapshots" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937635 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dabc3c9b-ed58-4fd4-8735-65d504fa299a" volumeName="kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-catalog-content" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937653 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="decd8c56-e0f0-4119-917f-56652c8f8372" volumeName="kubernetes.io/projected/decd8c56-e0f0-4119-917f-56652c8f8372-kube-api-access-8tqm5" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937673 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="255784ad-b52a-4c5c-ad15-278865ee2ccb" volumeName="kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-config" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937694 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4" volumeName="kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-service-ca-bundle" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937712 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92804daf-1fd0-4008-afff-4f9bc362990b" volumeName="kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937731 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b283bd8e-3339-4701-ae3c-f009e498b7d4" volumeName="kubernetes.io/projected/b283bd8e-3339-4701-ae3c-f009e498b7d4-kube-api-access-76css" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937750 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7012676e-f35d-46e5-83e8-a63172dd076e" volumeName="kubernetes.io/empty-dir/7012676e-f35d-46e5-83e8-a63172dd076e-cache" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937769 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76470062-ab83-47ed-a669-deeb71996548" volumeName="kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-default-certificate" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937788 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fde19c2-64b1-409c-ad9c-2bb213a1cc74" volumeName="kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-daemon-config" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937808 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52be87c-e707-4269-96da-537708d52b64" volumeName="kubernetes.io/projected/a52be87c-e707-4269-96da-537708d52b64-kube-api-access-kv24m" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937827 33867 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" volumeName="kubernetes.io/empty-dir/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-operand-assets" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937845 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937864 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" volumeName="kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-client" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937884 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd49d14-d513-4f68-8a87-3cef8a033c58" volumeName="kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937903 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c569676a-51dd-418c-87a5-719c18fe4c95" volumeName="kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-image-import-ca" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937961 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec677f3d-06c4-4cf4-9f24-69894b9a9118" volumeName="kubernetes.io/empty-dir/ec677f3d-06c4-4cf4-9f24-69894b9a9118-volume-directive-shadow" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.937988 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43560ec3-3526-40e1-aeb7-e3137a99171d" volumeName="kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-tls" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938005 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" volumeName="kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-config" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938024 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c569676a-51dd-418c-87a5-719c18fe4c95" volumeName="kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-audit" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938042 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c569676a-51dd-418c-87a5-719c18fe4c95" volumeName="kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-config" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938061 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58c6f5a2-c0a8-4636-a057-cedbe0151579" volumeName="kubernetes.io/configmap/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-trusted-ca" seLinuxMountContext="" Feb 
19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938078 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76470062-ab83-47ed-a669-deeb71996548" volumeName="kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-metrics-certs" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938096 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98ac5423-b231-44e5-9545-424d635ed6ee" volumeName="kubernetes.io/projected/98ac5423-b231-44e5-9545-424d635ed6ee-kube-api-access-bq27v" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938114 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca82f2e9-884e-49d1-9863-e87212d01edc" volumeName="kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-utilities" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938133 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af2be4f9-f632-4a72-8f39-c96954403edc" volumeName="kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-images" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938153 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" volumeName="kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-service-ca-bundle" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938170 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" volumeName="kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-whereabouts-configmap" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938188 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bab5125-f4d7-4940-891f-9bb6a2145fac" volumeName="kubernetes.io/configmap/1bab5125-f4d7-4940-891f-9bb6a2145fac-mcc-auth-proxy-config" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938208 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" volumeName="kubernetes.io/projected/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-kube-api-access-mj4rq" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938226 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="05c9cb4a-5249-4116-a2e5-caa7859e2075" volumeName="kubernetes.io/configmap/05c9cb4a-5249-4116-a2e5-caa7859e2075-config" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938244 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06898300-c6e2-4d64-9ebf-d20f4338cccc" volumeName="kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca" seLinuxMountContext="" Feb 19 03:23:14.938279 master-0 kubenswrapper[33867]: I0219 03:23:14.938304 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06898300-c6e2-4d64-9ebf-d20f4338cccc" 
volumeName="kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938328 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7be6f9b5-fe27-4df5-b933-63bbb12f680c" volumeName="kubernetes.io/projected/7be6f9b5-fe27-4df5-b933-63bbb12f680c-kube-api-access-mk722" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938346 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b9d54aa-5f71-4a82-8e71-401ed3083a13" volumeName="kubernetes.io/secret/2b9d54aa-5f71-4a82-8e71-401ed3083a13-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938365 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cf1a1c6-f858-4f89-ac8c-97d13ed8a962" volumeName="kubernetes.io/secret/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-proxy-tls" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938384 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75c58162-a0ba-40f4-8894-38f17dc2fb6d" volumeName="kubernetes.io/projected/75c58162-a0ba-40f4-8894-38f17dc2fb6d-kube-api-access-gkz72" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938402 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78702d1c-b5ab-4e00-92da-cb2513a72024" volumeName="kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-tmp" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938421 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18b29e37-cda9-41a8-a910-3d8f74be3cf3" volumeName="kubernetes.io/secret/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-key" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938473 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cf1a1c6-f858-4f89-ac8c-97d13ed8a962" volumeName="kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-auth-proxy-config" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938559 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c569676a-51dd-418c-87a5-719c18fe4c95" volumeName="kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-serving-ca" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938582 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18b29e37-cda9-41a8-a910-3d8f74be3cf3" volumeName="kubernetes.io/configmap/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-cabundle" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938602 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="255784ad-b52a-4c5c-ad15-278865ee2ccb" volumeName="kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-images" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938621 33867 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5" volumeName="kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938638 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af5828ea-090f-4c8f-90e6-c4e405e69ec5" volumeName="kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-images" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938700 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="494087b2-b532-4c62-89d5-b88a152fa5db" volumeName="kubernetes.io/projected/494087b2-b532-4c62-89d5-b88a152fa5db-kube-api-access-z4hzx" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938745 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43560ec3-3526-40e1-aeb7-e3137a99171d" volumeName="kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938793 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76529f4c-70b1-4fcb-ba48-ae929228f9fc" volumeName="kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-utilities" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938815 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c50a2aec-7ed0-4114-8b25-19579fe931cb" volumeName="kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-profile-collector-cert" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938867 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a71c6d42-5ff9-4e96-900c-6e2166bbc9e3" volumeName="kubernetes.io/projected/a71c6d42-5ff9-4e96-900c-6e2166bbc9e3-kube-api-access-zrfgk" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938888 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" volumeName="kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-binary-copy" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938907 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33bb562f-84e7-4fcb-b008-416c09a5ecf0" volumeName="kubernetes.io/configmap/33bb562f-84e7-4fcb-b008-416c09a5ecf0-auth-proxy-config" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938975 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67f4e002-26fb-41e3-abdb-f4928b6c561f" volumeName="kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.938994 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98ac5423-b231-44e5-9545-424d635ed6ee" volumeName="kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert" 
seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939099 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cf1a1c6-f858-4f89-ac8c-97d13ed8a962" volumeName="kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-images" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939120 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" volumeName="kubernetes.io/projected/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-kube-api-access-pn4dg" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939229 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2576028c-40d8-4ef4-ba41-a5aff01f2ed3" volumeName="kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-webhook-cert" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939252 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5" volumeName="kubernetes.io/projected/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-kube-api-access-vdxnk" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939380 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6acd115e-71e1-4a50-8892-fc6ea2927fec" volumeName="kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939400 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ae2cbe0-aa0a-4f26-994b-660fb962d995" volumeName="kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939444 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec677f3d-06c4-4cf4-9f24-69894b9a9118" volumeName="kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-tls" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939530 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2576028c-40d8-4ef4-ba41-a5aff01f2ed3" volumeName="kubernetes.io/empty-dir/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-tmpfs" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939579 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5" volumeName="kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939599 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3edc7410-417a-4e55-9276-ac271fd52297" volumeName="kubernetes.io/secret/3edc7410-417a-4e55-9276-ac271fd52297-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939616 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c569676a-51dd-418c-87a5-719c18fe4c95" 
volumeName="kubernetes.io/projected/c569676a-51dd-418c-87a5-719c18fe4c95-kube-api-access-894cz" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939644 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" volumeName="kubernetes.io/empty-dir/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-audit-log" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939663 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="858a717b-a44e-4b8d-9974-7451a89cf104" volumeName="kubernetes.io/configmap/858a717b-a44e-4b8d-9974-7451a89cf104-cco-trusted-ca" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939682 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e81865-21fa-4e35-a870-738c13ac5b70" volumeName="kubernetes.io/configmap/e2e81865-21fa-4e35-a870-738c13ac5b70-metrics-client-ca" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939701 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ace60ebd-e405-4fd2-96fe-7b16a9e11a07" volumeName="kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-policies" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939719 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbc2f7d0-4bae-4d4a-b041-a624ec2b9333" volumeName="kubernetes.io/configmap/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-config" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939738 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06898300-c6e2-4d64-9ebf-d20f4338cccc" volumeName="kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939784 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3edc7410-417a-4e55-9276-ac271fd52297" volumeName="kubernetes.io/configmap/3edc7410-417a-4e55-9276-ac271fd52297-config" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939804 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="858a717b-a44e-4b8d-9974-7451a89cf104" volumeName="kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939822 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" volumeName="kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-env-overrides" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939840 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c791d8d0-6d78-4cdc-bac2-aa39bd3aae21" volumeName="kubernetes.io/secret/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-metrics-tls" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939895 33867 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="18b29e37-cda9-41a8-a910-3d8f74be3cf3" volumeName="kubernetes.io/projected/18b29e37-cda9-41a8-a910-3d8f74be3cf3-kube-api-access-bkfcl" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939916 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3edc7410-417a-4e55-9276-ac271fd52297" volumeName="kubernetes.io/projected/3edc7410-417a-4e55-9276-ac271fd52297-kube-api-access-vzpth" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939935 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b137033-0db2-46c9-a526-f8234345e883" volumeName="kubernetes.io/configmap/7b137033-0db2-46c9-a526-f8234345e883-mcd-auth-proxy-config" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939954 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ff96ce8-6427-4a42-afa6-8b8bc778f094" volumeName="kubernetes.io/configmap/9ff96ce8-6427-4a42-afa6-8b8bc778f094-trusted-ca" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939971 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80c48134-cb22-4cf9-b076-ce39af2f4113" volumeName="kubernetes.io/projected/80c48134-cb22-4cf9-b076-ce39af2f4113-kube-api-access-2dlvj" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.939996 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ace60ebd-e405-4fd2-96fe-7b16a9e11a07" volumeName="kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.940018 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c791d8d0-6d78-4cdc-bac2-aa39bd3aae21" volumeName="kubernetes.io/projected/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-kube-api-access-gbffz" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.940042 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a571c6-7c47-4b57-bc5b-e46544a114c8" volumeName="kubernetes.io/projected/15a571c6-7c47-4b57-bc5b-e46544a114c8-kube-api-access-crz8x" seLinuxMountContext="" Feb 19 03:23:14.940043 master-0 kubenswrapper[33867]: I0219 03:23:14.940062 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33bb562f-84e7-4fcb-b008-416c09a5ecf0" volumeName="kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940083 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a59746bb-7d76-4fd7-8323-5b92be63afb9" volumeName="kubernetes.io/configmap/a59746bb-7d76-4fd7-8323-5b92be63afb9-trusted-ca" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940102 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4" volumeName="kubernetes.io/secret/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-serving-cert" seLinuxMountContext="" Feb 
19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940124 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" volumeName="kubernetes.io/secret/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940146 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c569676a-51dd-418c-87a5-719c18fe4c95" volumeName="kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-trusted-ca-bundle" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940164 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" volumeName="kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940185 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ec16b3a-5d5c-46fe-87f0-89f93a2775ed" volumeName="kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940204 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b283bd8e-3339-4701-ae3c-f009e498b7d4" volumeName="kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940224 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e81865-21fa-4e35-a870-738c13ac5b70" volumeName="kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940245 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba0c261-497c-4236-8f14-98ce5c16af59" volumeName="kubernetes.io/projected/1ba0c261-497c-4236-8f14-98ce5c16af59-kube-api-access" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940297 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59cea4cb-6374-49b6-97b3-d8a19cc1860f" volumeName="kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940323 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fde19c2-64b1-409c-ad9c-2bb213a1cc74" volumeName="kubernetes.io/projected/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-kube-api-access-64lwt" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940392 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a571c6-7c47-4b57-bc5b-e46544a114c8" volumeName="kubernetes.io/secret/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940414 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c8f325fb-0075-4a18-ba7e-669ab19bc91a" volumeName="kubernetes.io/projected/c8f325fb-0075-4a18-ba7e-669ab19bc91a-kube-api-access-jxvxh" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940472 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e81865-21fa-4e35-a870-738c13ac5b70" volumeName="kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940499 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbc2f7d0-4bae-4d4a-b041-a624ec2b9333" volumeName="kubernetes.io/secret/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-serving-cert" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940520 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bab5125-f4d7-4940-891f-9bb6a2145fac" volumeName="kubernetes.io/secret/1bab5125-f4d7-4940-891f-9bb6a2145fac-proxy-tls" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940575 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="546cf649-8e0d-4c8a-a197-412db42e36b6" volumeName="kubernetes.io/projected/546cf649-8e0d-4c8a-a197-412db42e36b6-kube-api-access-htmbc" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940595 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92804daf-1fd0-4008-afff-4f9bc362990b" volumeName="kubernetes.io/projected/92804daf-1fd0-4008-afff-4f9bc362990b-kube-api-access-78j6f" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940615 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52be87c-e707-4269-96da-537708d52b64" volumeName="kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940635 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a571c6-7c47-4b57-bc5b-e46544a114c8" volumeName="kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-env-overrides" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940654 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ae2cbe0-aa0a-4f26-994b-660fb962d995" volumeName="kubernetes.io/projected/6ae2cbe0-aa0a-4f26-994b-660fb962d995-kube-api-access-46zzd" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940673 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" volumeName="kubernetes.io/empty-dir/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-available-featuregates" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940691 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80c48134-cb22-4cf9-b076-ce39af2f4113" volumeName="kubernetes.io/configmap/80c48134-cb22-4cf9-b076-ce39af2f4113-telemetry-config" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 
kubenswrapper[33867]: I0219 03:23:14.940712 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ace60ebd-e405-4fd2-96fe-7b16a9e11a07" volumeName="kubernetes.io/projected/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-kube-api-access-rrz8r" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940729 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" volumeName="kubernetes.io/configmap/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-config" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940747 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58c6f5a2-c0a8-4636-a057-cedbe0151579" volumeName="kubernetes.io/projected/58c6f5a2-c0a8-4636-a057-cedbe0151579-kube-api-access-grhdv" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940765 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f7d8fc8-c313-416f-b62b-b54db9944066" volumeName="kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940783 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a571c6-7c47-4b57-bc5b-e46544a114c8" volumeName="kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovnkube-config" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940802 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ec16b3a-5d5c-46fe-87f0-89f93a2775ed" volumeName="kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940820 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af5828ea-090f-4c8f-90e6-c4e405e69ec5" volumeName="kubernetes.io/projected/af5828ea-090f-4c8f-90e6-c4e405e69ec5-kube-api-access-tb2v2" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940837 33867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dabc3c9b-ed58-4fd4-8735-65d504fa299a" volumeName="kubernetes.io/projected/dabc3c9b-ed58-4fd4-8735-65d504fa299a-kube-api-access-vw2vc" seLinuxMountContext="" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940858 33867 reconstruct.go:97] "Volume reconstruction finished" Feb 19 03:23:14.941706 master-0 kubenswrapper[33867]: I0219 03:23:14.940872 33867 reconciler.go:26] "Reconciler: start to sync state" Feb 19 03:23:14.950807 master-0 kubenswrapper[33867]: I0219 03:23:14.950720 33867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 19 03:23:14.954072 master-0 kubenswrapper[33867]: I0219 03:23:14.953994 33867 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 19 03:23:14.954193 master-0 kubenswrapper[33867]: I0219 03:23:14.954078 33867 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 03:23:14.954193 master-0 kubenswrapper[33867]: I0219 03:23:14.954116 33867 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 03:23:14.954444 master-0 kubenswrapper[33867]: E0219 03:23:14.954197 33867 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 03:23:14.972758 master-0 kubenswrapper[33867]: I0219 03:23:14.972691 33867 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="0b461f34d367324dba43f9d8dc1f9f03674c68ca7ee50c7c17368a3d5dc7170e" exitCode=0 Feb 19 03:23:14.972758 master-0 kubenswrapper[33867]: I0219 03:23:14.972740 33867 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="60f5cf312ba315b685c25de92b9f8cc980f0c49a86698d8a695e2b600355cacd" exitCode=0 Feb 19 03:23:14.972758 master-0 kubenswrapper[33867]: I0219 03:23:14.972751 33867 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="49d6109e593a1f6854e4a23b0f0809b7c8251c11ffac6d5d3c63dd533a448342" exitCode=0 Feb 19 03:23:14.975450 master-0 kubenswrapper[33867]: I0219 03:23:14.975394 33867 generic.go:334] "Generic (PLEG): container finished" podID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerID="eaa696773a18508c6c209d42ace51f1418a8f4dfe51b1543f829012e0cb65108" exitCode=0 Feb 19 03:23:14.985973 master-0 kubenswrapper[33867]: I0219 03:23:14.985914 33867 generic.go:334] "Generic (PLEG): container finished" podID="546cf649-8e0d-4c8a-a197-412db42e36b6" containerID="ef0a9007227e02f27c0fbdb751ad5c29449e9b1fd82d980295aad79e15e072c2" exitCode=0 Feb 19 03:23:14.985973 master-0 kubenswrapper[33867]: I0219 03:23:14.985955 33867 generic.go:334] "Generic (PLEG): container finished" podID="546cf649-8e0d-4c8a-a197-412db42e36b6" containerID="d5baecad6f9da9b942e37d06b6d9c3708141b102b9f1b98a457786b84bf2a523" exitCode=0 Feb 19 03:23:14.989545 master-0 kubenswrapper[33867]: I0219 03:23:14.989488 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4/installer/0.log" Feb 19 03:23:14.989653 master-0 kubenswrapper[33867]: I0219 03:23:14.989603 33867 generic.go:334] "Generic (PLEG): container finished" podID="d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" containerID="ac0c6f1221931d6368270f9300d1e7df26e99f211f84672a8bd222a9935f47ac" exitCode=1 Feb 19 03:23:15.001828 master-0 kubenswrapper[33867]: I0219 03:23:15.001712 33867 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="df79d74c2fc5980bfc6e9850c3ffca3b314448c7df3cef006d2546392b263b4e" exitCode=0 Feb 19 03:23:15.001828 master-0 kubenswrapper[33867]: I0219 03:23:15.001785 33867 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="a18d99c878639b9d3805f870752927c3437cf7b6b29a033142fd63915d0b18e8" exitCode=0 Feb 19 03:23:15.001828 master-0 kubenswrapper[33867]: I0219 03:23:15.001797 33867 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="a1adfe00d9aa195d9236868bc3cdaa7708f6f91c8e97bcc9dc23bf44a824c667" exitCode=0 Feb 19 03:23:15.001828 master-0 kubenswrapper[33867]: I0219 
03:23:15.001808 33867 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="d07c6f7253d4f5bf400e52d3abf09e67dc06d685b2053d96aa22769fe9305dd6" exitCode=0 Feb 19 03:23:15.001828 master-0 kubenswrapper[33867]: I0219 03:23:15.001818 33867 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="87ced28296b6205caeec80cb40be9541d7f81c97bea9198b50ce4babeda1daa1" exitCode=0 Feb 19 03:23:15.001828 master-0 kubenswrapper[33867]: I0219 03:23:15.001829 33867 generic.go:334] "Generic (PLEG): container finished" podID="cc8f6a27-3dd3-45e0-a206-9f19bbf99df7" containerID="d7038f953677e8d7419f5a2fddb13ce55d744e0baf108c01044bd406543eeae9" exitCode=0 Feb 19 03:23:15.010961 master-0 kubenswrapper[33867]: I0219 03:23:15.010889 33867 generic.go:334] "Generic (PLEG): container finished" podID="06898300-c6e2-4d64-9ebf-d20f4338cccc" containerID="8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668" exitCode=0 Feb 19 03:23:15.015601 master-0 kubenswrapper[33867]: I0219 03:23:15.015523 33867 generic.go:334] "Generic (PLEG): container finished" podID="76470062-ab83-47ed-a669-deeb71996548" containerID="047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366" exitCode=0 Feb 19 03:23:15.016994 master-0 kubenswrapper[33867]: E0219 03:23:15.016953 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:15.017781 master-0 kubenswrapper[33867]: I0219 03:23:15.017729 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-k7xlc_6c9ed390-3b62-4b81-8c03-0c579a4a686a/kube-controller-manager-operator/2.log" Feb 19 03:23:15.017856 master-0 kubenswrapper[33867]: I0219 03:23:15.017792 33867 generic.go:334] "Generic (PLEG): container finished" podID="6c9ed390-3b62-4b81-8c03-0c579a4a686a" containerID="a38db84d334bb1ae612379c88129d14d14422aea1a4e6c8d5e3a4de4afd35891" exitCode=255 Feb 19 03:23:15.024577 master-0 kubenswrapper[33867]: I0219 03:23:15.024538 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7c8v_05c9cb4a-5249-4116-a2e5-caa7859e2075/openshift-controller-manager-operator/3.log" Feb 19 03:23:15.024682 master-0 kubenswrapper[33867]: I0219 03:23:15.024597 33867 generic.go:334] "Generic (PLEG): container finished" podID="05c9cb4a-5249-4116-a2e5-caa7859e2075" containerID="20eff9a38f665e5f446346726f2e9ae69e64da44d267bdbea6151ec6a1ecbe55" exitCode=255 Feb 19 03:23:15.027328 master-0 kubenswrapper[33867]: I0219 03:23:15.027250 33867 generic.go:334] "Generic (PLEG): container finished" podID="5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4" containerID="1d99ca0c8f2a8b57be62e387dd79396f9f9921074e539cfaf44cf000be2aa849" exitCode=0 Feb 19 03:23:15.029782 master-0 kubenswrapper[33867]: I0219 03:23:15.029722 33867 generic.go:334] "Generic (PLEG): container finished" podID="d6fae256-6a2e-45e7-8f2f-d471f46ad3b2" containerID="ea3fbe70d15235f707a7c57be5fd384739f1296cedb5a5f878d80b5d8be3b136" exitCode=0 Feb 19 03:23:15.035965 master-0 kubenswrapper[33867]: I0219 03:23:15.035914 33867 generic.go:334] "Generic (PLEG): container finished" podID="60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3" containerID="21e26a22b1efe279782f76fa7cfe3a983a36a3e7247df0cc7bcc0fa254258e19" exitCode=0 Feb 19 03:23:15.037670 master-0 kubenswrapper[33867]: I0219 03:23:15.037643 33867 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5_0664d88f-f697-4182-93cd-f208ff6f3ac2/control-plane-machine-set-operator/0.log" Feb 19 03:23:15.037670 master-0 kubenswrapper[33867]: I0219 03:23:15.037669 33867 generic.go:334] "Generic (PLEG): container finished" podID="0664d88f-f697-4182-93cd-f208ff6f3ac2" containerID="47c00fb2c67d340bd7a8f33cdbea3ac43d78e7ccbf383a58ca7fe0117068da43" exitCode=1 Feb 19 03:23:15.040449 master-0 kubenswrapper[33867]: I0219 03:23:15.040413 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/5.log" Feb 19 03:23:15.040881 master-0 kubenswrapper[33867]: I0219 03:23:15.040846 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/config-sync-controllers/0.log" Feb 19 03:23:15.041366 master-0 kubenswrapper[33867]: I0219 03:23:15.041328 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/cluster-cloud-controller-manager/0.log" Feb 19 03:23:15.041466 master-0 kubenswrapper[33867]: I0219 03:23:15.041365 33867 generic.go:334] "Generic (PLEG): container finished" podID="af2be4f9-f632-4a72-8f39-c96954403edc" containerID="558bcaf3a56a2407b32726dac467c6eaab65b663370647667ae0de65789cdac0" exitCode=1 Feb 19 03:23:15.041466 master-0 kubenswrapper[33867]: I0219 03:23:15.041386 33867 generic.go:334] "Generic (PLEG): container finished" podID="af2be4f9-f632-4a72-8f39-c96954403edc" containerID="c9a8948e6182f0cdb976b661c449d741ee645d844809a7695d74084a213ff139" exitCode=1 Feb 19 03:23:15.041466 master-0 kubenswrapper[33867]: I0219 03:23:15.041395 33867 generic.go:334] "Generic (PLEG): container finished" podID="af2be4f9-f632-4a72-8f39-c96954403edc" containerID="e91ffe706d1ad6df0dfe02b5098676d02a6c7e690163f70c0b4d651c88fb78ce" exitCode=1 Feb 19 03:23:15.042815 master-0 kubenswrapper[33867]: I0219 03:23:15.042776 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5/installer/0.log" Feb 19 03:23:15.042915 master-0 kubenswrapper[33867]: I0219 03:23:15.042824 33867 generic.go:334] "Generic (PLEG): container finished" podID="e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" containerID="c81c932fbf92f00371681dc495d0483abb59c68940881cbb310e3f5f398e1f87" exitCode=1 Feb 19 03:23:15.045331 master-0 kubenswrapper[33867]: I0219 03:23:15.045227 33867 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="82a40f80e34c4f63706840b48b0aa48486b2ad68c13d50974f11a3442433c7ea" exitCode=0 Feb 19 03:23:15.045331 master-0 kubenswrapper[33867]: I0219 03:23:15.045316 33867 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="10ad446c5ae8d63affc8eb0bacbb20232d6d1b38bc9bc64c6e6df2fe6d1b6cfd" exitCode=0 Feb 19 03:23:15.054365 master-0 kubenswrapper[33867]: E0219 03:23:15.054287 33867 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 03:23:15.055968 master-0 kubenswrapper[33867]: I0219 03:23:15.055915 33867 generic.go:334] 
"Generic (PLEG): container finished" podID="a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a" containerID="e1fdaebfc69e9354cdd956d93bd8b91f87df452473c04d8a78f864f320d237fa" exitCode=0 Feb 19 03:23:15.057882 master-0 kubenswrapper[33867]: I0219 03:23:15.057845 33867 generic.go:334] "Generic (PLEG): container finished" podID="b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651" containerID="9bdce3951fee565e17f2d28d3fa9bab8451b2a0d85b9fde5d5703fd5c2bc6773" exitCode=0 Feb 19 03:23:15.061773 master-0 kubenswrapper[33867]: I0219 03:23:15.061742 33867 generic.go:334] "Generic (PLEG): container finished" podID="3edc7410-417a-4e55-9276-ac271fd52297" containerID="6a5db57d3cdfa9709ab008271a7de8b76cb4f5beeb18f426e1c635fff0d68431" exitCode=0 Feb 19 03:23:15.063790 master-0 kubenswrapper[33867]: I0219 03:23:15.063763 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-jbztp_c791d8d0-6d78-4cdc-bac2-aa39bd3aae21/network-operator/2.log" Feb 19 03:23:15.063790 master-0 kubenswrapper[33867]: I0219 03:23:15.063787 33867 generic.go:334] "Generic (PLEG): container finished" podID="c791d8d0-6d78-4cdc-bac2-aa39bd3aae21" containerID="86f20f93c3f50a3529fa79e0b6468f791d85c5c63dd623a77eb62ec52b0785bc" exitCode=255 Feb 19 03:23:15.065631 master-0 kubenswrapper[33867]: I0219 03:23:15.065603 33867 generic.go:334] "Generic (PLEG): container finished" podID="a59746bb-7d76-4fd7-8323-5b92be63afb9" containerID="075c2f17f8c40de4ef5a43e9679ffb1112b88d0d2cd16e8c3a34569ded3b80e6" exitCode=0 Feb 19 03:23:15.067905 master-0 kubenswrapper[33867]: I0219 03:23:15.067848 33867 generic.go:334] "Generic (PLEG): container finished" podID="2b9d54aa-5f71-4a82-8e71-401ed3083a13" containerID="84d662dd4fdd1383970ef08334843ef9932b238a72433235bfdec45dfc41643e" exitCode=0 Feb 19 03:23:15.070440 master-0 kubenswrapper[33867]: I0219 03:23:15.070379 33867 generic.go:334] "Generic (PLEG): container finished" podID="8ec16b3a-5d5c-46fe-87f0-89f93a2775ed" containerID="03aa8ad313bda1a2e83a4655bc8e8999ba5eab74fc27bc9c150cae062a8e7328" exitCode=0 Feb 19 03:23:15.072986 master-0 kubenswrapper[33867]: I0219 03:23:15.072920 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-tlhpc_92804daf-1fd0-4008-afff-4f9bc362990b/machine-approver-controller/0.log" Feb 19 03:23:15.073433 master-0 kubenswrapper[33867]: I0219 03:23:15.073403 33867 generic.go:334] "Generic (PLEG): container finished" podID="92804daf-1fd0-4008-afff-4f9bc362990b" containerID="75ea874391f33c0fa200e27a6fbad18b4a8573ebe40f901e494bc7cfe2905ed3" exitCode=255 Feb 19 03:23:15.075342 master-0 kubenswrapper[33867]: I0219 03:23:15.075314 33867 generic.go:334] "Generic (PLEG): container finished" podID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerID="c545cf58bc696341c026f65428a1c9e4ca4d12c0673d4c492e30d1f60df08f53" exitCode=0 Feb 19 03:23:15.077162 master-0 kubenswrapper[33867]: I0219 03:23:15.077133 33867 generic.go:334] "Generic (PLEG): container finished" podID="61abb34a-08f0-4438-9a89-c712b2048878" containerID="e967e4bdcd17904293fe64ffaea6f290221329babeb23091aec673f02b8e7ca3" exitCode=0 Feb 19 03:23:15.078865 master-0 kubenswrapper[33867]: I0219 03:23:15.078841 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-576b4d78bd-92gqk_18b29e37-cda9-41a8-a910-3d8f74be3cf3/service-ca-controller/1.log" Feb 19 03:23:15.078933 master-0 kubenswrapper[33867]: I0219 03:23:15.078868 33867 generic.go:334] "Generic (PLEG): container finished" 
podID="18b29e37-cda9-41a8-a910-3d8f74be3cf3" containerID="9e9d3d42da46d1a6d18e0de03a09b726c32bb354f1e9ff23661a98024aebe2a1" exitCode=255 Feb 19 03:23:15.081417 master-0 kubenswrapper[33867]: I0219 03:23:15.081392 33867 generic.go:334] "Generic (PLEG): container finished" podID="e08a5432-b9f1-4b15-84c4-df9d6276a414" containerID="ca02b8215bf57351b97a8ecbc5b9bfa88dd85ff58f844b1b36f5d8345ce48644" exitCode=0 Feb 19 03:23:15.084680 master-0 kubenswrapper[33867]: I0219 03:23:15.084653 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1bddb3a1-41bd-4314-bfb0-3c72ca14200f/installer/0.log" Feb 19 03:23:15.086691 master-0 kubenswrapper[33867]: I0219 03:23:15.086617 33867 generic.go:334] "Generic (PLEG): container finished" podID="1bddb3a1-41bd-4314-bfb0-3c72ca14200f" containerID="a7cd657859866d0c60a8c29ef7e8c20807d578f39873e49c5149373c208aeee5" exitCode=1 Feb 19 03:23:15.093542 master-0 kubenswrapper[33867]: I0219 03:23:15.093493 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/3.log" Feb 19 03:23:15.094116 master-0 kubenswrapper[33867]: I0219 03:23:15.094088 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/1.log" Feb 19 03:23:15.095091 master-0 kubenswrapper[33867]: I0219 03:23:15.095054 33867 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" exitCode=255 Feb 19 03:23:15.095091 master-0 kubenswrapper[33867]: I0219 03:23:15.095074 33867 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" exitCode=1 Feb 19 03:23:15.100082 master-0 kubenswrapper[33867]: I0219 03:23:15.100043 33867 generic.go:334] "Generic (PLEG): container finished" podID="76529f4c-70b1-4fcb-ba48-ae929228f9fc" containerID="b48adbbfe50d897c7f889b72b88a99b1525c43d6ccc956e7ebfd7866abe147be" exitCode=0 Feb 19 03:23:15.100082 master-0 kubenswrapper[33867]: I0219 03:23:15.100069 33867 generic.go:334] "Generic (PLEG): container finished" podID="76529f4c-70b1-4fcb-ba48-ae929228f9fc" containerID="908193b1182061490b900a4344890d721c956eb5ad5ebbda4500fde13ae2779d" exitCode=0 Feb 19 03:23:15.101811 master-0 kubenswrapper[33867]: I0219 03:23:15.101779 33867 generic.go:334] "Generic (PLEG): container finished" podID="ace60ebd-e405-4fd2-96fe-7b16a9e11a07" containerID="00383a3b1620e8684e45d2ccf8b35cd07d1cb7977fd9a3bb5991a646c38a78c8" exitCode=0 Feb 19 03:23:15.103605 master-0 kubenswrapper[33867]: I0219 03:23:15.103576 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-dcpwb_2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/cluster-node-tuning-operator/0.log" Feb 19 03:23:15.103680 master-0 kubenswrapper[33867]: I0219 03:23:15.103607 33867 generic.go:334] "Generic (PLEG): container finished" podID="2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5" containerID="df34220d8bbf9f2c919dd6d16618c4c0582bf76fef0068e3cc67cfd63cba32a9" exitCode=1 Feb 19 03:23:15.105896 master-0 kubenswrapper[33867]: I0219 03:23:15.105866 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-mcz8l_fbc2f7d0-4bae-4d4a-b041-a624ec2b9333/openshift-apiserver-operator/1.log" Feb 19 03:23:15.105963 master-0 kubenswrapper[33867]: I0219 03:23:15.105894 33867 generic.go:334] "Generic (PLEG): container finished" podID="fbc2f7d0-4bae-4d4a-b041-a624ec2b9333" containerID="a5ecaa40749c938a80fde33cdf7954d6eceb84a6560fb8894afe0cf368d43640" exitCode=255 Feb 19 03:23:15.108146 master-0 kubenswrapper[33867]: I0219 03:23:15.108091 33867 generic.go:334] "Generic (PLEG): container finished" podID="ca82f2e9-884e-49d1-9863-e87212d01edc" containerID="884fb08aaaf4688bc340b7a7dc22d08a23af01fd1a5c49b78e0797dec6266347" exitCode=0 Feb 19 03:23:15.108146 master-0 kubenswrapper[33867]: I0219 03:23:15.108132 33867 generic.go:334] "Generic (PLEG): container finished" podID="ca82f2e9-884e-49d1-9863-e87212d01edc" containerID="47a9a4e021740b3522fd1067cdf04d17a49d5aecb4e553dbb6033c10cc4cadea" exitCode=0 Feb 19 03:23:15.112514 master-0 kubenswrapper[33867]: I0219 03:23:15.112487 33867 generic.go:334] "Generic (PLEG): container finished" podID="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" containerID="b96163b548b39e7368771cc78a7cc93ce0deae1acb7e2556bf2a0d6f06a4eac4" exitCode=0 Feb 19 03:23:15.112514 master-0 kubenswrapper[33867]: I0219 03:23:15.112506 33867 generic.go:334] "Generic (PLEG): container finished" podID="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" containerID="0e04df6594fd15b397e2045ad7c4f04fede6b3d68bd63913e230a0f01929b6ec" exitCode=0 Feb 19 03:23:15.112514 master-0 kubenswrapper[33867]: I0219 03:23:15.112515 33867 generic.go:334] "Generic (PLEG): container finished" podID="1f9e07d3-d157-4948-84a6-04b8aa7eef4c" containerID="ad34f3a66db7717f06a16858a5fed120d78982f25b57db7cc0d0805ee1a11f34" exitCode=0 Feb 19 03:23:15.114127 master-0 kubenswrapper[33867]: I0219 03:23:15.114079 33867 generic.go:334] "Generic (PLEG): container finished" podID="f2d9bbbb-77bd-4978-9f37-d3c54b780fbf" containerID="13f1d80c6e6d45699a9dea951ab1e9a8aa64be91ab5359ccb9eae52f989fd916" exitCode=0 Feb 19 03:23:15.117115 master-0 kubenswrapper[33867]: E0219 03:23:15.117083 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:15.119948 master-0 kubenswrapper[33867]: I0219 03:23:15.119884 33867 generic.go:334] "Generic (PLEG): container finished" podID="c569676a-51dd-418c-87a5-719c18fe4c95" containerID="c4d5c5762019844ac155bf741ff3d970597445e33d552d25778d865bebcb593a" exitCode=0 Feb 19 03:23:15.123575 master-0 kubenswrapper[33867]: I0219 03:23:15.123503 33867 generic.go:334] "Generic (PLEG): container finished" podID="402778fb-ac93-4d3a-bc4e-7416c49a4061" containerID="e1a07313a2933802cf62d384385baaaecb3c372bcb5aabbcc186bb282740e81b" exitCode=0 Feb 19 03:23:15.127029 master-0 kubenswrapper[33867]: I0219 03:23:15.126975 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-w5pp8_5301cbc9-b3f3-4b2d-a114-1ba0752462f1/kube-scheduler-operator-container/2.log" Feb 19 03:23:15.127123 master-0 kubenswrapper[33867]: I0219 03:23:15.127043 33867 generic.go:334] "Generic (PLEG): container finished" podID="5301cbc9-b3f3-4b2d-a114-1ba0752462f1" containerID="d0d44f45186dc14ce0bc7dc97e190ce8663cf19d313b3812b2eeb67bbc3b7464" exitCode=255 Feb 19 03:23:15.129595 master-0 kubenswrapper[33867]: I0219 03:23:15.129557 33867 generic.go:334] "Generic (PLEG): container finished" podID="4714ef51-2d24-4938-8c58-80c1485a368b" 
containerID="49ac40cd49fe9f544ea18cf9db242f3b1d372ceb484dc7cc80e9da742f93d130" exitCode=0 Feb 19 03:23:15.132455 master-0 kubenswrapper[33867]: I0219 03:23:15.132414 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/2.log" Feb 19 03:23:15.132830 master-0 kubenswrapper[33867]: I0219 03:23:15.132794 33867 generic.go:334] "Generic (PLEG): container finished" podID="af5828ea-090f-4c8f-90e6-c4e405e69ec5" containerID="0f6c57986aa44545930dd1ab3e3d24869ff284140d471569cc35e25cea0099c1" exitCode=1 Feb 19 03:23:15.135200 master-0 kubenswrapper[33867]: I0219 03:23:15.135162 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/manager/1.log" Feb 19 03:23:15.135722 master-0 kubenswrapper[33867]: I0219 03:23:15.135688 33867 generic.go:334] "Generic (PLEG): container finished" podID="7012676e-f35d-46e5-83e8-a63172dd076e" containerID="85c05765f6dadb3299427fcae734f7bc6d46d71d6d24a21ddaf8cbc81b5c9220" exitCode=1 Feb 19 03:23:15.137497 master-0 kubenswrapper[33867]: I0219 03:23:15.137457 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_66b05aeb-22a8-4008-a582-072f63cc46bf/installer/0.log" Feb 19 03:23:15.137563 master-0 kubenswrapper[33867]: I0219 03:23:15.137499 33867 generic.go:334] "Generic (PLEG): container finished" podID="66b05aeb-22a8-4008-a582-072f63cc46bf" containerID="11a1463d7472cc347eeb1e18662a7476d3fc447a3850f542c02f496029d3a5bf" exitCode=1 Feb 19 03:23:15.138806 master-0 kubenswrapper[33867]: I0219 03:23:15.138771 33867 generic.go:334] "Generic (PLEG): container finished" podID="2561caa0-5f79-496e-8fa7-a9692dca20be" containerID="32be5e8b93330dd04d423a1444137191a10ffbf90c7167cd6baa0a0571479517" exitCode=0 Feb 19 03:23:15.145378 master-0 kubenswrapper[33867]: I0219 03:23:15.145343 33867 generic.go:334] "Generic (PLEG): container finished" podID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerID="23060c94450b0089de5446d5e52f8e87d35f8af868d80c88ad4e43f6b97218f6" exitCode=0 Feb 19 03:23:15.149495 master-0 kubenswrapper[33867]: I0219 03:23:15.149450 33867 generic.go:334] "Generic (PLEG): container finished" podID="dabc3c9b-ed58-4fd4-8735-65d504fa299a" containerID="250455e2350c62e9673222f5b8f6250c1b8079eede15297818337eff7b21a5a3" exitCode=0 Feb 19 03:23:15.149495 master-0 kubenswrapper[33867]: I0219 03:23:15.149485 33867 generic.go:334] "Generic (PLEG): container finished" podID="dabc3c9b-ed58-4fd4-8735-65d504fa299a" containerID="11e063f31f05dce30b3ceadd89b21b5514f82e1cb9cd2eef54bba9d4c7adf163" exitCode=0 Feb 19 03:23:15.151203 master-0 kubenswrapper[33867]: I0219 03:23:15.151172 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_32f3b8a5-a045-4023-80f8-0d4d297102ab/installer/0.log" Feb 19 03:23:15.151273 master-0 kubenswrapper[33867]: I0219 03:23:15.151206 33867 generic.go:334] "Generic (PLEG): container finished" podID="32f3b8a5-a045-4023-80f8-0d4d297102ab" containerID="f67292ebd7452aa7b8fd839fbcb1492de2f1ebff6a04b4076f1b2483b32bdd6d" exitCode=1 Feb 19 03:23:15.153171 master-0 kubenswrapper[33867]: I0219 03:23:15.153141 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/4.log" Feb 19 03:23:15.153233 master-0 kubenswrapper[33867]: I0219 03:23:15.153175 33867 generic.go:334] "Generic (PLEG): container finished" podID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" containerID="7451979a94f80aee54e0563ac7f58d005b0131fa01c9b6d07669dbdfc4734cf2" exitCode=1 Feb 19 03:23:15.157863 master-0 kubenswrapper[33867]: I0219 03:23:15.157830 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/4.log" Feb 19 03:23:15.158374 master-0 kubenswrapper[33867]: I0219 03:23:15.158332 33867 generic.go:334] "Generic (PLEG): container finished" podID="9ff96ce8-6427-4a42-afa6-8b8bc778f094" containerID="b90069f199c7947b68e733c734020a9de4e5aa13a83198b25050fb89e116e3b5" exitCode=1 Feb 19 03:23:15.164677 master-0 kubenswrapper[33867]: I0219 03:23:15.164643 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/1.log" Feb 19 03:23:15.165135 master-0 kubenswrapper[33867]: I0219 03:23:15.165075 33867 generic.go:334] "Generic (PLEG): container finished" podID="98ac5423-b231-44e5-9545-424d635ed6ee" containerID="4eaad01f93ee8b4305631434a093be13923a43fc42e41b75e5ee71770a4807d1" exitCode=1 Feb 19 03:23:15.178423 master-0 kubenswrapper[33867]: I0219 03:23:15.178389 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-s559q_8f7d8fc8-c313-416f-b62b-b54db9944066/manager/1.log" Feb 19 03:23:15.178680 master-0 kubenswrapper[33867]: I0219 03:23:15.178648 33867 generic.go:334] "Generic (PLEG): container finished" podID="8f7d8fc8-c313-416f-b62b-b54db9944066" containerID="027172ba4dcd10cd3e3177cc36691683dffc4cdf627b8d23cdb2d10cafe015ef" exitCode=1 Feb 19 03:23:15.180172 master-0 kubenswrapper[33867]: I0219 03:23:15.180142 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/6.log" Feb 19 03:23:15.180572 master-0 kubenswrapper[33867]: I0219 03:23:15.180544 33867 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerID="92f46e7dc0dbfb5fb7a6786f646d184008d2d59c656dbe6e375ada74e2cfa239" exitCode=255 Feb 19 03:23:15.180572 master-0 kubenswrapper[33867]: I0219 03:23:15.180568 33867 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerID="c20b9e1e7e9550aa5bfbad939d9f66144cfef2538d416de2194bb171ea06814d" exitCode=0 Feb 19 03:23:15.182667 master-0 kubenswrapper[33867]: I0219 03:23:15.182640 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rm5jg_a52be87c-e707-4269-96da-537708d52b64/approver/1.log" Feb 19 03:23:15.182918 master-0 kubenswrapper[33867]: I0219 03:23:15.182889 33867 generic.go:334] "Generic (PLEG): container finished" podID="a52be87c-e707-4269-96da-537708d52b64" containerID="246e246788c76f41235c1898d383b771146f06c3b5bc939889392a3b403a8a89" exitCode=1 Feb 19 03:23:15.185590 master-0 kubenswrapper[33867]: I0219 03:23:15.185559 33867 generic.go:334] "Generic (PLEG): container finished" 
podID="15a571c6-7c47-4b57-bc5b-e46544a114c8" containerID="f288826ba3365168a27108ffc9be5733bebebaf28a3b66f0962898e5aed02b61" exitCode=0 Feb 19 03:23:15.187748 master-0 kubenswrapper[33867]: I0219 03:23:15.187723 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 19 03:23:15.188127 master-0 kubenswrapper[33867]: I0219 03:23:15.188100 33867 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="53d32d6e913448c501ea08b87db55bb0233a108aad73fab0d0903446a3305ceb" exitCode=1 Feb 19 03:23:15.188164 master-0 kubenswrapper[33867]: I0219 03:23:15.188125 33867 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="057cad626bcfaec41c462ca1ec27ee5d9cbc1905800d5d8b5f0df0e891b48ec8" exitCode=0 Feb 19 03:23:15.191768 master-0 kubenswrapper[33867]: I0219 03:23:15.191736 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler/0.log" Feb 19 03:23:15.192191 master-0 kubenswrapper[33867]: I0219 03:23:15.192165 33867 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="ebeab0f2e4292264d96a63c87d2d2fdbec7d9f9a916fb23b3f013edea6328327" exitCode=1 Feb 19 03:23:15.192191 master-0 kubenswrapper[33867]: I0219 03:23:15.192184 33867 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="d4ec4e49d4dd98a02afe5ae82b828a0c598d3a1b8c49a3c9012f434a6bee2385" exitCode=0 Feb 19 03:23:15.217626 master-0 kubenswrapper[33867]: E0219 03:23:15.217576 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:15.254893 master-0 kubenswrapper[33867]: E0219 03:23:15.254793 33867 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 03:23:15.318664 master-0 kubenswrapper[33867]: E0219 03:23:15.318477 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:15.419289 master-0 kubenswrapper[33867]: E0219 03:23:15.419207 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:15.519955 master-0 kubenswrapper[33867]: E0219 03:23:15.519802 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:15.620609 master-0 kubenswrapper[33867]: E0219 03:23:15.620296 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:15.655672 master-0 kubenswrapper[33867]: E0219 03:23:15.655575 33867 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 03:23:15.721002 master-0 kubenswrapper[33867]: E0219 03:23:15.720884 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:15.822187 master-0 kubenswrapper[33867]: E0219 03:23:15.822058 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:15.923177 master-0 kubenswrapper[33867]: E0219 03:23:15.923063 33867 kubelet_node_status.go:503] "Error 
getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:16.023748 master-0 kubenswrapper[33867]: E0219 03:23:16.023673 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:16.124332 master-0 kubenswrapper[33867]: E0219 03:23:16.124177 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:16.224741 master-0 kubenswrapper[33867]: E0219 03:23:16.224540 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:16.325674 master-0 kubenswrapper[33867]: E0219 03:23:16.325580 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:16.427341 master-0 kubenswrapper[33867]: E0219 03:23:16.426469 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:16.456517 master-0 kubenswrapper[33867]: E0219 03:23:16.456459 33867 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 03:23:16.527156 master-0 kubenswrapper[33867]: E0219 03:23:16.527014 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:16.627645 master-0 kubenswrapper[33867]: E0219 03:23:16.627557 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:16.728311 master-0 kubenswrapper[33867]: E0219 03:23:16.728159 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:16.829185 master-0 kubenswrapper[33867]: E0219 03:23:16.829003 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:16.929935 master-0 kubenswrapper[33867]: E0219 03:23:16.929821 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:17.030982 master-0 kubenswrapper[33867]: E0219 03:23:17.030871 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:17.131501 master-0 kubenswrapper[33867]: E0219 03:23:17.131428 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:17.232323 master-0 kubenswrapper[33867]: E0219 03:23:17.232158 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:17.332976 master-0 kubenswrapper[33867]: E0219 03:23:17.332901 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:17.433707 master-0 kubenswrapper[33867]: E0219 03:23:17.433509 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:17.534019 master-0 kubenswrapper[33867]: E0219 03:23:17.533945 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:17.634581 master-0 kubenswrapper[33867]: E0219 03:23:17.634490 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:17.735380 master-0 kubenswrapper[33867]: E0219 03:23:17.735209 
33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:17.836043 master-0 kubenswrapper[33867]: E0219 03:23:17.835956 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:17.936935 master-0 kubenswrapper[33867]: E0219 03:23:17.936844 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:18.037710 master-0 kubenswrapper[33867]: E0219 03:23:18.037544 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:18.057024 master-0 kubenswrapper[33867]: E0219 03:23:18.056877 33867 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 03:23:18.138590 master-0 kubenswrapper[33867]: E0219 03:23:18.138477 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:18.238776 master-0 kubenswrapper[33867]: E0219 03:23:18.238681 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:18.339467 master-0 kubenswrapper[33867]: E0219 03:23:18.339330 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:18.440096 master-0 kubenswrapper[33867]: E0219 03:23:18.440013 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:18.540571 master-0 kubenswrapper[33867]: E0219 03:23:18.540505 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:18.641220 master-0 kubenswrapper[33867]: E0219 03:23:18.641108 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:18.741731 master-0 kubenswrapper[33867]: E0219 03:23:18.741635 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:18.842434 master-0 kubenswrapper[33867]: E0219 03:23:18.842317 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:18.943448 master-0 kubenswrapper[33867]: E0219 03:23:18.943323 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:19.044133 master-0 kubenswrapper[33867]: E0219 03:23:19.044062 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:19.144679 master-0 kubenswrapper[33867]: E0219 03:23:19.144594 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:19.245501 master-0 kubenswrapper[33867]: E0219 03:23:19.245361 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:19.346255 master-0 kubenswrapper[33867]: E0219 03:23:19.346153 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:19.447050 master-0 kubenswrapper[33867]: E0219 03:23:19.446935 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:19.547634 master-0 
kubenswrapper[33867]: E0219 03:23:19.547468 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:19.647976 master-0 kubenswrapper[33867]: E0219 03:23:19.647901 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:19.748446 master-0 kubenswrapper[33867]: E0219 03:23:19.748335 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:19.849465 master-0 kubenswrapper[33867]: E0219 03:23:19.849294 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:19.950797 master-0 kubenswrapper[33867]: E0219 03:23:19.950725 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:20.050923 master-0 kubenswrapper[33867]: E0219 03:23:20.050857 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:20.151598 master-0 kubenswrapper[33867]: E0219 03:23:20.151544 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:20.233590 master-0 kubenswrapper[33867]: I0219 03:23:20.233515 33867 generic.go:334] "Generic (PLEG): container finished" podID="3fab5bbd-672c-4e18-9c1e-438e2360bc54" containerID="efd7a12795a097f3f4ab229c7e4cfe83afd7b3d6586c831bcff29d6a1d12a9eb" exitCode=0 Feb 19 03:23:20.252857 master-0 kubenswrapper[33867]: E0219 03:23:20.252760 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:20.353561 master-0 kubenswrapper[33867]: E0219 03:23:20.353457 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:20.454588 master-0 kubenswrapper[33867]: E0219 03:23:20.454418 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:20.555472 master-0 kubenswrapper[33867]: E0219 03:23:20.555390 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:20.656080 master-0 kubenswrapper[33867]: E0219 03:23:20.655955 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:20.756673 master-0 kubenswrapper[33867]: E0219 03:23:20.756500 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:20.857424 master-0 kubenswrapper[33867]: E0219 03:23:20.857315 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:20.957911 master-0 kubenswrapper[33867]: E0219 03:23:20.957846 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:21.058473 master-0 kubenswrapper[33867]: E0219 03:23:21.058333 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:21.158817 master-0 kubenswrapper[33867]: E0219 03:23:21.158751 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:21.258147 master-0 kubenswrapper[33867]: E0219 03:23:21.258044 33867 kubelet.go:2359] 
"Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 03:23:21.259264 master-0 kubenswrapper[33867]: E0219 03:23:21.259206 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:21.359927 master-0 kubenswrapper[33867]: E0219 03:23:21.359789 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:21.460703 master-0 kubenswrapper[33867]: E0219 03:23:21.460607 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:21.561283 master-0 kubenswrapper[33867]: E0219 03:23:21.561209 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:21.661879 master-0 kubenswrapper[33867]: E0219 03:23:21.661774 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:21.762513 master-0 kubenswrapper[33867]: E0219 03:23:21.762397 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:21.863445 master-0 kubenswrapper[33867]: E0219 03:23:21.863361 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:21.964641 master-0 kubenswrapper[33867]: E0219 03:23:21.964492 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:22.065322 master-0 kubenswrapper[33867]: E0219 03:23:22.065196 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:22.165921 master-0 kubenswrapper[33867]: E0219 03:23:22.165847 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:22.266852 master-0 kubenswrapper[33867]: E0219 03:23:22.266686 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:22.367429 master-0 kubenswrapper[33867]: E0219 03:23:22.367328 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:22.467551 master-0 kubenswrapper[33867]: E0219 03:23:22.467473 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:22.568150 master-0 kubenswrapper[33867]: E0219 03:23:22.567980 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:22.668831 master-0 kubenswrapper[33867]: E0219 03:23:22.668711 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:22.769342 master-0 kubenswrapper[33867]: E0219 03:23:22.769196 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:22.870415 master-0 kubenswrapper[33867]: E0219 03:23:22.870135 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:22.970420 master-0 kubenswrapper[33867]: E0219 03:23:22.970316 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:23.070964 master-0 kubenswrapper[33867]: 
E0219 03:23:23.070877 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:23.171404 master-0 kubenswrapper[33867]: E0219 03:23:23.171333 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:23.272431 master-0 kubenswrapper[33867]: E0219 03:23:23.272335 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:23.373250 master-0 kubenswrapper[33867]: E0219 03:23:23.373131 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:23.474486 master-0 kubenswrapper[33867]: E0219 03:23:23.474341 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:23.574666 master-0 kubenswrapper[33867]: E0219 03:23:23.574593 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:23.674782 master-0 kubenswrapper[33867]: E0219 03:23:23.674722 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:23.774928 master-0 kubenswrapper[33867]: E0219 03:23:23.774814 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:23.875903 master-0 kubenswrapper[33867]: E0219 03:23:23.875834 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:23.976439 master-0 kubenswrapper[33867]: E0219 03:23:23.976393 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:24.077532 master-0 kubenswrapper[33867]: E0219 03:23:24.077447 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:24.178148 master-0 kubenswrapper[33867]: E0219 03:23:24.178075 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:24.268722 master-0 kubenswrapper[33867]: I0219 03:23:24.268662 33867 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="d18413342a722838be3aeba368600d701226af1bb0655a2558eb4a099c9c2796" exitCode=0 Feb 19 03:23:24.278866 master-0 kubenswrapper[33867]: E0219 03:23:24.278804 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:24.380021 master-0 kubenswrapper[33867]: E0219 03:23:24.379964 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:24.480468 master-0 kubenswrapper[33867]: E0219 03:23:24.480417 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:24.580937 master-0 kubenswrapper[33867]: E0219 03:23:24.580870 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:24.681710 master-0 kubenswrapper[33867]: E0219 03:23:24.681524 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:24.782646 master-0 kubenswrapper[33867]: E0219 03:23:24.782514 33867 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"master-0\" not found" Feb 19 03:23:24.884113 master-0 kubenswrapper[33867]: E0219 03:23:24.884037 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:24.896579 master-0 kubenswrapper[33867]: I0219 03:23:24.896500 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:24.896579 master-0 kubenswrapper[33867]: W0219 03:23:24.896520 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:24.896994 master-0 kubenswrapper[33867]: I0219 03:23:24.896629 33867 trace.go:236] Trace[537344824]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 03:23:14.872) (total time: 10024ms): Feb 19 03:23:24.896994 master-0 kubenswrapper[33867]: Trace[537344824]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused 10024ms (03:23:24.896) Feb 19 03:23:24.896994 master-0 kubenswrapper[33867]: Trace[537344824]: [10.024213558s] [10.024213558s] END Feb 19 03:23:24.896994 master-0 kubenswrapper[33867]: W0219 03:23:24.896591 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:24.896994 master-0 kubenswrapper[33867]: E0219 03:23:24.896667 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:24.896994 master-0 kubenswrapper[33867]: I0219 03:23:24.896696 33867 trace.go:236] Trace[805971063]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 03:23:14.872) (total time: 10024ms): Feb 19 03:23:24.896994 master-0 kubenswrapper[33867]: Trace[805971063]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused 10024ms (03:23:24.896) Feb 19 03:23:24.896994 master-0 kubenswrapper[33867]: Trace[805971063]: [10.024222968s] [10.024222968s] END Feb 19 03:23:24.896994 master-0 kubenswrapper[33867]: E0219 03:23:24.896804 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:24.900560 master-0 kubenswrapper[33867]: E0219 03:23:24.900345 33867 event.go:368] "Unable to 
write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189587d91579ae47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:23:14.878107207 +0000 UTC m=+0.174777818,LastTimestamp:2026-02-19 03:23:14.878107207 +0000 UTC m=+0.174777818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:23:24.925383 master-0 kubenswrapper[33867]: E0219 03:23:24.925242 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="200ms" Feb 19 03:23:24.926361 master-0 kubenswrapper[33867]: W0219 03:23:24.926209 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:24.926361 master-0 kubenswrapper[33867]: I0219 03:23:24.926329 33867 trace.go:236] Trace[1638528620]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 03:23:14.916) (total time: 10009ms): Feb 19 03:23:24.926361 master-0 kubenswrapper[33867]: Trace[1638528620]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused 10009ms (03:23:24.926) Feb 19 03:23:24.926361 master-0 kubenswrapper[33867]: Trace[1638528620]: [10.009366658s] [10.009366658s] END Feb 19 03:23:24.926833 master-0 kubenswrapper[33867]: E0219 03:23:24.926368 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:24.966317 master-0 kubenswrapper[33867]: W0219 03:23:24.966221 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:24.966548 master-0 kubenswrapper[33867]: I0219 03:23:24.966335 33867 trace.go:236] Trace[406995577]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 03:23:14.954) (total time: 10011ms): Feb 19 03:23:24.966548 master-0 kubenswrapper[33867]: Trace[406995577]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused 10011ms (03:23:24.966) Feb 19 03:23:24.966548 master-0 kubenswrapper[33867]: Trace[406995577]: [10.011995783s] [10.011995783s] END Feb 19 03:23:24.966548 master-0 kubenswrapper[33867]: E0219 
03:23:24.966365 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:24.984811 master-0 kubenswrapper[33867]: E0219 03:23:24.984742 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:25.085018 master-0 kubenswrapper[33867]: E0219 03:23:25.084931 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:25.130442 master-0 kubenswrapper[33867]: E0219 03:23:25.130391 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 19 03:23:25.185674 master-0 kubenswrapper[33867]: E0219 03:23:25.185600 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:25.286194 master-0 kubenswrapper[33867]: E0219 03:23:25.285996 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:25.386621 master-0 kubenswrapper[33867]: E0219 03:23:25.386575 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:25.487396 master-0 kubenswrapper[33867]: E0219 03:23:25.487335 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:25.532187 master-0 kubenswrapper[33867]: E0219 03:23:25.532087 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 19 03:23:25.587909 master-0 kubenswrapper[33867]: E0219 03:23:25.587748 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:25.630349 master-0 kubenswrapper[33867]: E0219 03:23:25.630305 33867 webhook.go:269] Failed to make webhook authorizer request: Post "https://api-int.sno.openstack.lab:6443/apis/authorization.k8s.io/v1/subjectaccessreviews": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:25.630349 master-0 kubenswrapper[33867]: E0219 03:23:25.630344 33867 server.go:324] "Authorization error" err="Post \"https://api-int.sno.openstack.lab:6443/apis/authorization.k8s.io/v1/subjectaccessreviews\": dial tcp 192.168.32.10:6443: connect: connection refused" user="system:serviceaccount:openshift-monitoring:prometheus-k8s" verb="get" resource="nodes" subresource="metrics" Feb 19 03:23:25.688552 master-0 kubenswrapper[33867]: E0219 03:23:25.688482 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:25.788948 master-0 kubenswrapper[33867]: E0219 03:23:25.788893 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:25.812724 master-0 kubenswrapper[33867]: W0219 03:23:25.812652 33867 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:25.813152 master-0 kubenswrapper[33867]: E0219 03:23:25.813113 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:25.820939 master-0 kubenswrapper[33867]: W0219 03:23:25.820896 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:25.821200 master-0 kubenswrapper[33867]: E0219 03:23:25.821164 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:25.860697 master-0 kubenswrapper[33867]: E0219 03:23:25.860494 33867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189587d91579ae47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:23:14.878107207 +0000 UTC m=+0.174777818,LastTimestamp:2026-02-19 03:23:14.878107207 +0000 UTC m=+0.174777818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:23:25.889448 master-0 kubenswrapper[33867]: E0219 03:23:25.889374 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:25.897557 master-0 kubenswrapper[33867]: I0219 03:23:25.897516 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:25.897912 master-0 kubenswrapper[33867]: W0219 03:23:25.897875 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:25.898109 master-0 kubenswrapper[33867]: E0219 03:23:25.898078 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:25.989924 master-0 kubenswrapper[33867]: E0219 03:23:25.989878 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:26.091237 master-0 kubenswrapper[33867]: E0219 03:23:26.091161 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:26.191815 master-0 kubenswrapper[33867]: E0219 03:23:26.191749 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:26.258472 master-0 kubenswrapper[33867]: E0219 03:23:26.258351 33867 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 03:23:26.292292 master-0 kubenswrapper[33867]: E0219 03:23:26.292173 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:26.334312 master-0 kubenswrapper[33867]: E0219 03:23:26.334192 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 19 03:23:26.392956 master-0 kubenswrapper[33867]: E0219 03:23:26.392893 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:26.488218 master-0 kubenswrapper[33867]: W0219 03:23:26.488104 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:26.488588 master-0 kubenswrapper[33867]: E0219 03:23:26.488561 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:26.494051 master-0 kubenswrapper[33867]: E0219 03:23:26.494024 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:26.594793 master-0 kubenswrapper[33867]: E0219 03:23:26.594744 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:26.695391 master-0 kubenswrapper[33867]: E0219 03:23:26.695359 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:26.795899 master-0 kubenswrapper[33867]: E0219 03:23:26.795747 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:26.896629 master-0 kubenswrapper[33867]: E0219 03:23:26.896572 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:26.897762 master-0 kubenswrapper[33867]: I0219 03:23:26.897692 33867 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:26.997661 master-0 kubenswrapper[33867]: E0219 03:23:26.997565 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:27.098735 master-0 kubenswrapper[33867]: E0219 03:23:27.098589 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:27.199534 master-0 kubenswrapper[33867]: E0219 03:23:27.199458 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:27.300670 master-0 kubenswrapper[33867]: E0219 03:23:27.300616 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:27.401224 master-0 kubenswrapper[33867]: E0219 03:23:27.401141 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:27.470284 master-0 kubenswrapper[33867]: W0219 03:23:27.470179 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:27.470284 master-0 kubenswrapper[33867]: E0219 03:23:27.470250 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:27.479718 master-0 kubenswrapper[33867]: W0219 03:23:27.479656 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:27.479718 master-0 kubenswrapper[33867]: E0219 03:23:27.479717 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:27.501591 master-0 kubenswrapper[33867]: E0219 03:23:27.501516 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:27.602004 master-0 kubenswrapper[33867]: E0219 03:23:27.601928 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:27.702230 master-0 kubenswrapper[33867]: E0219 03:23:27.702041 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:27.802391 master-0 kubenswrapper[33867]: E0219 03:23:27.802301 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:27.898451 master-0 kubenswrapper[33867]: I0219 03:23:27.898391 33867 csi_plugin.go:884] Failed 
to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:27.902516 master-0 kubenswrapper[33867]: E0219 03:23:27.902464 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:27.935601 master-0 kubenswrapper[33867]: E0219 03:23:27.935542 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 19 03:23:28.003147 master-0 kubenswrapper[33867]: E0219 03:23:28.002968 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:28.104005 master-0 kubenswrapper[33867]: E0219 03:23:28.103905 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:28.204676 master-0 kubenswrapper[33867]: E0219 03:23:28.204609 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:28.305123 master-0 kubenswrapper[33867]: E0219 03:23:28.304974 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:28.405731 master-0 kubenswrapper[33867]: E0219 03:23:28.405666 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:28.506466 master-0 kubenswrapper[33867]: E0219 03:23:28.506356 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:28.510138 master-0 kubenswrapper[33867]: W0219 03:23:28.510041 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:28.510349 master-0 kubenswrapper[33867]: E0219 03:23:28.510152 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:28.607073 master-0 kubenswrapper[33867]: E0219 03:23:28.606874 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:28.707150 master-0 kubenswrapper[33867]: E0219 03:23:28.706998 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:28.808354 master-0 kubenswrapper[33867]: E0219 03:23:28.808222 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:28.898968 master-0 kubenswrapper[33867]: I0219 03:23:28.898858 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: 
connection refused Feb 19 03:23:28.908934 master-0 kubenswrapper[33867]: E0219 03:23:28.908888 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:29.009888 master-0 kubenswrapper[33867]: E0219 03:23:29.009793 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:29.110744 master-0 kubenswrapper[33867]: E0219 03:23:29.110655 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:29.211100 master-0 kubenswrapper[33867]: E0219 03:23:29.210907 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:29.289114 master-0 kubenswrapper[33867]: W0219 03:23:29.288958 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:29.289114 master-0 kubenswrapper[33867]: E0219 03:23:29.289103 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:29.311226 master-0 kubenswrapper[33867]: E0219 03:23:29.311102 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:29.411935 master-0 kubenswrapper[33867]: E0219 03:23:29.411820 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:29.512412 master-0 kubenswrapper[33867]: E0219 03:23:29.512217 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:29.613494 master-0 kubenswrapper[33867]: E0219 03:23:29.613351 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:29.714037 master-0 kubenswrapper[33867]: E0219 03:23:29.713924 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:29.815229 master-0 kubenswrapper[33867]: E0219 03:23:29.815028 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:29.898906 master-0 kubenswrapper[33867]: I0219 03:23:29.898793 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:29.916125 master-0 kubenswrapper[33867]: E0219 03:23:29.916046 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:30.017138 master-0 kubenswrapper[33867]: E0219 03:23:30.017030 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:30.117956 master-0 kubenswrapper[33867]: E0219 03:23:30.117769 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"master-0\" not found" Feb 19 03:23:30.218644 master-0 kubenswrapper[33867]: E0219 03:23:30.218556 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:30.319360 master-0 kubenswrapper[33867]: E0219 03:23:30.319245 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:30.419865 master-0 kubenswrapper[33867]: E0219 03:23:30.419784 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:30.520476 master-0 kubenswrapper[33867]: E0219 03:23:30.520417 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:30.620983 master-0 kubenswrapper[33867]: E0219 03:23:30.620907 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:30.721395 master-0 kubenswrapper[33867]: E0219 03:23:30.721229 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:30.822275 master-0 kubenswrapper[33867]: E0219 03:23:30.822197 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:30.898965 master-0 kubenswrapper[33867]: I0219 03:23:30.898847 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:30.922993 master-0 kubenswrapper[33867]: E0219 03:23:30.922763 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:31.023313 master-0 kubenswrapper[33867]: E0219 03:23:31.023128 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:31.123942 master-0 kubenswrapper[33867]: E0219 03:23:31.123852 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:31.137202 master-0 kubenswrapper[33867]: E0219 03:23:31.137134 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 19 03:23:31.225088 master-0 kubenswrapper[33867]: E0219 03:23:31.224938 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:31.259356 master-0 kubenswrapper[33867]: E0219 03:23:31.259308 33867 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 03:23:31.326194 master-0 kubenswrapper[33867]: E0219 03:23:31.326075 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:31.426610 master-0 kubenswrapper[33867]: E0219 03:23:31.426511 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:31.451130 master-0 kubenswrapper[33867]: W0219 03:23:31.451007 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:31.451130 master-0 kubenswrapper[33867]: E0219 03:23:31.451116 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:31.527238 master-0 kubenswrapper[33867]: E0219 03:23:31.527134 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:31.628413 master-0 kubenswrapper[33867]: E0219 03:23:31.628253 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:31.729335 master-0 kubenswrapper[33867]: E0219 03:23:31.729237 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:31.830179 master-0 kubenswrapper[33867]: E0219 03:23:31.830065 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:31.899008 master-0 kubenswrapper[33867]: I0219 03:23:31.898817 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:31.930297 master-0 kubenswrapper[33867]: E0219 03:23:31.930199 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:32.030661 master-0 kubenswrapper[33867]: E0219 03:23:32.030587 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:32.043880 master-0 kubenswrapper[33867]: W0219 03:23:32.043753 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:32.044155 master-0 kubenswrapper[33867]: E0219 03:23:32.043887 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:32.067505 master-0 kubenswrapper[33867]: W0219 03:23:32.067380 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:32.067505 master-0 kubenswrapper[33867]: E0219 03:23:32.067486 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 
192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:32.131195 master-0 kubenswrapper[33867]: E0219 03:23:32.131110 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:32.232065 master-0 kubenswrapper[33867]: E0219 03:23:32.231871 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:32.332140 master-0 kubenswrapper[33867]: E0219 03:23:32.332057 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:32.433148 master-0 kubenswrapper[33867]: E0219 03:23:32.433054 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:32.533953 master-0 kubenswrapper[33867]: E0219 03:23:32.533720 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:32.634464 master-0 kubenswrapper[33867]: E0219 03:23:32.634382 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:32.735541 master-0 kubenswrapper[33867]: E0219 03:23:32.735373 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:32.836095 master-0 kubenswrapper[33867]: E0219 03:23:32.835913 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:32.899512 master-0 kubenswrapper[33867]: I0219 03:23:32.899358 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:32.937062 master-0 kubenswrapper[33867]: E0219 03:23:32.936967 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:33.038405 master-0 kubenswrapper[33867]: E0219 03:23:33.038328 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:33.139157 master-0 kubenswrapper[33867]: E0219 03:23:33.139042 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:33.240309 master-0 kubenswrapper[33867]: E0219 03:23:33.240200 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:33.340588 master-0 kubenswrapper[33867]: E0219 03:23:33.340519 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:33.441794 master-0 kubenswrapper[33867]: E0219 03:23:33.441643 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:33.499714 master-0 kubenswrapper[33867]: W0219 03:23:33.499613 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:33.499714 master-0 kubenswrapper[33867]: E0219 03:23:33.499712 33867 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:33.542402 master-0 kubenswrapper[33867]: E0219 03:23:33.542321 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:33.643173 master-0 kubenswrapper[33867]: E0219 03:23:33.643079 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:33.743642 master-0 kubenswrapper[33867]: E0219 03:23:33.743445 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:33.844908 master-0 kubenswrapper[33867]: E0219 03:23:33.844827 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:33.898645 master-0 kubenswrapper[33867]: I0219 03:23:33.898527 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:33.945942 master-0 kubenswrapper[33867]: E0219 03:23:33.945861 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:34.046372 master-0 kubenswrapper[33867]: E0219 03:23:34.046187 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:34.146830 master-0 kubenswrapper[33867]: E0219 03:23:34.146753 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:34.247689 master-0 kubenswrapper[33867]: E0219 03:23:34.247550 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:34.348028 master-0 kubenswrapper[33867]: E0219 03:23:34.347901 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:34.448421 master-0 kubenswrapper[33867]: E0219 03:23:34.448245 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:34.549464 master-0 kubenswrapper[33867]: E0219 03:23:34.549357 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:34.650240 master-0 kubenswrapper[33867]: E0219 03:23:34.650154 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:34.751194 master-0 kubenswrapper[33867]: E0219 03:23:34.751072 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:34.851558 master-0 kubenswrapper[33867]: E0219 03:23:34.851468 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:34.898117 master-0 kubenswrapper[33867]: I0219 03:23:34.897997 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": 
dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:34.952485 master-0 kubenswrapper[33867]: E0219 03:23:34.952314 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:35.052846 master-0 kubenswrapper[33867]: E0219 03:23:35.052768 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:35.153414 master-0 kubenswrapper[33867]: E0219 03:23:35.153332 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:35.253951 master-0 kubenswrapper[33867]: E0219 03:23:35.253784 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:35.354403 master-0 kubenswrapper[33867]: E0219 03:23:35.354327 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:35.454622 master-0 kubenswrapper[33867]: E0219 03:23:35.454515 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:35.555762 master-0 kubenswrapper[33867]: E0219 03:23:35.555575 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:35.616312 master-0 kubenswrapper[33867]: E0219 03:23:35.616227 33867 webhook.go:269] Failed to make webhook authorizer request: Post "https://api-int.sno.openstack.lab:6443/apis/authorization.k8s.io/v1/subjectaccessreviews": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:35.616312 master-0 kubenswrapper[33867]: E0219 03:23:35.616289 33867 server.go:324] "Authorization error" err="Post \"https://api-int.sno.openstack.lab:6443/apis/authorization.k8s.io/v1/subjectaccessreviews\": dial tcp 192.168.32.10:6443: connect: connection refused" user="system:serviceaccount:openshift-monitoring:prometheus-k8s" verb="get" resource="nodes" subresource="metrics" Feb 19 03:23:35.655786 master-0 kubenswrapper[33867]: E0219 03:23:35.655726 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:35.756574 master-0 kubenswrapper[33867]: E0219 03:23:35.756474 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:35.856758 master-0 kubenswrapper[33867]: E0219 03:23:35.856572 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:35.863046 master-0 kubenswrapper[33867]: E0219 03:23:35.862857 33867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189587d91579ae47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:23:14.878107207 +0000 UTC m=+0.174777818,LastTimestamp:2026-02-19 03:23:14.878107207 +0000 UTC m=+0.174777818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 
03:23:35.899246 master-0 kubenswrapper[33867]: I0219 03:23:35.899148 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:35.957234 master-0 kubenswrapper[33867]: E0219 03:23:35.957135 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:36.057584 master-0 kubenswrapper[33867]: E0219 03:23:36.057492 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:36.157999 master-0 kubenswrapper[33867]: E0219 03:23:36.157920 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:36.258253 master-0 kubenswrapper[33867]: E0219 03:23:36.258193 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:36.260124 master-0 kubenswrapper[33867]: E0219 03:23:36.260077 33867 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 03:23:36.358791 master-0 kubenswrapper[33867]: E0219 03:23:36.358718 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:36.460056 master-0 kubenswrapper[33867]: E0219 03:23:36.459895 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:36.560551 master-0 kubenswrapper[33867]: E0219 03:23:36.560469 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:36.660760 master-0 kubenswrapper[33867]: E0219 03:23:36.660717 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:36.760987 master-0 kubenswrapper[33867]: E0219 03:23:36.760836 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:36.862045 master-0 kubenswrapper[33867]: E0219 03:23:36.861952 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:36.898069 master-0 kubenswrapper[33867]: I0219 03:23:36.897985 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:36.962403 master-0 kubenswrapper[33867]: E0219 03:23:36.962340 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:37.063099 master-0 kubenswrapper[33867]: E0219 03:23:37.062900 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:37.163983 master-0 kubenswrapper[33867]: E0219 03:23:37.163897 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:37.264814 master-0 kubenswrapper[33867]: E0219 03:23:37.264719 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:37.365813 master-0 kubenswrapper[33867]: E0219 
03:23:37.365599 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:37.466144 master-0 kubenswrapper[33867]: E0219 03:23:37.466029 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:37.538849 master-0 kubenswrapper[33867]: E0219 03:23:37.538780 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Feb 19 03:23:37.566987 master-0 kubenswrapper[33867]: E0219 03:23:37.566915 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:37.667725 master-0 kubenswrapper[33867]: E0219 03:23:37.667647 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:37.768145 master-0 kubenswrapper[33867]: E0219 03:23:37.768071 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:37.869620 master-0 kubenswrapper[33867]: E0219 03:23:37.869521 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:37.897969 master-0 kubenswrapper[33867]: I0219 03:23:37.897890 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:37.970379 master-0 kubenswrapper[33867]: E0219 03:23:37.970131 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:38.070606 master-0 kubenswrapper[33867]: E0219 03:23:38.070526 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:38.171727 master-0 kubenswrapper[33867]: E0219 03:23:38.171629 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:38.272887 master-0 kubenswrapper[33867]: E0219 03:23:38.272739 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:38.373673 master-0 kubenswrapper[33867]: E0219 03:23:38.373594 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:38.474294 master-0 kubenswrapper[33867]: E0219 03:23:38.474178 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:38.575464 master-0 kubenswrapper[33867]: E0219 03:23:38.575331 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:38.675947 master-0 kubenswrapper[33867]: E0219 03:23:38.675832 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:38.776545 master-0 kubenswrapper[33867]: E0219 03:23:38.776474 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:38.877230 master-0 kubenswrapper[33867]: E0219 03:23:38.877170 33867 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:38.898485 master-0 kubenswrapper[33867]: I0219 03:23:38.898420 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:38.978311 master-0 kubenswrapper[33867]: E0219 03:23:38.978195 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:39.079456 master-0 kubenswrapper[33867]: E0219 03:23:39.079346 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:39.180283 master-0 kubenswrapper[33867]: E0219 03:23:39.180127 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:39.281143 master-0 kubenswrapper[33867]: E0219 03:23:39.281074 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:39.381813 master-0 kubenswrapper[33867]: E0219 03:23:39.381713 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:39.482781 master-0 kubenswrapper[33867]: E0219 03:23:39.482611 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:39.583437 master-0 kubenswrapper[33867]: E0219 03:23:39.583295 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:39.684289 master-0 kubenswrapper[33867]: E0219 03:23:39.684186 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:39.785316 master-0 kubenswrapper[33867]: E0219 03:23:39.785064 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:39.886074 master-0 kubenswrapper[33867]: E0219 03:23:39.885980 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:39.898939 master-0 kubenswrapper[33867]: I0219 03:23:39.898828 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:39.986848 master-0 kubenswrapper[33867]: E0219 03:23:39.986745 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:40.087820 master-0 kubenswrapper[33867]: E0219 03:23:40.087639 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:40.188590 master-0 kubenswrapper[33867]: E0219 03:23:40.188480 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:40.248520 master-0 kubenswrapper[33867]: W0219 03:23:40.248384 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection 
refused Feb 19 03:23:40.248917 master-0 kubenswrapper[33867]: E0219 03:23:40.248531 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:40.289631 master-0 kubenswrapper[33867]: E0219 03:23:40.289563 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:40.389906 master-0 kubenswrapper[33867]: E0219 03:23:40.389842 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:40.491033 master-0 kubenswrapper[33867]: E0219 03:23:40.490973 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:40.592559 master-0 kubenswrapper[33867]: E0219 03:23:40.592473 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:40.693081 master-0 kubenswrapper[33867]: E0219 03:23:40.692963 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:40.793537 master-0 kubenswrapper[33867]: E0219 03:23:40.793459 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:40.893948 master-0 kubenswrapper[33867]: E0219 03:23:40.893882 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:40.898247 master-0 kubenswrapper[33867]: I0219 03:23:40.898185 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:40.994576 master-0 kubenswrapper[33867]: E0219 03:23:40.994417 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:41.094672 master-0 kubenswrapper[33867]: E0219 03:23:41.094594 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:41.195707 master-0 kubenswrapper[33867]: E0219 03:23:41.195641 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:41.261193 master-0 kubenswrapper[33867]: E0219 03:23:41.260716 33867 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 03:23:41.296413 master-0 kubenswrapper[33867]: E0219 03:23:41.296349 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:41.397109 master-0 kubenswrapper[33867]: E0219 03:23:41.397027 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:41.497294 master-0 kubenswrapper[33867]: E0219 03:23:41.497173 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:41.598019 master-0 kubenswrapper[33867]: E0219 03:23:41.597812 33867 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"master-0\" not found" Feb 19 03:23:41.698465 master-0 kubenswrapper[33867]: E0219 03:23:41.698376 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:41.798876 master-0 kubenswrapper[33867]: E0219 03:23:41.798812 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:41.898357 master-0 kubenswrapper[33867]: I0219 03:23:41.898299 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:41.899477 master-0 kubenswrapper[33867]: E0219 03:23:41.899447 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:41.999971 master-0 kubenswrapper[33867]: E0219 03:23:41.999902 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:42.100355 master-0 kubenswrapper[33867]: E0219 03:23:42.100199 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:42.200922 master-0 kubenswrapper[33867]: E0219 03:23:42.200746 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:42.301568 master-0 kubenswrapper[33867]: E0219 03:23:42.301499 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:42.325051 master-0 kubenswrapper[33867]: W0219 03:23:42.324943 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:42.325279 master-0 kubenswrapper[33867]: E0219 03:23:42.325068 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:42.402122 master-0 kubenswrapper[33867]: E0219 03:23:42.402042 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:42.502569 master-0 kubenswrapper[33867]: E0219 03:23:42.502396 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:42.602894 master-0 kubenswrapper[33867]: E0219 03:23:42.602823 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:42.703530 master-0 kubenswrapper[33867]: E0219 03:23:42.703456 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:42.804501 master-0 kubenswrapper[33867]: E0219 03:23:42.804353 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:42.847161 master-0 kubenswrapper[33867]: W0219 03:23:42.847059 33867 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:42.847161 master-0 kubenswrapper[33867]: E0219 03:23:42.847160 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:42.897665 master-0 kubenswrapper[33867]: I0219 03:23:42.897598 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:42.904759 master-0 kubenswrapper[33867]: E0219 03:23:42.904712 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:43.005144 master-0 kubenswrapper[33867]: E0219 03:23:43.005074 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:43.106418 master-0 kubenswrapper[33867]: E0219 03:23:43.106221 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:43.207511 master-0 kubenswrapper[33867]: E0219 03:23:43.207446 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:43.308054 master-0 kubenswrapper[33867]: E0219 03:23:43.307979 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:43.408775 master-0 kubenswrapper[33867]: E0219 03:23:43.408696 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:43.509793 master-0 kubenswrapper[33867]: E0219 03:23:43.509680 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:43.593364 master-0 kubenswrapper[33867]: W0219 03:23:43.593215 33867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:43.593598 master-0 kubenswrapper[33867]: E0219 03:23:43.593384 33867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 19 03:23:43.610877 master-0 kubenswrapper[33867]: E0219 03:23:43.610820 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:43.711129 master-0 kubenswrapper[33867]: E0219 03:23:43.710964 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:43.811943 master-0 kubenswrapper[33867]: E0219 
03:23:43.811875 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:43.898472 master-0 kubenswrapper[33867]: I0219 03:23:43.898378 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:43.912648 master-0 kubenswrapper[33867]: E0219 03:23:43.912509 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:44.013000 master-0 kubenswrapper[33867]: E0219 03:23:44.012843 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:44.114082 master-0 kubenswrapper[33867]: E0219 03:23:44.113992 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:44.215200 master-0 kubenswrapper[33867]: E0219 03:23:44.215118 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:44.315559 master-0 kubenswrapper[33867]: E0219 03:23:44.315380 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:44.416127 master-0 kubenswrapper[33867]: E0219 03:23:44.416041 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:44.516758 master-0 kubenswrapper[33867]: E0219 03:23:44.516686 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:44.541290 master-0 kubenswrapper[33867]: E0219 03:23:44.541163 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Feb 19 03:23:44.617347 master-0 kubenswrapper[33867]: E0219 03:23:44.617021 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:44.717566 master-0 kubenswrapper[33867]: E0219 03:23:44.717486 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:44.817702 master-0 kubenswrapper[33867]: E0219 03:23:44.817633 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:44.898513 master-0 kubenswrapper[33867]: I0219 03:23:44.898416 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:44.918633 master-0 kubenswrapper[33867]: E0219 03:23:44.918524 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:45.018980 master-0 kubenswrapper[33867]: E0219 03:23:45.018913 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:45.119924 master-0 kubenswrapper[33867]: E0219 03:23:45.119853 33867 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"master-0\" not found" Feb 19 03:23:45.220770 master-0 kubenswrapper[33867]: E0219 03:23:45.220603 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:45.321749 master-0 kubenswrapper[33867]: E0219 03:23:45.321681 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:45.421911 master-0 kubenswrapper[33867]: E0219 03:23:45.421831 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:45.523163 master-0 kubenswrapper[33867]: E0219 03:23:45.522994 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:45.623892 master-0 kubenswrapper[33867]: E0219 03:23:45.623831 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:45.724682 master-0 kubenswrapper[33867]: E0219 03:23:45.724621 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:45.825722 master-0 kubenswrapper[33867]: E0219 03:23:45.825573 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:45.864814 master-0 kubenswrapper[33867]: E0219 03:23:45.864627 33867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189587d91579ae47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:23:14.878107207 +0000 UTC m=+0.174777818,LastTimestamp:2026-02-19 03:23:14.878107207 +0000 UTC m=+0.174777818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:23:45.898038 master-0 kubenswrapper[33867]: I0219 03:23:45.897942 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:45.926166 master-0 kubenswrapper[33867]: E0219 03:23:45.926095 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:46.027248 master-0 kubenswrapper[33867]: E0219 03:23:46.027175 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:46.128343 master-0 kubenswrapper[33867]: E0219 03:23:46.128249 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:46.229481 master-0 kubenswrapper[33867]: E0219 03:23:46.229421 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:46.261781 master-0 kubenswrapper[33867]: E0219 03:23:46.261706 33867 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" 
Feb 19 03:23:46.329855 master-0 kubenswrapper[33867]: E0219 03:23:46.329801 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:46.432714 master-0 kubenswrapper[33867]: E0219 03:23:46.431569 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:46.532745 master-0 kubenswrapper[33867]: E0219 03:23:46.532688 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:46.633145 master-0 kubenswrapper[33867]: E0219 03:23:46.633056 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:46.734058 master-0 kubenswrapper[33867]: E0219 03:23:46.733912 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:46.834427 master-0 kubenswrapper[33867]: E0219 03:23:46.834352 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:46.897729 master-0 kubenswrapper[33867]: I0219 03:23:46.897647 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:46.928416 master-0 kubenswrapper[33867]: I0219 03:23:46.928305 33867 manager.go:324] Recovery completed Feb 19 03:23:46.934628 master-0 kubenswrapper[33867]: E0219 03:23:46.934564 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:47.030633 master-0 kubenswrapper[33867]: I0219 03:23:47.030509 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:47.034025 master-0 kubenswrapper[33867]: I0219 03:23:47.033995 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:47.034025 master-0 kubenswrapper[33867]: I0219 03:23:47.034023 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:47.034126 master-0 kubenswrapper[33867]: I0219 03:23:47.034032 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:47.034763 master-0 kubenswrapper[33867]: E0219 03:23:47.034674 33867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 19 03:23:47.039374 master-0 kubenswrapper[33867]: I0219 03:23:47.039338 33867 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 03:23:47.039374 master-0 kubenswrapper[33867]: I0219 03:23:47.039365 33867 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 19 03:23:47.039551 master-0 kubenswrapper[33867]: I0219 03:23:47.039407 33867 state_mem.go:36] "Initialized new in-memory state store" Feb 19 03:23:47.039694 master-0 kubenswrapper[33867]: I0219 03:23:47.039655 33867 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 19 03:23:47.039765 master-0 kubenswrapper[33867]: I0219 03:23:47.039683 33867 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 19 03:23:47.039765 master-0 kubenswrapper[33867]: I0219 03:23:47.039722 33867 state_checkpoint.go:136] "State checkpoint: restored state from 
checkpoint" Feb 19 03:23:47.039765 master-0 kubenswrapper[33867]: I0219 03:23:47.039735 33867 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 19 03:23:47.039765 master-0 kubenswrapper[33867]: I0219 03:23:47.039747 33867 policy_none.go:49] "None policy: Start" Feb 19 03:23:47.043846 master-0 kubenswrapper[33867]: I0219 03:23:47.043781 33867 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 19 03:23:47.044033 master-0 kubenswrapper[33867]: I0219 03:23:47.043856 33867 state_mem.go:35] "Initializing new in-memory state store" Feb 19 03:23:47.044220 master-0 kubenswrapper[33867]: I0219 03:23:47.044194 33867 state_mem.go:75] "Updated machine memory state" Feb 19 03:23:47.044399 master-0 kubenswrapper[33867]: I0219 03:23:47.044222 33867 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 19 03:23:47.065921 master-0 kubenswrapper[33867]: I0219 03:23:47.065866 33867 manager.go:334] "Starting Device Plugin manager" Feb 19 03:23:47.065921 master-0 kubenswrapper[33867]: I0219 03:23:47.065921 33867 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 03:23:47.066312 master-0 kubenswrapper[33867]: I0219 03:23:47.065933 33867 server.go:79] "Starting device plugin registration server" Feb 19 03:23:47.066430 master-0 kubenswrapper[33867]: I0219 03:23:47.066358 33867 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 03:23:47.066430 master-0 kubenswrapper[33867]: I0219 03:23:47.066375 33867 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 03:23:47.067117 master-0 kubenswrapper[33867]: I0219 03:23:47.066625 33867 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 03:23:47.067117 master-0 kubenswrapper[33867]: I0219 03:23:47.066971 33867 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 03:23:47.067117 master-0 kubenswrapper[33867]: I0219 03:23:47.066995 33867 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 03:23:47.080444 master-0 kubenswrapper[33867]: E0219 03:23:47.080399 33867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 19 03:23:47.166524 master-0 kubenswrapper[33867]: I0219 03:23:47.166475 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:47.168963 master-0 kubenswrapper[33867]: I0219 03:23:47.168894 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:47.168963 master-0 kubenswrapper[33867]: I0219 03:23:47.168941 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:47.168963 master-0 kubenswrapper[33867]: I0219 03:23:47.168949 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:47.168963 master-0 kubenswrapper[33867]: I0219 03:23:47.168971 33867 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:23:47.170051 master-0 kubenswrapper[33867]: E0219 03:23:47.169990 33867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" 
node="master-0" Feb 19 03:23:47.370934 master-0 kubenswrapper[33867]: I0219 03:23:47.370805 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:47.375025 master-0 kubenswrapper[33867]: I0219 03:23:47.374949 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:47.375025 master-0 kubenswrapper[33867]: I0219 03:23:47.375008 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:47.375025 master-0 kubenswrapper[33867]: I0219 03:23:47.375031 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:47.375323 master-0 kubenswrapper[33867]: I0219 03:23:47.375067 33867 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:23:47.376212 master-0 kubenswrapper[33867]: E0219 03:23:47.376128 33867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 19 03:23:47.457148 master-0 kubenswrapper[33867]: I0219 03:23:47.457082 33867 generic.go:334] "Generic (PLEG): container finished" podID="76470062-ab83-47ed-a669-deeb71996548" containerID="882c525babc52c3119968e9793962f24892225613582692392aa79601c39660e" exitCode=0 Feb 19 03:23:47.777011 master-0 kubenswrapper[33867]: I0219 03:23:47.776940 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:47.779599 master-0 kubenswrapper[33867]: I0219 03:23:47.779557 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:47.779599 master-0 kubenswrapper[33867]: I0219 03:23:47.779594 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:47.779785 master-0 kubenswrapper[33867]: I0219 03:23:47.779607 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:47.779785 master-0 kubenswrapper[33867]: I0219 03:23:47.779628 33867 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:23:47.780435 master-0 kubenswrapper[33867]: E0219 03:23:47.780386 33867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 19 03:23:47.898507 master-0 kubenswrapper[33867]: I0219 03:23:47.898432 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:48.581083 master-0 kubenswrapper[33867]: I0219 03:23:48.581013 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:48.584764 master-0 kubenswrapper[33867]: I0219 03:23:48.584715 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:48.584891 master-0 kubenswrapper[33867]: I0219 03:23:48.584779 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Feb 19 03:23:48.584891 master-0 kubenswrapper[33867]: I0219 03:23:48.584804 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:48.584891 master-0 kubenswrapper[33867]: I0219 03:23:48.584840 33867 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:23:48.586218 master-0 kubenswrapper[33867]: E0219 03:23:48.586161 33867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 19 03:23:48.898278 master-0 kubenswrapper[33867]: I0219 03:23:48.898187 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:49.897911 master-0 kubenswrapper[33867]: I0219 03:23:49.897821 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:50.187414 master-0 kubenswrapper[33867]: I0219 03:23:50.187246 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:50.189930 master-0 kubenswrapper[33867]: I0219 03:23:50.189829 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:50.189930 master-0 kubenswrapper[33867]: I0219 03:23:50.189920 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:50.189930 master-0 kubenswrapper[33867]: I0219 03:23:50.189937 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:50.190250 master-0 kubenswrapper[33867]: I0219 03:23:50.189970 33867 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:23:50.191478 master-0 kubenswrapper[33867]: E0219 03:23:50.191413 33867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 19 03:23:50.623657 master-0 kubenswrapper[33867]: E0219 03:23:50.623461 33867 webhook.go:269] Failed to make webhook authorizer request: Post "https://api-int.sno.openstack.lab:6443/apis/authorization.k8s.io/v1/subjectaccessreviews": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:50.623657 master-0 kubenswrapper[33867]: E0219 03:23:50.623563 33867 server.go:324] "Authorization error" err="Post \"https://api-int.sno.openstack.lab:6443/apis/authorization.k8s.io/v1/subjectaccessreviews\": dial tcp 192.168.32.10:6443: connect: connection refused" user="system:serviceaccount:openshift-monitoring:prometheus-k8s" verb="get" resource="nodes" subresource="metrics" Feb 19 03:23:50.898914 master-0 kubenswrapper[33867]: I0219 03:23:50.898789 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 
03:23:51.262966 master-0 kubenswrapper[33867]: I0219 03:23:51.262691 33867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 19 03:23:51.262966 master-0 kubenswrapper[33867]: I0219 03:23:51.262936 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.267945 master-0 kubenswrapper[33867]: I0219 03:23:51.267881 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.267945 master-0 kubenswrapper[33867]: I0219 03:23:51.267946 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.268174 master-0 kubenswrapper[33867]: I0219 03:23:51.267973 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.268248 master-0 kubenswrapper[33867]: I0219 03:23:51.268196 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.269867 master-0 kubenswrapper[33867]: I0219 03:23:51.269827 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.272306 master-0 kubenswrapper[33867]: I0219 03:23:51.272208 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.272306 master-0 kubenswrapper[33867]: I0219 03:23:51.272295 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.272517 master-0 kubenswrapper[33867]: I0219 03:23:51.272319 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.272517 master-0 kubenswrapper[33867]: I0219 03:23:51.272488 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.272989 master-0 kubenswrapper[33867]: I0219 03:23:51.272899 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:51.273091 master-0 kubenswrapper[33867]: I0219 03:23:51.273016 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.273523 master-0 kubenswrapper[33867]: I0219 03:23:51.273463 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.273673 master-0 kubenswrapper[33867]: I0219 03:23:51.273531 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.273673 master-0 kubenswrapper[33867]: I0219 03:23:51.273614 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.276645 master-0 kubenswrapper[33867]: I0219 03:23:51.276578 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.276645 master-0 kubenswrapper[33867]: I0219 03:23:51.276637 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.276916 master-0 kubenswrapper[33867]: I0219 03:23:51.276664 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.276916 master-0 kubenswrapper[33867]: I0219 03:23:51.276872 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.277127 master-0 kubenswrapper[33867]: I0219 03:23:51.277059 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.277235 master-0 kubenswrapper[33867]: I0219 03:23:51.277143 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.277796 master-0 kubenswrapper[33867]: I0219 03:23:51.277730 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.277796 master-0 kubenswrapper[33867]: I0219 03:23:51.277782 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.277796 master-0 kubenswrapper[33867]: I0219 03:23:51.277793 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.280896 master-0 kubenswrapper[33867]: I0219 03:23:51.280829 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.280896 master-0 kubenswrapper[33867]: I0219 03:23:51.280888 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.281660 master-0 kubenswrapper[33867]: I0219 03:23:51.280912 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.281660 master-0 kubenswrapper[33867]: I0219 03:23:51.280890 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.281660 master-0 kubenswrapper[33867]: I0219 03:23:51.281029 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.281660 master-0 
kubenswrapper[33867]: I0219 03:23:51.281045 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.281660 master-0 kubenswrapper[33867]: I0219 03:23:51.281107 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.281660 master-0 kubenswrapper[33867]: I0219 03:23:51.281311 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.285208 master-0 kubenswrapper[33867]: I0219 03:23:51.285150 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.285208 master-0 kubenswrapper[33867]: I0219 03:23:51.285178 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.285208 master-0 kubenswrapper[33867]: I0219 03:23:51.285189 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.285565 master-0 kubenswrapper[33867]: I0219 03:23:51.285310 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.285565 master-0 kubenswrapper[33867]: I0219 03:23:51.285349 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.285565 master-0 kubenswrapper[33867]: I0219 03:23:51.285373 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.285744 master-0 kubenswrapper[33867]: I0219 03:23:51.285621 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.285918 master-0 kubenswrapper[33867]: I0219 03:23:51.285855 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.290202 master-0 kubenswrapper[33867]: I0219 03:23:51.290038 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.290202 master-0 kubenswrapper[33867]: I0219 03:23:51.290097 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.290202 master-0 kubenswrapper[33867]: I0219 03:23:51.290137 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.290202 master-0 kubenswrapper[33867]: I0219 03:23:51.290161 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.290644 master-0 kubenswrapper[33867]: I0219 03:23:51.290105 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.290644 master-0 kubenswrapper[33867]: I0219 03:23:51.290284 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.290644 master-0 kubenswrapper[33867]: I0219 03:23:51.290377 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"14775efdcd2d21cfa5380cda6110ff7f11195c8d583c1e8fdfc52bf29df9ae57"} Feb 19 03:23:51.290644 master-0 kubenswrapper[33867]: I0219 
03:23:51.290602 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"16c3b004c40d76193f576d53169fed6e918160d971015a8fa3ff49332f28fdc1"} Feb 19 03:23:51.290644 master-0 kubenswrapper[33867]: I0219 03:23:51.290639 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"02a9fcc4ca7dc26983cfaa637ce8ae712974956ca9517abc25074ce302bff7b2"} Feb 19 03:23:51.290883 master-0 kubenswrapper[33867]: I0219 03:23:51.290669 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"f5e6e05e3e1d9ed0d5a9bb682a401139471a5c8f7de416f435b323b01ece0b32"} Feb 19 03:23:51.290883 master-0 kubenswrapper[33867]: I0219 03:23:51.290696 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"0b461f34d367324dba43f9d8dc1f9f03674c68ca7ee50c7c17368a3d5dc7170e"} Feb 19 03:23:51.290883 master-0 kubenswrapper[33867]: I0219 03:23:51.290724 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:51.290883 master-0 kubenswrapper[33867]: I0219 03:23:51.290723 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"60f5cf312ba315b685c25de92b9f8cc980f0c49a86698d8a695e2b600355cacd"} Feb 19 03:23:51.291017 master-0 kubenswrapper[33867]: I0219 03:23:51.290901 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"49d6109e593a1f6854e4a23b0f0809b7c8251c11ffac6d5d3c63dd533a448342"} Feb 19 03:23:51.291017 master-0 kubenswrapper[33867]: I0219 03:23:51.290932 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"6098282b64423ad9dddb84a69efced826ff8c34354a14bb5812b294431de3af7"} Feb 19 03:23:51.291076 master-0 kubenswrapper[33867]: I0219 03:23:51.291009 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="258078f280458482912939c3338c1981e998a321634b6785079948c05a69b5ce" Feb 19 03:23:51.291251 master-0 kubenswrapper[33867]: I0219 03:23:51.291216 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="951494debcdd0ff7db2f410b57e8c2c9ed7b3f2e54fda90b5fd97c799ae6ccba" Feb 19 03:23:51.291336 master-0 kubenswrapper[33867]: I0219 03:23:51.291298 33867 scope.go:117] "RemoveContainer" containerID="047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366" Feb 19 03:23:51.291373 master-0 kubenswrapper[33867]: I0219 03:23:51.291332 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c5e253906f92c4bc553e34db5acf8d0406570aeec90b10b8f3c9cf4861917cb" Feb 19 03:23:51.291602 master-0 kubenswrapper[33867]: I0219 03:23:51.291567 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92fdf51cd372b585439674ddd7f835c72abd8cc5f202f350b7be96246769df8c" Feb 19 03:23:51.292706 master-0 kubenswrapper[33867]: I0219 03:23:51.291613 33867 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="676fe9b8803826897eb9069682463435a484f2265769bbfbab612ab166fcad61" Feb 19 03:23:51.292706 master-0 kubenswrapper[33867]: I0219 03:23:51.292563 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b"} Feb 19 03:23:51.292706 master-0 kubenswrapper[33867]: I0219 03:23:51.292599 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa"} Feb 19 03:23:51.292706 master-0 kubenswrapper[33867]: I0219 03:23:51.292678 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerDied","Data":"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f"} Feb 19 03:23:51.293132 master-0 kubenswrapper[33867]: I0219 03:23:51.292767 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerDied","Data":"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3"} Feb 19 03:23:51.293132 master-0 kubenswrapper[33867]: I0219 03:23:51.292848 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b"} Feb 19 03:23:51.293132 master-0 kubenswrapper[33867]: I0219 03:23:51.292878 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706"} Feb 19 03:23:51.293132 master-0 kubenswrapper[33867]: I0219 03:23:51.292900 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"50eac3d8c63234f2a49e98044c0d4f67","Type":"ContainerStarted","Data":"5506ac36fbaf2416aa135b7e1945e22b7c62738888b7f9b117791bba76b3408f"} Feb 19 03:23:51.293895 master-0 kubenswrapper[33867]: I0219 03:23:51.293832 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a446ea9c6c338c0be1b08a79588f504347b99fd5d06b7e02469e7d9756ac6f" Feb 19 03:23:51.293953 master-0 kubenswrapper[33867]: I0219 03:23:51.293929 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d86702a952f96c82b209454f5a8421f9f15531387895bfc549a591987747f66a" Feb 19 03:23:51.294033 master-0 kubenswrapper[33867]: I0219 03:23:51.294001 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="965cde5ffa11aa0f8a6be0fd409b2352a9feb606c803fa2badb9392fcad23cdd" Feb 19 03:23:51.294209 master-0 kubenswrapper[33867]: I0219 03:23:51.294137 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d175ae5ada68becfd99d3a7dbdac8119e2b0cc096867b19b4c6fd448c8d63692" Feb 19 03:23:51.294281 master-0 
kubenswrapper[33867]: I0219 03:23:51.294206 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cd5bff57449ca5fcd515236a8abe6e347dc3b6ea4ab8480dc9821e2c6351f26" Feb 19 03:23:51.294332 master-0 kubenswrapper[33867]: I0219 03:23:51.294247 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1228d47520fd6381632379d9feaf41bd2b10ef0de8e7df209689151b5f65fdeb" Feb 19 03:23:51.294505 master-0 kubenswrapper[33867]: I0219 03:23:51.294429 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"5063a55beab9e17c44bf467460af64eb399204406812c9ae4e396f59fae30a15"} Feb 19 03:23:51.294505 master-0 kubenswrapper[33867]: I0219 03:23:51.294504 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"53d32d6e913448c501ea08b87db55bb0233a108aad73fab0d0903446a3305ceb"} Feb 19 03:23:51.294614 master-0 kubenswrapper[33867]: I0219 03:23:51.294540 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"057cad626bcfaec41c462ca1ec27ee5d9cbc1905800d5d8b5f0df0e891b48ec8"} Feb 19 03:23:51.294614 master-0 kubenswrapper[33867]: I0219 03:23:51.294561 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"45290d8cb3535a5ff36152b9fe01c07e69311de28833ad29a7500dad8cb6fd55"} Feb 19 03:23:51.294614 master-0 kubenswrapper[33867]: I0219 03:23:51.294577 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"2d484b07e94495906a9ef1c8f980fb107c93c95a40a52c0019224db82b51fc4d"} Feb 19 03:23:51.294614 master-0 kubenswrapper[33867]: I0219 03:23:51.294595 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"0cf7d392da6a301b93f30bcc03748c612e502b9e965838935f8e427396fbdf21"} Feb 19 03:23:51.294614 master-0 kubenswrapper[33867]: I0219 03:23:51.294613 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"d0fbcab1791c1fa93d0b8382e393526b12e53a1efcdb373eae2fce501c101408"} Feb 19 03:23:51.294797 master-0 kubenswrapper[33867]: I0219 03:23:51.294631 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerDied","Data":"ebeab0f2e4292264d96a63c87d2d2fdbec7d9f9a916fb23b3f013edea6328327"} Feb 19 03:23:51.294797 master-0 kubenswrapper[33867]: I0219 03:23:51.294657 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerDied","Data":"d4ec4e49d4dd98a02afe5ae82b828a0c598d3a1b8c49a3c9012f434a6bee2385"} Feb 19 
03:23:51.294797 master-0 kubenswrapper[33867]: I0219 03:23:51.294675 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"4ff0199536e5f54a5bdaa7868fb5ea7e61ffa31ff819b0546dd411cddd134f43"} Feb 19 03:23:51.295799 master-0 kubenswrapper[33867]: I0219 03:23:51.295747 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:51.295895 master-0 kubenswrapper[33867]: I0219 03:23:51.295806 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:51.295895 master-0 kubenswrapper[33867]: I0219 03:23:51.295821 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:51.319837 master-0 kubenswrapper[33867]: I0219 03:23:51.319739 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:51.320190 master-0 kubenswrapper[33867]: I0219 03:23:51.319842 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.320190 master-0 kubenswrapper[33867]: I0219 03:23:51.319945 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.320190 master-0 kubenswrapper[33867]: I0219 03:23:51.320001 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:23:51.320190 master-0 kubenswrapper[33867]: I0219 03:23:51.320054 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.320662 master-0 kubenswrapper[33867]: I0219 03:23:51.320213 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.320662 master-0 kubenswrapper[33867]: I0219 03:23:51.320306 33867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:23:51.320662 master-0 kubenswrapper[33867]: I0219 03:23:51.320362 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.320662 master-0 kubenswrapper[33867]: I0219 03:23:51.320412 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.320662 master-0 kubenswrapper[33867]: I0219 03:23:51.320464 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.320662 master-0 kubenswrapper[33867]: I0219 03:23:51.320513 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.320662 master-0 kubenswrapper[33867]: I0219 03:23:51.320589 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.320662 master-0 kubenswrapper[33867]: I0219 03:23:51.320636 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.321486 master-0 kubenswrapper[33867]: I0219 03:23:51.320689 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:51.321486 master-0 kubenswrapper[33867]: I0219 03:23:51.320738 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:51.321486 
master-0 kubenswrapper[33867]: I0219 03:23:51.320795 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.321486 master-0 kubenswrapper[33867]: I0219 03:23:51.320865 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:51.321486 master-0 kubenswrapper[33867]: I0219 03:23:51.320916 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:23:51.321486 master-0 kubenswrapper[33867]: I0219 03:23:51.320963 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:23:51.321486 master-0 kubenswrapper[33867]: I0219 03:23:51.321016 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:51.325494 master-0 kubenswrapper[33867]: I0219 03:23:51.325451 33867 scope.go:117] "RemoveContainer" containerID="047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366" Feb 19 03:23:51.326025 master-0 kubenswrapper[33867]: E0219 03:23:51.325975 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366\": container with ID starting with 047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366 not found: ID does not exist" containerID="047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366" Feb 19 03:23:51.326128 master-0 kubenswrapper[33867]: I0219 03:23:51.326023 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366"} err="failed to get container status \"047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366\": rpc error: code = NotFound desc = could not find container \"047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366\": container with ID starting with 047c725d5cc8e3c64314f516538214f52d69457a2ae59326ddd3a80d36fde366 not found: ID does not exist" Feb 19 03:23:51.423549 master-0 kubenswrapper[33867]: I0219 03:23:51.423429 33867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:51.423879 master-0 kubenswrapper[33867]: I0219 03:23:51.423619 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:51.423879 master-0 kubenswrapper[33867]: I0219 03:23:51.423780 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.423879 master-0 kubenswrapper[33867]: I0219 03:23:51.423865 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.423892 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.423987 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.424026 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.424050 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.424063 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.424108 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.424151 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.424160 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.424222 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.424246 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.424242 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.424249 master-0 kubenswrapper[33867]: I0219 03:23:51.424247 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424368 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424485 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:23:51.425313 master-0 
kubenswrapper[33867]: I0219 03:23:51.424541 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424539 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424613 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424672 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424708 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424739 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424769 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424792 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424802 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:23:51.425313 master-0 
kubenswrapper[33867]: I0219 03:23:51.424843 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424877 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424882 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424930 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424965 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424968 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.424994 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.425029 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.425043 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.425313 master-0 
kubenswrapper[33867]: I0219 03:23:51.425039 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.425076 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.425313 master-0 kubenswrapper[33867]: I0219 03:23:51.425030 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.426697 master-0 kubenswrapper[33867]: I0219 03:23:51.425439 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:23:51.543544 master-0 kubenswrapper[33867]: E0219 03:23:51.543340 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Feb 19 03:23:51.577113 master-0 kubenswrapper[33867]: I0219 03:23:51.575697 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.577113 master-0 kubenswrapper[33867]: I0219 03:23:51.575810 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 19 03:23:51.579679 master-0 kubenswrapper[33867]: I0219 03:23:51.579614 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:51.581993 master-0 kubenswrapper[33867]: I0219 03:23:51.581904 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:23:51.586876 master-0 kubenswrapper[33867]: I0219 03:23:51.586740 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:51.587032 master-0 kubenswrapper[33867]: I0219 03:23:51.586891 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:51.587032 master-0 kubenswrapper[33867]: I0219 03:23:51.586941 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:51.587032 master-0 kubenswrapper[33867]: I0219 03:23:51.586986 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:51.593045 master-0 kubenswrapper[33867]: I0219 03:23:51.592969 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:23:51.597415 master-0 kubenswrapper[33867]: I0219 03:23:51.594826 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:23:51.600080 master-0 kubenswrapper[33867]: I0219 03:23:51.600009 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:23:51.649409 master-0 kubenswrapper[33867]: W0219 03:23:51.648629 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c4f5d60772fa42f26e9c219bffa62b9.slice/crio-23971735fc83affff48fff9dd078366df2f158fdccc6f71677751d6451c6bc54 WatchSource:0}: Error finding container 23971735fc83affff48fff9dd078366df2f158fdccc6f71677751d6451c6bc54: Status 404 returned error can't find the container with id 23971735fc83affff48fff9dd078366df2f158fdccc6f71677751d6451c6bc54 Feb 19 03:23:51.649409 master-0 kubenswrapper[33867]: W0219 03:23:51.649316 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb342c942d3d92fd08ed7cf68fafb94c.slice/crio-f98e8bee1c4db6bc3d03a9ac56e34086b72adcdb2ac3108d7f331f89ffe28645 WatchSource:0}: Error finding container f98e8bee1c4db6bc3d03a9ac56e34086b72adcdb2ac3108d7f331f89ffe28645: Status 404 returned error can't find the container with id f98e8bee1c4db6bc3d03a9ac56e34086b72adcdb2ac3108d7f331f89ffe28645 Feb 19 03:23:51.898065 master-0 kubenswrapper[33867]: I0219 03:23:51.898021 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:52.507804 master-0 kubenswrapper[33867]: I0219 03:23:52.507710 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/4.log" Feb 19 03:23:52.509540 master-0 kubenswrapper[33867]: I0219 03:23:52.509083 33867 generic.go:334] "Generic (PLEG): container finished" podID="3edc7410-417a-4e55-9276-ac271fd52297" 
containerID="19a1f28fd6894887f54799dd664b3153aee457ecc2c8aab80e319ccb1bdbf8a2" exitCode=255 Feb 19 03:23:52.509540 master-0 kubenswrapper[33867]: I0219 03:23:52.509252 33867 scope.go:117] "RemoveContainer" containerID="6a5db57d3cdfa9709ab008271a7de8b76cb4f5beeb18f426e1c635fff0d68431" Feb 19 03:23:52.512939 master-0 kubenswrapper[33867]: I0219 03:23:52.512893 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/5.log" Feb 19 03:23:52.513636 master-0 kubenswrapper[33867]: I0219 03:23:52.513579 33867 generic.go:334] "Generic (PLEG): container finished" podID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerID="028495f0aee3ee18d27a6df8f41026b434ac3c3d335cf96c6e2e88bafe3758a1" exitCode=255 Feb 19 03:23:52.516459 master-0 kubenswrapper[33867]: I0219 03:23:52.516387 33867 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="edc67203236c02efd3daf4962a8ba633ec7c743b0e9ac65a2ab3310f74106f74" exitCode=0 Feb 19 03:23:52.516624 master-0 kubenswrapper[33867]: I0219 03:23:52.516544 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"edc67203236c02efd3daf4962a8ba633ec7c743b0e9ac65a2ab3310f74106f74"} Feb 19 03:23:52.516815 master-0 kubenswrapper[33867]: I0219 03:23:52.516747 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"f98e8bee1c4db6bc3d03a9ac56e34086b72adcdb2ac3108d7f331f89ffe28645"} Feb 19 03:23:52.517082 master-0 kubenswrapper[33867]: I0219 03:23:52.517024 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:52.518689 master-0 kubenswrapper[33867]: I0219 03:23:52.518609 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76"} Feb 19 03:23:52.518807 master-0 kubenswrapper[33867]: I0219 03:23:52.518689 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"23971735fc83affff48fff9dd078366df2f158fdccc6f71677751d6451c6bc54"} Feb 19 03:23:52.518807 master-0 kubenswrapper[33867]: I0219 03:23:52.518790 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:52.521201 master-0 kubenswrapper[33867]: I0219 03:23:52.521130 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-866f9_2b9d54aa-5f71-4a82-8e71-401ed3083a13/kube-storage-version-migrator-operator/3.log" Feb 19 03:23:52.521861 master-0 kubenswrapper[33867]: I0219 03:23:52.521802 33867 generic.go:334] "Generic (PLEG): container finished" podID="2b9d54aa-5f71-4a82-8e71-401ed3083a13" containerID="e103e135bf82f2eb93c3dbb2b40a81ffeb2314273026f2e9a0c0e8f111555646" exitCode=255 Feb 19 03:23:52.522413 master-0 kubenswrapper[33867]: I0219 03:23:52.522348 33867 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:52.522413 master-0 kubenswrapper[33867]: I0219 03:23:52.522409 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:52.522546 master-0 kubenswrapper[33867]: I0219 03:23:52.522427 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:52.522546 master-0 kubenswrapper[33867]: I0219 03:23:52.522426 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:52.522546 master-0 kubenswrapper[33867]: I0219 03:23:52.522479 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:52.522546 master-0 kubenswrapper[33867]: I0219 03:23:52.522498 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:52.524500 master-0 kubenswrapper[33867]: I0219 03:23:52.524434 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/4.log" Feb 19 03:23:52.524500 master-0 kubenswrapper[33867]: I0219 03:23:52.524486 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:52.525669 master-0 kubenswrapper[33867]: I0219 03:23:52.525597 33867 generic.go:334] "Generic (PLEG): container finished" podID="4714ef51-2d24-4938-8c58-80c1485a368b" containerID="987763106eeabe88cbdd191d01e6f39059ee96a02ef736bbdbea66f4d5635935" exitCode=255 Feb 19 03:23:52.525839 master-0 kubenswrapper[33867]: I0219 03:23:52.525807 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:52.526560 master-0 kubenswrapper[33867]: I0219 03:23:52.526509 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:52.526637 master-0 kubenswrapper[33867]: I0219 03:23:52.526569 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:52.526759 master-0 kubenswrapper[33867]: I0219 03:23:52.526696 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:52.529345 master-0 kubenswrapper[33867]: I0219 03:23:52.529218 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:52.529345 master-0 kubenswrapper[33867]: I0219 03:23:52.529308 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:52.529345 master-0 kubenswrapper[33867]: I0219 03:23:52.529332 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:52.534521 master-0 kubenswrapper[33867]: I0219 03:23:52.534165 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:52.534521 master-0 kubenswrapper[33867]: I0219 03:23:52.534449 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:52.534521 master-0 kubenswrapper[33867]: I0219 03:23:52.534469 33867 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:52.538846 master-0 kubenswrapper[33867]: I0219 03:23:52.538669 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:52.538846 master-0 kubenswrapper[33867]: I0219 03:23:52.538750 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:52.538846 master-0 kubenswrapper[33867]: I0219 03:23:52.538823 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:52.539384 master-0 kubenswrapper[33867]: I0219 03:23:52.538862 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:52.539384 master-0 kubenswrapper[33867]: I0219 03:23:52.538888 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:52.539384 master-0 kubenswrapper[33867]: I0219 03:23:52.538913 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:52.540322 master-0 kubenswrapper[33867]: I0219 03:23:52.540269 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:52.540322 master-0 kubenswrapper[33867]: I0219 03:23:52.540319 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:52.540618 master-0 kubenswrapper[33867]: I0219 03:23:52.540334 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:52.566470 master-0 kubenswrapper[33867]: I0219 03:23:52.565926 33867 scope.go:117] "RemoveContainer" containerID="c545cf58bc696341c026f65428a1c9e4ca4d12c0673d4c492e30d1f60df08f53" Feb 19 03:23:52.614102 master-0 kubenswrapper[33867]: I0219 03:23:52.614046 33867 scope.go:117] "RemoveContainer" containerID="84d662dd4fdd1383970ef08334843ef9932b238a72433235bfdec45dfc41643e" Feb 19 03:23:52.663102 master-0 kubenswrapper[33867]: I0219 03:23:52.663052 33867 scope.go:117] "RemoveContainer" containerID="49ac40cd49fe9f544ea18cf9db242f3b1d372ceb484dc7cc80e9da742f93d130" Feb 19 03:23:52.898520 master-0 kubenswrapper[33867]: I0219 03:23:52.898448 33867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 19 03:23:53.391844 master-0 kubenswrapper[33867]: I0219 03:23:53.391757 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:53.402335 master-0 kubenswrapper[33867]: I0219 03:23:53.401519 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:53.402335 master-0 kubenswrapper[33867]: I0219 03:23:53.401562 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:53.402335 master-0 kubenswrapper[33867]: I0219 03:23:53.401571 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:53.402335 master-0 kubenswrapper[33867]: I0219 03:23:53.401593 33867 kubelet_node_status.go:76] "Attempting to 
register node" node="master-0" Feb 19 03:23:53.535560 master-0 kubenswrapper[33867]: I0219 03:23:53.535389 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee"} Feb 19 03:23:53.535560 master-0 kubenswrapper[33867]: I0219 03:23:53.535442 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377"} Feb 19 03:23:53.535560 master-0 kubenswrapper[33867]: I0219 03:23:53.535452 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db"} Feb 19 03:23:53.537838 master-0 kubenswrapper[33867]: I0219 03:23:53.537762 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-866f9_2b9d54aa-5f71-4a82-8e71-401ed3083a13/kube-storage-version-migrator-operator/3.log" Feb 19 03:23:53.539950 master-0 kubenswrapper[33867]: I0219 03:23:53.539925 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/4.log" Feb 19 03:23:53.541486 master-0 kubenswrapper[33867]: I0219 03:23:53.541446 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/4.log" Feb 19 03:23:53.543338 master-0 kubenswrapper[33867]: I0219 03:23:53.543045 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/5.log" Feb 19 03:23:53.543338 master-0 kubenswrapper[33867]: I0219 03:23:53.543201 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:53.554721 master-0 kubenswrapper[33867]: I0219 03:23:53.554688 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:53.554909 master-0 kubenswrapper[33867]: I0219 03:23:53.554728 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:53.554909 master-0 kubenswrapper[33867]: I0219 03:23:53.554737 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:54.555638 master-0 kubenswrapper[33867]: I0219 03:23:54.555594 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"70954c340299c804b789bfe49633d92c735fcd40dd36aa25a4a746ddc654f917"} Feb 19 03:23:54.555638 master-0 kubenswrapper[33867]: I0219 03:23:54.555638 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90"} Feb 19 03:23:54.556402 master-0 kubenswrapper[33867]: I0219 03:23:54.555777 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:54.559347 master-0 kubenswrapper[33867]: I0219 03:23:54.559303 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:54.559418 master-0 kubenswrapper[33867]: I0219 03:23:54.559379 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:54.559418 master-0 kubenswrapper[33867]: I0219 03:23:54.559393 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:55.564543 master-0 kubenswrapper[33867]: I0219 03:23:55.564467 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:55.565110 master-0 kubenswrapper[33867]: I0219 03:23:55.564644 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:55.567902 master-0 kubenswrapper[33867]: I0219 03:23:55.567859 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:55.568016 master-0 kubenswrapper[33867]: I0219 03:23:55.567911 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:55.568016 master-0 kubenswrapper[33867]: I0219 03:23:55.567929 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:56.571900 master-0 kubenswrapper[33867]: I0219 03:23:56.571805 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:56.575294 master-0 kubenswrapper[33867]: I0219 03:23:56.575190 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:56.575294 master-0 kubenswrapper[33867]: I0219 03:23:56.575282 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:56.575294 master-0 kubenswrapper[33867]: I0219 03:23:56.575296 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:56.580035 master-0 kubenswrapper[33867]: I0219 03:23:56.579988 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:56.580164 master-0 kubenswrapper[33867]: I0219 03:23:56.580027 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:56.587875 master-0 kubenswrapper[33867]: I0219 03:23:56.587834 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:57.085542 master-0 kubenswrapper[33867]: E0219 03:23:57.080736 33867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 19 03:23:57.580120 master-0 kubenswrapper[33867]: I0219 03:23:57.580035 33867 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Feb 19 03:23:57.584042 master-0 kubenswrapper[33867]: I0219 03:23:57.583996 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:57.584223 master-0 kubenswrapper[33867]: I0219 03:23:57.584059 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:57.584223 master-0 kubenswrapper[33867]: I0219 03:23:57.584081 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:57.585485 master-0 kubenswrapper[33867]: I0219 03:23:57.585431 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:23:58.585701 master-0 kubenswrapper[33867]: I0219 03:23:58.585660 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:58.588471 master-0 kubenswrapper[33867]: I0219 03:23:58.588447 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:58.588550 master-0 kubenswrapper[33867]: I0219 03:23:58.588487 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:58.588550 master-0 kubenswrapper[33867]: I0219 03:23:58.588502 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:58.852705 master-0 kubenswrapper[33867]: I0219 03:23:58.852537 33867 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 03:23:59.021780 master-0 kubenswrapper[33867]: E0219 03:23:59.021684 33867 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Feb 19 03:23:59.022037 master-0 kubenswrapper[33867]: I0219 03:23:59.021924 33867 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 03:23:59.595173 master-0 kubenswrapper[33867]: I0219 03:23:59.595106 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:23:59.600195 master-0 kubenswrapper[33867]: I0219 03:23:59.600131 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:23:59.600195 master-0 kubenswrapper[33867]: I0219 03:23:59.600189 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:23:59.600420 master-0 kubenswrapper[33867]: I0219 03:23:59.600208 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:23:59.766572 master-0 kubenswrapper[33867]: I0219 03:23:59.766502 33867 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 03:24:00.208318 master-0 kubenswrapper[33867]: I0219 03:24:00.208179 33867 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 03:24:00.342802 master-0 kubenswrapper[33867]: I0219 03:24:00.342732 33867 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 03:24:00.605884 master-0 kubenswrapper[33867]: I0219 
03:24:00.605754 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/0.log" Feb 19 03:24:00.608441 master-0 kubenswrapper[33867]: I0219 03:24:00.608407 33867 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="70954c340299c804b789bfe49633d92c735fcd40dd36aa25a4a746ddc654f917" exitCode=255 Feb 19 03:24:00.608576 master-0 kubenswrapper[33867]: I0219 03:24:00.608451 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"70954c340299c804b789bfe49633d92c735fcd40dd36aa25a4a746ddc654f917"} Feb 19 03:24:00.623695 master-0 kubenswrapper[33867]: I0219 03:24:00.623608 33867 scope.go:117] "RemoveContainer" containerID="70954c340299c804b789bfe49633d92c735fcd40dd36aa25a4a746ddc654f917" Feb 19 03:24:00.913179 master-0 kubenswrapper[33867]: I0219 03:24:00.913100 33867 apiserver.go:52] "Watching apiserver" Feb 19 03:24:00.938788 master-0 kubenswrapper[33867]: I0219 03:24:00.938725 33867 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 03:24:00.942348 master-0 kubenswrapper[33867]: I0219 03:24:00.942218 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc","openshift-kube-controller-manager/installer-2-master-0","openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q","openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk","openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l","openshift-marketplace/redhat-marketplace-nqnbc","openshift-marketplace/redhat-operators-v9c2b","openshift-network-operator/iptables-alerter-kvvll","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-network-operator/network-operator-7d7db75979-jbztp","openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v","openshift-dns-operator/dns-operator-8c7d49845-jlnvw","openshift-etcd/installer-1-master-0","openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt","openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq","openshift-monitoring/kube-state-metrics-59584d565f-m7mdb","openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5","openshift-machine-config-operator/machine-config-server-m64bf","openshift-monitoring/prometheus-operator-754bc4d665-tkbxr","openshift-cluster-node-tuning-operator/tuned-4jl4c","openshift-insights/insights-operator-59b498fcfb-2dvkr","openshift-kube-apiserver/installer-3-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc","openshift-multus/multus-admission-controller-5
f54bf67d4-9zr4h","openshift-service-ca/service-ca-576b4d78bd-92gqk","openshift-ingress-operator/ingress-operator-6569778c84-qcd49","openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj","openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l","openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q","openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874","openshift-etcd/installer-2-master-0","openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h","openshift-ovn-kubernetes/ovnkube-node-pw7dx","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc","assisted-installer/assisted-installer-controller-tw8v2","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb","openshift-dns/dns-default-clndn","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8","openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92","openshift-multus/multus-4lzdj","openshift-dns/node-resolver-4qvfn","openshift-ingress/router-default-7b65dc9fcb-t6jnq","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9","openshift-monitoring/metrics-server-68d9f4c46b-mh59n","openshift-marketplace/marketplace-operator-6f5488b997-xxdh5","openshift-apiserver/apiserver-957b9456f-f5s8c","openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7","openshift-multus/network-metrics-daemon-hspwc","openshift-network-node-identity/network-node-identity-rm5jg","openshift-cluster-version/cluster-version-operator-57476485-qjgq9","openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh","openshift-kube-controller-manager/installer-2-retry-1-master-0","openshift-multus/multus-additional-cni-plugins-bs5qd","openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p","openshift-kube-apiserver/installer-1-master-0","openshift-monitoring/node-exporter-8g26m","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/installer-5-master-0","openshift-machine-config-operator/machine-config-daemon-j2wxd","openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g","openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn","openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd","openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7","openshift-ingress-canary/ingress-canary-bbwkg","openshift-network-diagnostics/network-check-target-c6c25","openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7","openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9","openshift-kube-scheduler/installer-4-retry-1-master-0","openshift-kube-scheduler/installer-5-retry-1-master-0","openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g","openshift-marketplace/community-operators-nrcnx","openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t","openshift-kube-controller-manager/installer-3-master-0","openshift-machine-api/machine-api-operator-5c7cf458b4-prbs
7","openshift-marketplace/certified-operators-5t9dd"] Feb 19 03:24:00.942716 master-0 kubenswrapper[33867]: I0219 03:24:00.942664 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-tw8v2" Feb 19 03:24:00.945994 master-0 kubenswrapper[33867]: I0219 03:24:00.945927 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 03:24:00.946547 master-0 kubenswrapper[33867]: I0219 03:24:00.946506 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 03:24:00.950940 master-0 kubenswrapper[33867]: I0219 03:24:00.950889 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 19 03:24:00.956170 master-0 kubenswrapper[33867]: I0219 03:24:00.951532 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 03:24:00.956170 master-0 kubenswrapper[33867]: I0219 03:24:00.952154 33867 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="009e56e8-3ee1-4208-b099-958ed2bf1c90" Feb 19 03:24:00.956170 master-0 kubenswrapper[33867]: I0219 03:24:00.952320 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 03:24:00.956170 master-0 kubenswrapper[33867]: I0219 03:24:00.954072 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 19 03:24:00.960918 master-0 kubenswrapper[33867]: I0219 03:24:00.958188 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 19 03:24:00.960918 master-0 kubenswrapper[33867]: I0219 03:24:00.958362 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 03:24:00.960918 master-0 kubenswrapper[33867]: I0219 03:24:00.958461 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 03:24:00.960918 master-0 kubenswrapper[33867]: I0219 03:24:00.958752 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 03:24:00.963247 master-0 kubenswrapper[33867]: I0219 03:24:00.963150 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 19 03:24:00.963423 master-0 kubenswrapper[33867]: I0219 03:24:00.963303 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 19 03:24:00.964757 master-0 kubenswrapper[33867]: I0219 03:24:00.963488 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 19 03:24:00.965107 master-0 kubenswrapper[33867]: I0219 03:24:00.965078 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 03:24:00.970334 master-0 kubenswrapper[33867]: I0219 03:24:00.970302 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 19 03:24:00.970428 master-0 kubenswrapper[33867]: I0219 03:24:00.970395 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 03:24:00.970428 master-0 kubenswrapper[33867]: I0219 03:24:00.970422 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.970823 master-0 kubenswrapper[33867]: I0219 03:24:00.970791 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.970865 master-0 kubenswrapper[33867]: I0219 03:24:00.970824 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.970934 master-0 kubenswrapper[33867]: I0219 03:24:00.970893 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 03:24:00.971042 master-0 kubenswrapper[33867]: I0219 03:24:00.971017 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.971079 master-0 kubenswrapper[33867]: I0219 03:24:00.971037 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 19 03:24:00.971774 master-0 kubenswrapper[33867]: I0219 03:24:00.971138 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 03:24:00.971774 master-0 kubenswrapper[33867]: I0219 03:24:00.971190 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 19 03:24:00.971774 master-0 kubenswrapper[33867]: I0219 03:24:00.971356 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 03:24:00.971774 master-0 kubenswrapper[33867]: I0219 03:24:00.971041 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.971774 master-0 kubenswrapper[33867]: I0219 03:24:00.971539 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 03:24:00.971774 master-0 kubenswrapper[33867]: I0219 03:24:00.971702 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 03:24:00.971774 master-0 kubenswrapper[33867]: I0219 03:24:00.971733 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.972088 master-0 kubenswrapper[33867]: I0219 03:24:00.971791 33867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 03:24:00.972088 master-0 kubenswrapper[33867]: I0219 03:24:00.971952 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 03:24:00.972088 master-0 kubenswrapper[33867]: I0219 03:24:00.971968 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 03:24:00.972088 master-0 kubenswrapper[33867]: I0219 03:24:00.972030 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 03:24:00.972247 master-0 kubenswrapper[33867]: I0219 03:24:00.972120 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 03:24:00.972247 master-0 kubenswrapper[33867]: I0219 03:24:00.972142 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:24:00.972247 master-0 kubenswrapper[33867]: I0219 03:24:00.972152 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 03:24:00.972247 master-0 kubenswrapper[33867]: I0219 03:24:00.972238 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.972684 master-0 kubenswrapper[33867]: I0219 03:24:00.972637 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 03:24:00.973525 master-0 kubenswrapper[33867]: I0219 03:24:00.973498 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 03:24:00.973614 master-0 kubenswrapper[33867]: I0219 03:24:00.973594 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.973689 master-0 kubenswrapper[33867]: I0219 03:24:00.973665 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 19 03:24:00.973776 master-0 kubenswrapper[33867]: I0219 03:24:00.973719 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 03:24:00.973823 master-0 kubenswrapper[33867]: I0219 03:24:00.973776 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.973823 master-0 kubenswrapper[33867]: I0219 03:24:00.973808 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 03:24:00.973887 master-0 kubenswrapper[33867]: I0219 03:24:00.973870 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 03:24:00.973973 master-0 kubenswrapper[33867]: I0219 03:24:00.973947 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 03:24:00.974090 master-0 kubenswrapper[33867]: I0219 03:24:00.974069 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 03:24:00.974213 master-0 
kubenswrapper[33867]: I0219 03:24:00.974195 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 03:24:00.974344 master-0 kubenswrapper[33867]: I0219 03:24:00.974327 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.974379 master-0 kubenswrapper[33867]: I0219 03:24:00.974370 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 03:24:00.974412 master-0 kubenswrapper[33867]: I0219 03:24:00.974398 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 19 03:24:00.974473 master-0 kubenswrapper[33867]: I0219 03:24:00.974458 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 03:24:00.974504 master-0 kubenswrapper[33867]: I0219 03:24:00.973734 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 19 03:24:00.974569 master-0 kubenswrapper[33867]: I0219 03:24:00.974551 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.974607 master-0 kubenswrapper[33867]: I0219 03:24:00.974597 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 03:24:00.974705 master-0 kubenswrapper[33867]: I0219 03:24:00.974688 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 03:24:00.974747 master-0 kubenswrapper[33867]: I0219 03:24:00.974732 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 03:24:00.974963 master-0 kubenswrapper[33867]: I0219 03:24:00.974937 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 19 03:24:00.975009 master-0 kubenswrapper[33867]: I0219 03:24:00.974973 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 03:24:00.975009 master-0 kubenswrapper[33867]: I0219 03:24:00.975002 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:24:00.975097 master-0 kubenswrapper[33867]: I0219 03:24:00.975077 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 03:24:00.975173 master-0 kubenswrapper[33867]: I0219 03:24:00.975156 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 03:24:00.975243 master-0 kubenswrapper[33867]: I0219 03:24:00.975229 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 03:24:00.975355 master-0 kubenswrapper[33867]: I0219 03:24:00.975338 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 03:24:00.976296 master-0 kubenswrapper[33867]: I0219 03:24:00.976272 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt" Feb 19 03:24:00.976532 master-0 kubenswrapper[33867]: I0219 03:24:00.976506 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 03:24:00.976569 master-0 kubenswrapper[33867]: I0219 03:24:00.976499 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 03:24:00.976569 master-0 kubenswrapper[33867]: I0219 03:24:00.976554 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 03:24:00.976738 master-0 kubenswrapper[33867]: I0219 03:24:00.976708 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.976814 master-0 kubenswrapper[33867]: I0219 03:24:00.976779 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 03:24:00.977027 master-0 kubenswrapper[33867]: I0219 03:24:00.977004 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 03:24:00.977244 master-0 kubenswrapper[33867]: I0219 03:24:00.977213 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 03:24:00.977312 master-0 kubenswrapper[33867]: I0219 03:24:00.977259 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 19 03:24:00.977437 master-0 kubenswrapper[33867]: I0219 03:24:00.977417 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 19 03:24:00.977835 master-0 kubenswrapper[33867]: I0219 03:24:00.977809 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 03:24:00.978189 master-0 kubenswrapper[33867]: I0219 03:24:00.978164 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 03:24:00.978887 master-0 kubenswrapper[33867]: I0219 03:24:00.978864 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 03:24:00.978958 master-0 kubenswrapper[33867]: I0219 03:24:00.978937 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 03:24:00.979109 master-0 kubenswrapper[33867]: I0219 03:24:00.979091 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 03:24:00.979203 master-0 kubenswrapper[33867]: I0219 03:24:00.979186 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 03:24:00.979241 master-0 kubenswrapper[33867]: I0219 03:24:00.979209 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 03:24:00.979241 master-0 kubenswrapper[33867]: I0219 03:24:00.979222 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 03:24:00.979369 
master-0 kubenswrapper[33867]: I0219 03:24:00.979354 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 03:24:00.979484 master-0 kubenswrapper[33867]: I0219 03:24:00.979463 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 03:24:00.979545 master-0 kubenswrapper[33867]: I0219 03:24:00.979499 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 03:24:00.979593 master-0 kubenswrapper[33867]: I0219 03:24:00.979544 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 03:24:00.979593 master-0 kubenswrapper[33867]: I0219 03:24:00.979589 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 19 03:24:00.979684 master-0 kubenswrapper[33867]: I0219 03:24:00.979507 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 03:24:00.979727 master-0 kubenswrapper[33867]: I0219 03:24:00.979704 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 03:24:00.979798 master-0 kubenswrapper[33867]: I0219 03:24:00.979773 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 03:24:00.979937 master-0 kubenswrapper[33867]: I0219 03:24:00.979875 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 03:24:00.979986 master-0 kubenswrapper[33867]: I0219 03:24:00.979942 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 03:24:00.980072 master-0 kubenswrapper[33867]: I0219 03:24:00.980053 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 03:24:00.980149 master-0 kubenswrapper[33867]: I0219 03:24:00.980122 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 03:24:00.980467 master-0 kubenswrapper[33867]: I0219 03:24:00.980375 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 03:24:00.980536 master-0 kubenswrapper[33867]: I0219 03:24:00.980467 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 03:24:00.980582 master-0 kubenswrapper[33867]: I0219 03:24:00.980553 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 03:24:00.980811 master-0 kubenswrapper[33867]: I0219 03:24:00.980786 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 03:24:00.980934 master-0 kubenswrapper[33867]: I0219 03:24:00.980912 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 03:24:00.981012 master-0 kubenswrapper[33867]: I0219 03:24:00.980988 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 
03:24:00.981235 master-0 kubenswrapper[33867]: I0219 03:24:00.981219 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 03:24:00.981962 master-0 kubenswrapper[33867]: I0219 03:24:00.981922 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 19 03:24:00.982173 master-0 kubenswrapper[33867]: I0219 03:24:00.982124 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 19 03:24:00.982380 master-0 kubenswrapper[33867]: I0219 03:24:00.982190 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 19 03:24:00.982380 master-0 kubenswrapper[33867]: I0219 03:24:00.982216 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 19 03:24:00.983473 master-0 kubenswrapper[33867]: I0219 03:24:00.983444 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:24:00.990548 master-0 kubenswrapper[33867]: I0219 03:24:00.989591 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 03:24:00.991491 master-0 kubenswrapper[33867]: I0219 03:24:00.991465 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 03:24:00.992479 master-0 kubenswrapper[33867]: I0219 03:24:00.992428 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 03:24:00.993956 master-0 kubenswrapper[33867]: I0219 03:24:00.993905 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 03:24:00.994245 master-0 kubenswrapper[33867]: I0219 03:24:00.994211 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 19 03:24:00.999178 master-0 kubenswrapper[33867]: I0219 03:24:00.999121 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 03:24:01.000239 master-0 kubenswrapper[33867]: I0219 03:24:01.000072 33867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 19 03:24:01.004301 master-0 kubenswrapper[33867]: I0219 03:24:01.004273 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 03:24:01.021635 master-0 kubenswrapper[33867]: I0219 03:24:01.021578 33867 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 19 03:24:01.026379 master-0 kubenswrapper[33867]: I0219 03:24:01.026345 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 03:24:01.037080 master-0 kubenswrapper[33867]: I0219 03:24:01.036998 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-images\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " 
pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:24:01.037289 master-0 kubenswrapper[33867]: I0219 03:24:01.037208 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.037368 master-0 kubenswrapper[33867]: I0219 03:24:01.037319 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.037532 master-0 kubenswrapper[33867]: I0219 03:24:01.037362 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-daemon-config\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.037532 master-0 kubenswrapper[33867]: I0219 03:24:01.037453 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-serving-cert\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.037532 master-0 kubenswrapper[33867]: I0219 03:24:01.037480 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-service-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:01.037810 master-0 kubenswrapper[33867]: I0219 03:24:01.037606 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlhnq\" (UniqueName: \"kubernetes.io/projected/6acd115e-71e1-4a50-8892-fc6ea2927fec-kube-api-access-dlhnq\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:01.037810 master-0 kubenswrapper[33867]: I0219 03:24:01.037647 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:24:01.037810 master-0 kubenswrapper[33867]: I0219 03:24:01.037662 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-daemon-config\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.037810 master-0 kubenswrapper[33867]: I0219 03:24:01.037667 33867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:24:01.037810 master-0 kubenswrapper[33867]: I0219 03:24:01.037748 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:24:01.037810 master-0 kubenswrapper[33867]: I0219 03:24:01.037779 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9ed390-3b62-4b81-8c03-0c579a4a686a-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:24:01.037810 master-0 kubenswrapper[33867]: I0219 03:24:01.037811 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:24:01.038031 master-0 kubenswrapper[33867]: I0219 03:24:01.037834 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:01.038031 master-0 kubenswrapper[33867]: I0219 03:24:01.037861 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-srv-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:24:01.038031 master-0 kubenswrapper[33867]: I0219 03:24:01.037861 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-host-etc-kube\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:24:01.038031 master-0 kubenswrapper[33867]: I0219 03:24:01.037959 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.038031 master-0 kubenswrapper[33867]: I0219 03:24:01.037987 33867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-node-bootstrap-token\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:24:01.038031 master-0 kubenswrapper[33867]: I0219 03:24:01.038011 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:01.038229 master-0 kubenswrapper[33867]: I0219 03:24:01.038042 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:24:01.038229 master-0 kubenswrapper[33867]: I0219 03:24:01.038043 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/ec677f3d-06c4-4cf4-9f24-69894b9a9118-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:01.038229 master-0 kubenswrapper[33867]: I0219 03:24:01.038098 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8f7d8fc8-c313-416f-b62b-b54db9944066-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:01.038229 master-0 kubenswrapper[33867]: I0219 03:24:01.038114 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/ec677f3d-06c4-4cf4-9f24-69894b9a9118-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:01.038229 master-0 kubenswrapper[33867]: I0219 03:24:01.038121 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:01.038229 master-0 kubenswrapper[33867]: I0219 03:24:01.038155 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-binary-copy\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.038229 master-0 kubenswrapper[33867]: I0219 03:24:01.038186 33867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c9ed390-3b62-4b81-8c03-0c579a4a686a-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:24:01.038229 master-0 kubenswrapper[33867]: I0219 03:24:01.038213 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dlvj\" (UniqueName: \"kubernetes.io/projected/80c48134-cb22-4cf9-b076-ce39af2f4113-kube-api-access-2dlvj\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038242 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038277 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038288 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038298 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxfd9\" (UniqueName: \"kubernetes.io/projected/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-kube-api-access-qxfd9\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038326 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76470062-ab83-47ed-a669-deeb71996548-service-ca-bundle\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038351 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-os-release\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038376 33867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq48l\" (UniqueName: \"kubernetes.io/projected/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-kube-api-access-bq48l\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038406 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-root\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038450 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9ed390-3b62-4b81-8c03-0c579a4a686a-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038465 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038486 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038506 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-webhook-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:01.038527 master-0 kubenswrapper[33867]: I0219 03:24:01.038220 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8f7d8fc8-c313-416f-b62b-b54db9944066-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:01.038974 master-0 kubenswrapper[33867]: I0219 03:24:01.038565 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-binary-copy\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.038974 master-0 kubenswrapper[33867]: I0219 03:24:01.038629 33867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:24:01.038974 master-0 kubenswrapper[33867]: I0219 03:24:01.038692 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdxnk\" (UniqueName: \"kubernetes.io/projected/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-kube-api-access-vdxnk\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:24:01.038974 master-0 kubenswrapper[33867]: I0219 03:24:01.038715 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-bin\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.038974 master-0 kubenswrapper[33867]: I0219 03:24:01.038746 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn9d8\" (UniqueName: \"kubernetes.io/projected/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-kube-api-access-rn9d8\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:01.038974 master-0 kubenswrapper[33867]: I0219 03:24:01.038809 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7b137033-0db2-46c9-a526-f8234345e883-rootfs\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:24:01.038974 master-0 kubenswrapper[33867]: I0219 03:24:01.038834 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-whereabouts-configmap\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.038974 master-0 kubenswrapper[33867]: I0219 03:24:01.038861 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33bb562f-84e7-4fcb-b008-416c09a5ecf0-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:24:01.038974 master-0 kubenswrapper[33867]: I0219 03:24:01.038922 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.038995 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039027 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039047 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-whereabouts-configmap\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039060 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrz8r\" (UniqueName: \"kubernetes.io/projected/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-kube-api-access-rrz8r\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039086 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grhdv\" (UniqueName: \"kubernetes.io/projected/58c6f5a2-c0a8-4636-a057-cedbe0151579-kube-api-access-grhdv\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039110 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-multus\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039142 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2btm8\" (UniqueName: \"kubernetes.io/projected/ca82f2e9-884e-49d1-9863-e87212d01edc-kube-api-access-2btm8\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039167 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039188 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039207 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039226 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qghmn\" (UniqueName: \"kubernetes.io/projected/858a717b-a44e-4b8d-9974-7451a89cf104-kube-api-access-qghmn\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039246 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p8qd\" (UniqueName: \"kubernetes.io/projected/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-kube-api-access-8p8qd\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039299 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9ed390-3b62-4b81-8c03-0c579a4a686a-config\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039318 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039336 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-proxy-tls\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:24:01.039371 master-0 kubenswrapper[33867]: I0219 03:24:01.039379 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039407 33867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-client\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039427 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-script-lib\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039445 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-serving-ca\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039462 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039480 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76css\" (UniqueName: \"kubernetes.io/projected/b283bd8e-3339-4701-ae3c-f009e498b7d4-kube-api-access-76css\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039499 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e81865-21fa-4e35-a870-738c13ac5b70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039515 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039533 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9d54aa-5f71-4a82-8e71-401ed3083a13-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039549 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-key\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039571 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/80c48134-cb22-4cf9-b076-ce39af2f4113-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039579 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj4rq\" (UniqueName: \"kubernetes.io/projected/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-kube-api-access-mj4rq\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039659 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfd6c\" (UniqueName: \"kubernetes.io/projected/76529f4c-70b1-4fcb-ba48-ae929228f9fc-kube-api-access-wfd6c\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039727 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039761 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pwp5\" (UniqueName: \"kubernetes.io/projected/78702d1c-b5ab-4e00-92da-cb2513a72024-kube-api-access-5pwp5\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039778 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039796 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-default-certificate\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039818 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" 
(UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039834 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txq5k\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-kube-api-access-txq5k\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:24:01.039931 master-0 kubenswrapper[33867]: I0219 03:24:01.039667 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9ed390-3b62-4b81-8c03-0c579a4a686a-config\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.039957 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-script-lib\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.039975 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-bin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040027 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-bound-sa-token\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040045 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-client\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040068 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9d54aa-5f71-4a82-8e71-401ed3083a13-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040088 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-catalog-content\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " 
pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040144 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040167 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040182 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-catalog-content\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040186 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-encryption-config\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040224 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/67624ad2-babb-4b0e-9599-99325c286b22-hosts-file\") pod \"node-resolver-4qvfn\" (UID: \"67624ad2-babb-4b0e-9599-99325c286b22\") " pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040272 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-dir\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040295 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4714ef51-2d24-4938-8c58-80c1485a368b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040305 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040314 
33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-metrics-client-ca\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040354 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-tmpfs\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040378 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb2v2\" (UniqueName: \"kubernetes.io/projected/af5828ea-090f-4c8f-90e6-c4e405e69ec5-kube-api-access-tb2v2\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040396 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040417 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040501 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-key\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040643 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-tmpfs\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:01.040821 master-0 kubenswrapper[33867]: I0219 03:24:01.040669 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4714ef51-2d24-4938-8c58-80c1485a368b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041005 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041081 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041044 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff96ce8-6427-4a42-afa6-8b8bc778f094-metrics-tls\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041105 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-catalog-content\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041147 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041166 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b137033-0db2-46c9-a526-f8234345e883-proxy-tls\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041184 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64lwt\" (UniqueName: \"kubernetes.io/projected/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-kube-api-access-64lwt\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041217 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041219 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-netd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041273 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-catalog-content\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041297 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041316 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-sys\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041335 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzxmv\" (UniqueName: \"kubernetes.io/projected/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-kube-api-access-jzxmv\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041380 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:01.041476 master-0 kubenswrapper[33867]: I0219 03:24:01.041453 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnq2j\" (UniqueName: \"kubernetes.io/projected/06898300-c6e2-4d64-9ebf-d20f4338cccc-kube-api-access-rnq2j\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041558 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-serving-cert\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041646 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqt9k\" (UniqueName: \"kubernetes.io/projected/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-kube-api-access-nqt9k\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041680 33867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-apiservice-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041707 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041733 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-serving-cert\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041756 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041782 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/af2be4f9-f632-4a72-8f39-c96954403edc-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041810 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-env-overrides\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041835 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw2vc\" (UniqueName: \"kubernetes.io/projected/dabc3c9b-ed58-4fd4-8735-65d504fa299a-kube-api-access-vw2vc\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041862 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041889 33867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-tmp\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041915 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edc7410-417a-4e55-9276-ac271fd52297-config\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041939 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-metrics-certs\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041974 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.041982 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxsxw\" (UniqueName: \"kubernetes.io/projected/255784ad-b52a-4c5c-ad15-278865ee2ccb-kube-api-access-hxsxw\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.042012 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn4dg\" (UniqueName: \"kubernetes.io/projected/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-kube-api-access-pn4dg\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:01.042044 master-0 kubenswrapper[33867]: I0219 03:24:01.042044 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/decd8c56-e0f0-4119-917f-56652c8f8372-host-slash\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:24:01.042557 master-0 kubenswrapper[33867]: I0219 03:24:01.042071 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-netns\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.042557 master-0 kubenswrapper[33867]: I0219 03:24:01.042099 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/43560ec3-3526-40e1-aeb7-e3137a99171d-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:24:01.042557 master-0 kubenswrapper[33867]: I0219 03:24:01.042125 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-trusted-ca-bundle\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:01.042557 master-0 kubenswrapper[33867]: I0219 03:24:01.042148 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kwbk\" (UniqueName: \"kubernetes.io/projected/33bb562f-84e7-4fcb-b008-416c09a5ecf0-kube-api-access-5kwbk\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:24:01.042557 master-0 kubenswrapper[33867]: I0219 03:24:01.042171 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-env-overrides\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:24:01.042557 master-0 kubenswrapper[33867]: I0219 03:24:01.042171 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysconfig\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.042557 master-0 kubenswrapper[33867]: I0219 03:24:01.042209 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-config\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:01.042557 master-0 kubenswrapper[33867]: I0219 03:24:01.042228 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-stats-auth\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:01.042557 master-0 kubenswrapper[33867]: I0219 03:24:01.042358 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-tmp\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.042557 master-0 kubenswrapper[33867]: I0219 03:24:01.042416 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-host\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 
03:24:01.042557 master-0 kubenswrapper[33867]: I0219 03:24:01.042545 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edc7410-417a-4e55-9276-ac271fd52297-config\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:24:01.042909 master-0 kubenswrapper[33867]: I0219 03:24:01.042598 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7012676e-f35d-46e5-83e8-a63172dd076e-cache\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.042909 master-0 kubenswrapper[33867]: I0219 03:24:01.042620 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:24:01.042909 master-0 kubenswrapper[33867]: I0219 03:24:01.042622 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-config\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:01.042909 master-0 kubenswrapper[33867]: I0219 03:24:01.042715 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0664d88f-f697-4182-93cd-f208ff6f3ac2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-xbcf5\" (UID: \"0664d88f-f697-4182-93cd-f208ff6f3ac2\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:24:01.042909 master-0 kubenswrapper[33867]: I0219 03:24:01.042751 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-var-lib-kubelet\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.042909 master-0 kubenswrapper[33867]: I0219 03:24:01.042776 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:24:01.042909 master-0 kubenswrapper[33867]: I0219 03:24:01.042780 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7012676e-f35d-46e5-83e8-a63172dd076e-cache\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.042909 master-0 kubenswrapper[33867]: I0219 03:24:01.042862 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-var-lock\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:24:01.042909 master-0 kubenswrapper[33867]: I0219 03:24:01.042893 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-tuned\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.043221 master-0 kubenswrapper[33867]: I0219 03:24:01.042940 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:01.043221 master-0 kubenswrapper[33867]: I0219 03:24:01.043022 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-tuned\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.043221 master-0 kubenswrapper[33867]: I0219 03:24:01.043065 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clddw\" (UniqueName: \"kubernetes.io/projected/7b137033-0db2-46c9-a526-f8234345e883-kube-api-access-clddw\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:24:01.043221 master-0 kubenswrapper[33867]: I0219 03:24:01.043100 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61abb34a-08f0-4438-9a89-c712b2048878-kube-api-access\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:01.043221 master-0 kubenswrapper[33867]: I0219 03:24:01.043144 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrksf\" (UniqueName: \"kubernetes.io/projected/05c9cb4a-5249-4116-a2e5-caa7859e2075-kube-api-access-qrksf\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:24:01.043394 master-0 kubenswrapper[33867]: I0219 03:24:01.043241 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a59746bb-7d76-4fd7-8323-5b92be63afb9-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:24:01.043394 master-0 kubenswrapper[33867]: I0219 03:24:01.043297 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-utilities\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:24:01.043457 master-0 kubenswrapper[33867]: I0219 03:24:01.043402 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:24:01.043457 master-0 kubenswrapper[33867]: I0219 03:24:01.043426 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-netns\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.043457 master-0 kubenswrapper[33867]: I0219 03:24:01.043447 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-kubernetes\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.043552 master-0 kubenswrapper[33867]: I0219 03:24:01.043486 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-utilities\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:24:01.043552 master-0 kubenswrapper[33867]: I0219 03:24:01.043508 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99z6r\" (UniqueName: \"kubernetes.io/projected/0664d88f-f697-4182-93cd-f208ff6f3ac2-kube-api-access-99z6r\") pod \"control-plane-machine-set-operator-686847ff5f-xbcf5\" (UID: \"0664d88f-f697-4182-93cd-f208ff6f3ac2\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:24:01.043552 master-0 kubenswrapper[33867]: I0219 03:24:01.043513 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58c6f5a2-c0a8-4636-a057-cedbe0151579-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:01.043552 master-0 kubenswrapper[33867]: I0219 03:24:01.043528 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-socket-dir-parent\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.043552 master-0 kubenswrapper[33867]: I0219 03:24:01.043547 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-var-lock\") pod 
\"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:24:01.043552 master-0 kubenswrapper[33867]: I0219 03:24:01.043546 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a59746bb-7d76-4fd7-8323-5b92be63afb9-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:24:01.043729 master-0 kubenswrapper[33867]: I0219 03:24:01.043643 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-utilities\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:24:01.043729 master-0 kubenswrapper[33867]: I0219 03:24:01.043702 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:24:01.043807 master-0 kubenswrapper[33867]: I0219 03:24:01.043728 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b9d54aa-5f71-4a82-8e71-401ed3083a13-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:24:01.043807 master-0 kubenswrapper[33867]: I0219 03:24:01.043751 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq27v\" (UniqueName: \"kubernetes.io/projected/98ac5423-b231-44e5-9545-424d635ed6ee-kube-api-access-bq27v\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:24:01.043807 master-0 kubenswrapper[33867]: I0219 03:24:01.043806 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6j8c\" (UniqueName: \"kubernetes.io/projected/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-kube-api-access-k6j8c\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:01.043911 master-0 kubenswrapper[33867]: I0219 03:24:01.043826 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:24:01.043911 master-0 kubenswrapper[33867]: I0219 03:24:01.043830 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:01.043977 master-0 kubenswrapper[33867]: I0219 03:24:01.043952 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/546cf649-8e0d-4c8a-a197-412db42e36b6-utilities\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:24:01.044007 master-0 kubenswrapper[33867]: I0219 03:24:01.043981 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-config\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:24:01.044007 master-0 kubenswrapper[33867]: I0219 03:24:01.043997 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15a571c6-7c47-4b57-bc5b-e46544a114c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:24:01.044007 master-0 kubenswrapper[33867]: I0219 03:24:01.044008 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cnibin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.044112 master-0 kubenswrapper[33867]: I0219 03:24:01.044074 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-var-lib-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.044112 master-0 kubenswrapper[33867]: I0219 03:24:01.044096 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-modprobe-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.044203 master-0 kubenswrapper[33867]: I0219 03:24:01.044115 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpdqx\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-kube-api-access-cpdqx\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:24:01.044238 master-0 kubenswrapper[33867]: I0219 03:24:01.044222 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: 
\"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:01.044300 master-0 kubenswrapper[33867]: I0219 03:24:01.044243 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:01.044300 master-0 kubenswrapper[33867]: I0219 03:24:01.044284 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-client\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:01.044367 master-0 kubenswrapper[33867]: I0219 03:24:01.044307 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjwbx\" (UniqueName: \"kubernetes.io/projected/2b9d54aa-5f71-4a82-8e71-401ed3083a13-kube-api-access-vjwbx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:24:01.044367 master-0 kubenswrapper[33867]: I0219 03:24:01.044330 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.044367 master-0 kubenswrapper[33867]: I0219 03:24:01.044336 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b9d54aa-5f71-4a82-8e71-401ed3083a13-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:24:01.044367 master-0 kubenswrapper[33867]: I0219 03:24:01.044349 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:24:01.044482 master-0 kubenswrapper[33867]: I0219 03:24:01.044431 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv24m\" (UniqueName: \"kubernetes.io/projected/a52be87c-e707-4269-96da-537708d52b64-kube-api-access-kv24m\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:24:01.044482 master-0 kubenswrapper[33867]: I0219 03:24:01.044466 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-encryption-config\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " 
pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:01.044542 master-0 kubenswrapper[33867]: I0219 03:24:01.044497 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:01.044542 master-0 kubenswrapper[33867]: I0219 03:24:01.044527 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b137033-0db2-46c9-a526-f8234345e883-mcd-auth-proxy-config\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044557 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044587 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044637 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-env-overrides\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044671 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c58162-a0ba-40f4-8894-38f17dc2fb6d-config-volume\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044699 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-config\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044643 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a52be87c-e707-4269-96da-537708d52b64-webhook-cert\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044808 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044862 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-serving-cert\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044895 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhlw\" (UniqueName: \"kubernetes.io/projected/1bab5125-f4d7-4940-891f-9bb6a2145fac-kube-api-access-7rhlw\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044926 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n9vm\" (UniqueName: \"kubernetes.io/projected/c50a2aec-7ed0-4114-8b25-19579fe931cb-kube-api-access-7n9vm\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044937 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044952 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.044980 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-k8s-cni-cncf-io\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045009 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-kubelet\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045038 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-client\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:01.045933 master-0 
kubenswrapper[33867]: I0219 03:24:01.045041 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc87d\" (UniqueName: \"kubernetes.io/projected/59cea4cb-6374-49b6-97b3-d8a19cc1860f-kube-api-access-tc87d\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045090 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045124 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-slash\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045148 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045180 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-images\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045207 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3edc7410-417a-4e55-9276-ac271fd52297-serving-cert\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045233 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-os-release\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045341 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-env-overrides\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045535 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-serving-cert\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045542 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovnkube-config\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045530 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3edc7410-417a-4e55-9276-ac271fd52297-serving-cert\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045558 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh4lz\" (UniqueName: \"kubernetes.io/projected/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-api-access-vh4lz\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045591 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78j6f\" (UniqueName: \"kubernetes.io/projected/92804daf-1fd0-4008-afff-4f9bc362990b-kube-api-access-78j6f\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045650 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/858a717b-a44e-4b8d-9974-7451a89cf104-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045728 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhmpd\" (UniqueName: \"kubernetes.io/projected/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2-kube-api-access-dhmpd\") pod \"csi-snapshot-controller-operator-6fb4df594f-mtqxj\" (UID: \"d6fae256-6a2e-45e7-8f2f-d471f46ad3b2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045759 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-kubelet\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045761 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/6ae2cbe0-aa0a-4f26-994b-660fb962d995-metrics-certs\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045800 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm2wm\" (UniqueName: \"kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-kube-api-access-lm2wm\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045831 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045878 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cnibin\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045904 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045950 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c9cb4a-5249-4116-a2e5-caa7859e2075-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:24:01.045933 master-0 kubenswrapper[33867]: I0219 03:24:01.045979 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046022 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-srv-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046039 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzpth\" (UniqueName: 
\"kubernetes.io/projected/3edc7410-417a-4e55-9276-ac271fd52297-kube-api-access-vzpth\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046070 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046109 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cni-binary-copy\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046133 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9zww\" (UniqueName: \"kubernetes.io/projected/a676c43c-4e0a-4826-86c1-288260611b09-kube-api-access-p9zww\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046171 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c9cb4a-5249-4116-a2e5-caa7859e2075-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046174 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-systemd\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046212 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxvxh\" (UniqueName: \"kubernetes.io/projected/c8f325fb-0075-4a18-ba7e-669ab19bc91a-kube-api-access-jxvxh\") pod \"csi-snapshot-controller-6847bb4785-6trsd\" (UID: \"c8f325fb-0075-4a18-ba7e-669ab19bc91a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046296 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046312 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a59746bb-7d76-4fd7-8323-5b92be63afb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046317 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tgff\" (UniqueName: \"kubernetes.io/projected/e2e81865-21fa-4e35-a870-738c13ac5b70-kube-api-access-5tgff\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046401 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cni-binary-copy\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046439 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkz72\" (UniqueName: \"kubernetes.io/projected/75c58162-a0ba-40f4-8894-38f17dc2fb6d-kube-api-access-gkz72\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046531 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046564 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-system-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046583 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046686 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqsbq\" (UniqueName: \"kubernetes.io/projected/67f4e002-26fb-41e3-abdb-f4928b6c561f-kube-api-access-wqsbq\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046722 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhhg6\" (UniqueName: 
\"kubernetes.io/projected/af2be4f9-f632-4a72-8f39-c96954403edc-kube-api-access-rhhg6\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046783 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-etc-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046859 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046896 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-image-import-ca\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046924 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-lib-modules\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.046965 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.047007 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/decd8c56-e0f0-4119-917f-56652c8f8372-iptables-alerter-script\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.047045 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tqm5\" (UniqueName: \"kubernetes.io/projected/decd8c56-e0f0-4119-917f-56652c8f8372-kube-api-access-8tqm5\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.047074 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.047100 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.047171 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/decd8c56-e0f0-4119-917f-56652c8f8372-iptables-alerter-script\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.047194 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c50a2aec-7ed0-4114-8b25-19579fe931cb-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:24:01.047224 master-0 kubenswrapper[33867]: I0219 03:24:01.047220 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-ssl-certs\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047271 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-certs\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047301 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-utilities\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047324 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047351 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crz8x\" (UniqueName: \"kubernetes.io/projected/15a571c6-7c47-4b57-bc5b-e46544a114c8-kube-api-access-crz8x\") pod 
\"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047375 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-conf\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047270 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac5423-b231-44e5-9545-424d635ed6ee-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047394 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-utilities\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047397 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-images\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047451 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk722\" (UniqueName: \"kubernetes.io/projected/7be6f9b5-fe27-4df5-b933-63bbb12f680c-kube-api-access-mk722\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047477 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-ovn\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047500 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46zzd\" (UniqueName: \"kubernetes.io/projected/6ae2cbe0-aa0a-4f26-994b-660fb962d995-kube-api-access-46zzd\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047521 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " 
pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047498 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047544 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/ed2b5ced-d986-4622-9e0a-d39363629408-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-4ms92\" (UID: \"ed2b5ced-d986-4622-9e0a-d39363629408\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047668 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5wsp\" (UniqueName: \"kubernetes.io/projected/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-kube-api-access-r5wsp\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047690 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-node-pullsecrets\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047707 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/7012676e-f35d-46e5-83e8-a63172dd076e-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047727 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047905 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047931 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-tls\") pod 
\"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047953 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4z8t\" (UniqueName: \"kubernetes.io/projected/43560ec3-3526-40e1-aeb7-e3137a99171d-kube-api-access-j4z8t\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047974 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-node-log\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.047994 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-trusted-ca-bundle\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.048017 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.048038 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-textfile\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.048057 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-ovnkube-identity-cm\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:24:01.048061 master-0 kubenswrapper[33867]: I0219 03:24:01.048076 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff96ce8-6427-4a42-afa6-8b8bc778f094-trusted-ca\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048095 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6zxf\" (UniqueName: \"kubernetes.io/projected/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-kube-api-access-h6zxf\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " 
pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048116 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1bab5125-f4d7-4940-891f-9bb6a2145fac-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048134 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-system-cni-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048153 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-conf-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048175 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-sys\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048195 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048214 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61abb34a-08f0-4438-9a89-c712b2048878-service-ca\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048219 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-textfile\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048232 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-serving-cert\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: 
I0219 03:24:01.048265 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048285 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048286 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/a52be87c-e707-4269-96da-537708d52b64-ovnkube-identity-cm\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048303 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048300 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67f4e002-26fb-41e3-abdb-f4928b6c561f-metrics-tls\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048338 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-log-socket\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048361 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-snapshots\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048380 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfgk\" (UniqueName: \"kubernetes.io/projected/a71c6d42-5ff9-4e96-900c-6e2166bbc9e3-kube-api-access-zrfgk\") pod \"network-check-source-58fb6744f5-mh46g\" (UID: \"a71c6d42-5ff9-4e96-900c-6e2166bbc9e3\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048405 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-audit-log\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048422 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff96ce8-6427-4a42-afa6-8b8bc778f094-trusted-ca\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048427 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61abb34a-08f0-4438-9a89-c712b2048878-serving-cert\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048479 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048483 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-snapshots\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048408 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048505 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-serving-cert\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048511 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-audit-log\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048504 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4714ef51-2d24-4938-8c58-80c1485a368b-config\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: 
\"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048455 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048547 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-audit\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048596 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048632 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-audit-dir\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048693 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048709 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4714ef51-2d24-4938-8c58-80c1485a368b-config\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048743 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c9cb4a-5249-4116-a2e5-caa7859e2075-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048776 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmwjp\" (UniqueName: \"kubernetes.io/projected/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-kube-api-access-tmwjp\") pod \"packageserver-7d77f88776-s4jxm\" 
(UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048796 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkfcl\" (UniqueName: \"kubernetes.io/projected/18b29e37-cda9-41a8-a910-3d8f74be3cf3-kube-api-access-bkfcl\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048870 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-config\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:24:01.048872 master-0 kubenswrapper[33867]: I0219 03:24:01.048903 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cm45\" (UniqueName: \"kubernetes.io/projected/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-kube-api-access-8cm45\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.048933 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bab5125-f4d7-4940-891f-9bb6a2145fac-proxy-tls\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.048961 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c9cb4a-5249-4116-a2e5-caa7859e2075-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.048966 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049001 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-metrics-tls\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049032 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: 
\"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049061 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-serving-ca\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049090 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-hostroot\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049115 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/80c48134-cb22-4cf9-b076-ce39af2f4113-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049145 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htmbc\" (UniqueName: \"kubernetes.io/projected/546cf649-8e0d-4c8a-a197-412db42e36b6-kube-api-access-htmbc\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049202 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-metrics-tls\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049236 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbffz\" (UniqueName: \"kubernetes.io/projected/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-kube-api-access-gbffz\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049294 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049338 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/af2be4f9-f632-4a72-8f39-c96954403edc-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049357 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/80c48134-cb22-4cf9-b076-ce39af2f4113-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049377 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049406 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-run\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049564 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-systemd-units\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049592 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b283bd8e-3339-4701-ae3c-f009e498b7d4-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049604 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-systemd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049636 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-cabundle\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049664 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-catalog-content\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049747 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76529f4c-70b1-4fcb-ba48-ae929228f9fc-catalog-content\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049784 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4714ef51-2d24-4938-8c58-80c1485a368b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049812 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/494087b2-b532-4c62-89d5-b88a152fa5db-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-dnfs9\" (UID: \"494087b2-b532-4c62-89d5-b88a152fa5db\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049869 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/18b29e37-cda9-41a8-a910-3d8f74be3cf3-signing-cabundle\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049901 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4hzx\" (UniqueName: \"kubernetes.io/projected/494087b2-b532-4c62-89d5-b88a152fa5db-kube-api-access-z4hzx\") pod \"cluster-storage-operator-f94476f49-dnfs9\" (UID: \"494087b2-b532-4c62-89d5-b88a152fa5db\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049932 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msl9t\" (UniqueName: \"kubernetes.io/projected/67624ad2-babb-4b0e-9599-99325c286b22-kube-api-access-msl9t\") pod \"node-resolver-4qvfn\" (UID: \"67624ad2-babb-4b0e-9599-99325c286b22\") " pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049956 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkm2l\" (UniqueName: \"kubernetes.io/projected/c4ed0c32-c13f-4c72-b83f-9af19b2950a3-kube-api-access-rkm2l\") pod \"migrator-5c85bff57-85d6g\" (UID: \"c4ed0c32-c13f-4c72-b83f-9af19b2950a3\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.049975 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.050035 33867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-config\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:24:01.050032 master-0 kubenswrapper[33867]: I0219 03:24:01.050056 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-catalog-content\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050134 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050156 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-multus-certs\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050179 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-config\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050203 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050229 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-serving-cert\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050277 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dkxh\" (UniqueName: \"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-kube-api-access-9dkxh\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050311 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-policies\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050278 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabc3c9b-ed58-4fd4-8735-65d504fa299a-catalog-content\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050337 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-etcd-ca\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050337 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-config\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050445 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-etc-kubernetes\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050484 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ba0c261-497c-4236-8f14-98ce5c16af59-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050496 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-serving-cert\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050507 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-894cz\" (UniqueName: \"kubernetes.io/projected/c569676a-51dd-418c-87a5-719c18fe4c95-kube-api-access-894cz\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050518 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-config\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050485 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-config\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050530 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-utilities\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050598 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca82f2e9-884e-49d1-9863-e87212d01edc-utilities\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050666 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75c58162-a0ba-40f4-8894-38f17dc2fb6d-metrics-tls\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050689 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kube-api-access\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050707 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-wtmp\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050728 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050747 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovn-node-metrics-cert\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050768 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" 
(UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050811 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050843 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9hn\" (UniqueName: \"kubernetes.io/projected/76470062-ab83-47ed-a669-deeb71996548-kube-api-access-bj9hn\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050871 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15a571c6-7c47-4b57-bc5b-e46544a114c8-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:24:01.051128 master-0 kubenswrapper[33867]: I0219 03:24:01.050996 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-ovn-node-metrics-cert\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.065574 master-0 kubenswrapper[33867]: I0219 03:24:01.065224 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 03:24:01.086416 master-0 kubenswrapper[33867]: I0219 03:24:01.086346 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 03:24:01.104936 master-0 kubenswrapper[33867]: I0219 03:24:01.104868 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 03:24:01.124895 master-0 kubenswrapper[33867]: I0219 03:24:01.124784 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 03:24:01.127206 master-0 kubenswrapper[33867]: I0219 03:24:01.127169 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-image-import-ca\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.145551 master-0 kubenswrapper[33867]: I0219 03:24:01.145503 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 03:24:01.153395 master-0 kubenswrapper[33867]: I0219 03:24:01.153313 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-client\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.153623 master-0 kubenswrapper[33867]: I0219 03:24:01.153588 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-bin\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.153678 master-0 kubenswrapper[33867]: I0219 03:24:01.153624 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7b137033-0db2-46c9-a526-f8234345e883-rootfs\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:24:01.153678 master-0 kubenswrapper[33867]: I0219 03:24:01.153656 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.153782 master-0 kubenswrapper[33867]: I0219 03:24:01.153691 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-bin\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.153782 master-0 kubenswrapper[33867]: I0219 03:24:01.153734 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.153782 master-0 kubenswrapper[33867]: I0219 03:24:01.153735 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7b137033-0db2-46c9-a526-f8234345e883-rootfs\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:24:01.153782 master-0 kubenswrapper[33867]: I0219 03:24:01.153770 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-multus\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.154012 master-0 kubenswrapper[33867]: I0219 03:24:01.153828 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-multus\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.154012 master-0 kubenswrapper[33867]: I0219 03:24:01.153890 33867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:24:01.154012 master-0 kubenswrapper[33867]: I0219 03:24:01.153915 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.154012 master-0 kubenswrapper[33867]: I0219 03:24:01.153975 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:24:01.154323 master-0 kubenswrapper[33867]: I0219 03:24:01.154050 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.154323 master-0 kubenswrapper[33867]: I0219 03:24:01.154063 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:24:01.154323 master-0 kubenswrapper[33867]: I0219 03:24:01.154088 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:24:01.154323 master-0 kubenswrapper[33867]: I0219 03:24:01.154161 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-bin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.154323 master-0 kubenswrapper[33867]: I0219 03:24:01.154185 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/67624ad2-babb-4b0e-9599-99325c286b22-hosts-file\") pod \"node-resolver-4qvfn\" (UID: \"67624ad2-babb-4b0e-9599-99325c286b22\") " pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:24:01.154323 master-0 kubenswrapper[33867]: I0219 03:24:01.154240 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-cni-bin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.154704 master-0 kubenswrapper[33867]: I0219 03:24:01.154336 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-dir\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:01.154704 master-0 kubenswrapper[33867]: I0219 03:24:01.154344 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/67624ad2-babb-4b0e-9599-99325c286b22-hosts-file\") pod \"node-resolver-4qvfn\" (UID: \"67624ad2-babb-4b0e-9599-99325c286b22\") " pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:24:01.154704 master-0 kubenswrapper[33867]: I0219 03:24:01.154387 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-dir\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:01.154704 master-0 kubenswrapper[33867]: I0219 03:24:01.154485 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.154704 master-0 kubenswrapper[33867]: I0219 03:24:01.154596 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.154704 master-0 kubenswrapper[33867]: I0219 03:24:01.154603 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-netd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.154704 master-0 kubenswrapper[33867]: I0219 03:24:01.154646 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-sys\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.155072 master-0 kubenswrapper[33867]: I0219 03:24:01.154726 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-sys\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.155072 master-0 kubenswrapper[33867]: I0219 03:24:01.154790 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:01.155072 master-0 kubenswrapper[33867]: I0219 03:24:01.154805 
33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-cni-netd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.155072 master-0 kubenswrapper[33867]: I0219 03:24:01.154843 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:01.155072 master-0 kubenswrapper[33867]: I0219 03:24:01.154882 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/af2be4f9-f632-4a72-8f39-c96954403edc-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:01.155072 master-0 kubenswrapper[33867]: I0219 03:24:01.154910 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/af2be4f9-f632-4a72-8f39-c96954403edc-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:01.155072 master-0 kubenswrapper[33867]: I0219 03:24:01.154975 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-netns\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.155072 master-0 kubenswrapper[33867]: I0219 03:24:01.155045 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/decd8c56-e0f0-4119-917f-56652c8f8372-host-slash\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:24:01.155072 master-0 kubenswrapper[33867]: I0219 03:24:01.155061 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-netns\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155105 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysconfig\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155133 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-var-lib-kubelet\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155157 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-host\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155158 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/decd8c56-e0f0-4119-917f-56652c8f8372-host-slash\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155205 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysconfig\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155214 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-host\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155216 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-var-lib-kubelet\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155297 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-var-lock\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155333 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-var-lock\") pod \"installer-3-master-0\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155364 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155388 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155395 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-netns\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155425 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-kubernetes\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155447 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-run-netns\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155460 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-socket-dir-parent\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.155471 master-0 kubenswrapper[33867]: I0219 03:24:01.155473 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-kubernetes\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155489 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-modprobe-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155499 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-socket-dir-parent\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155533 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155588 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155607 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cnibin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155634 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-var-lib-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155668 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-modprobe-d\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155705 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-cnibin\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155746 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155771 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-var-lib-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155816 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-kubelet\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155865 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-k8s-cni-cncf-io\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155879 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155886 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-var-lib-kubelet\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155895 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155937 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155953 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-slash\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155976 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-os-release\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.155976 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-k8s-cni-cncf-io\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.156004 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-slash\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.156047 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-kubelet\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156072 master-0 kubenswrapper[33867]: I0219 03:24:01.156062 33867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-os-release\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156093 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cnibin\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156113 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-kubelet\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156276 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-systemd\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156305 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156336 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-systemd\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156338 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-cnibin\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156381 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-system-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156422 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-etc-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156842 master-0 
kubenswrapper[33867]: I0219 03:24:01.156454 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8f7d8fc8-c313-416f-b62b-b54db9944066-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156465 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-etc-openvswitch\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156505 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-system-cni-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156528 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156549 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-lib-modules\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156566 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156582 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-ssl-certs\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156614 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-ovn\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156630 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: 
\"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-conf\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156674 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-ovn\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156685 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/61abb34a-08f0-4438-9a89-c712b2048878-etc-ssl-certs\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156708 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-lib-modules\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156773 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-etc-sysctl-conf\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156782 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-node-pullsecrets\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156823 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-node-pullsecrets\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.156842 master-0 kubenswrapper[33867]: I0219 03:24:01.156862 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-node-log\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.156917 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-system-cni-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.156941 33867 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-node-log\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.156945 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-conf-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.156972 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-multus-conf-dir\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.156982 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-sys\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157008 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-system-cni-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157026 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-sys\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157063 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-log-socket\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157121 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-audit-dir\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157161 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-log-socket\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157246 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c569676a-51dd-418c-87a5-719c18fe4c95-audit-dir\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157284 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-hostroot\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157313 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-run\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157374 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-hostroot\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157384 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-systemd-units\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157408 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-systemd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157427 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-systemd-units\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157412 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/78702d1c-b5ab-4e00-92da-cb2513a72024-run\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157455 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-run-systemd\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157512 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-multus-certs\") pod \"multus-4lzdj\" (UID: 
\"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157580 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-etc-kubernetes\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157619 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-host-run-multus-certs\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157638 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-wtmp\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157706 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157721 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-etc-kubernetes\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157758 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-host-etc-kube\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:24:01.157772 master-0 kubenswrapper[33867]: I0219 03:24:01.157786 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.159248 master-0 kubenswrapper[33867]: I0219 03:24:01.157797 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:01.159248 master-0 kubenswrapper[33867]: I0219 03:24:01.157830 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: 
\"kubernetes.io/host-path/7012676e-f35d-46e5-83e8-a63172dd076e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.159248 master-0 kubenswrapper[33867]: I0219 03:24:01.157839 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-host-etc-kube\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:24:01.159248 master-0 kubenswrapper[33867]: I0219 03:24:01.157786 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-wtmp\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.159248 master-0 kubenswrapper[33867]: I0219 03:24:01.157930 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-root\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.159248 master-0 kubenswrapper[33867]: I0219 03:24:01.157978 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-os-release\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.159248 master-0 kubenswrapper[33867]: I0219 03:24:01.158075 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-root\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:01.159248 master-0 kubenswrapper[33867]: I0219 03:24:01.158088 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-os-release\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:01.165789 master-0 kubenswrapper[33867]: I0219 03:24:01.165754 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 03:24:01.168682 master-0 kubenswrapper[33867]: I0219 03:24:01.168635 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-serving-cert\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.187088 master-0 kubenswrapper[33867]: I0219 03:24:01.187005 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 03:24:01.190854 master-0 kubenswrapper[33867]: I0219 03:24:01.190817 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/c569676a-51dd-418c-87a5-719c18fe4c95-encryption-config\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.210833 master-0 kubenswrapper[33867]: I0219 03:24:01.210582 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 03:24:01.218489 master-0 kubenswrapper[33867]: I0219 03:24:01.218433 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-trusted-ca-bundle\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.224702 master-0 kubenswrapper[33867]: I0219 03:24:01.224646 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 03:24:01.244869 master-0 kubenswrapper[33867]: I0219 03:24:01.244805 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 03:24:01.265139 master-0 kubenswrapper[33867]: I0219 03:24:01.265025 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 03:24:01.272391 master-0 kubenswrapper[33867]: I0219 03:24:01.272342 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-config\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.285324 master-0 kubenswrapper[33867]: I0219 03:24:01.285248 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 03:24:01.289246 master-0 kubenswrapper[33867]: I0219 03:24:01.289195 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-audit\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.304511 master-0 kubenswrapper[33867]: I0219 03:24:01.304469 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 03:24:01.311013 master-0 kubenswrapper[33867]: I0219 03:24:01.310969 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c569676a-51dd-418c-87a5-719c18fe4c95-etcd-serving-ca\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:01.346394 master-0 kubenswrapper[33867]: I0219 03:24:01.346331 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 03:24:01.351831 master-0 kubenswrapper[33867]: I0219 03:24:01.351789 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75c58162-a0ba-40f4-8894-38f17dc2fb6d-metrics-tls\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:24:01.365045 master-0 kubenswrapper[33867]: I0219 03:24:01.364985 33867 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 03:24:01.366369 master-0 kubenswrapper[33867]: I0219 03:24:01.366241 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c58162-a0ba-40f4-8894-38f17dc2fb6d-config-volume\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:24:01.385165 master-0 kubenswrapper[33867]: I0219 03:24:01.385039 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 03:24:01.404841 master-0 kubenswrapper[33867]: I0219 03:24:01.404770 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 03:24:01.410725 master-0 kubenswrapper[33867]: I0219 03:24:01.410684 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-default-certificate\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:01.425202 master-0 kubenswrapper[33867]: I0219 03:24:01.425135 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 03:24:01.432972 master-0 kubenswrapper[33867]: I0219 03:24:01.432922 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-stats-auth\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:01.444893 master-0 kubenswrapper[33867]: I0219 03:24:01.444802 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 03:24:01.453614 master-0 kubenswrapper[33867]: I0219 03:24:01.453548 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76470062-ab83-47ed-a669-deeb71996548-metrics-certs\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:01.464474 master-0 kubenswrapper[33867]: I0219 03:24:01.464419 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 03:24:01.469616 master-0 kubenswrapper[33867]: I0219 03:24:01.469563 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76470062-ab83-47ed-a669-deeb71996548-service-ca-bundle\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:01.485601 master-0 kubenswrapper[33867]: I0219 03:24:01.485545 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 03:24:01.506165 master-0 kubenswrapper[33867]: I0219 03:24:01.506106 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 03:24:01.524660 master-0 kubenswrapper[33867]: I0219 03:24:01.524607 33867 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 19 03:24:01.533298 master-0 kubenswrapper[33867]: I0219 03:24:01.528104 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/ed2b5ced-d986-4622-9e0a-d39363629408-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-4ms92\" (UID: \"ed2b5ced-d986-4622-9e0a-d39363629408\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" Feb 19 03:24:01.555399 master-0 kubenswrapper[33867]: I0219 03:24:01.555306 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 19 03:24:01.561360 master-0 kubenswrapper[33867]: I0219 03:24:01.561318 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33bb562f-84e7-4fcb-b008-416c09a5ecf0-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:24:01.564399 master-0 kubenswrapper[33867]: I0219 03:24:01.564357 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 03:24:01.569569 master-0 kubenswrapper[33867]: I0219 03:24:01.569524 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/59cea4cb-6374-49b6-97b3-d8a19cc1860f-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:24:01.586042 master-0 kubenswrapper[33867]: I0219 03:24:01.585629 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 03:24:01.604469 master-0 kubenswrapper[33867]: I0219 03:24:01.604370 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 03:24:01.616914 master-0 kubenswrapper[33867]: I0219 03:24:01.616871 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/0.log" Feb 19 03:24:01.624597 master-0 kubenswrapper[33867]: I0219 03:24:01.624540 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 03:24:01.652450 master-0 kubenswrapper[33867]: I0219 03:24:01.645413 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 03:24:01.666337 master-0 kubenswrapper[33867]: I0219 03:24:01.666238 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 19 03:24:01.669658 master-0 kubenswrapper[33867]: I0219 03:24:01.669589 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33bb562f-84e7-4fcb-b008-416c09a5ecf0-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:24:01.684520 master-0 
kubenswrapper[33867]: I0219 03:24:01.684475 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 19 03:24:01.689740 master-0 kubenswrapper[33867]: I0219 03:24:01.689684 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-config\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:24:01.706854 master-0 kubenswrapper[33867]: I0219 03:24:01.704359 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 19 03:24:01.712533 master-0 kubenswrapper[33867]: I0219 03:24:01.709247 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:24:01.725653 master-0 kubenswrapper[33867]: I0219 03:24:01.725590 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 19 03:24:01.733257 master-0 kubenswrapper[33867]: I0219 03:24:01.733195 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af5828ea-090f-4c8f-90e6-c4e405e69ec5-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:24:01.745764 master-0 kubenswrapper[33867]: I0219 03:24:01.745726 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 19 03:24:01.751685 master-0 kubenswrapper[33867]: I0219 03:24:01.750181 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af5828ea-090f-4c8f-90e6-c4e405e69ec5-images\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:24:01.765775 master-0 kubenswrapper[33867]: I0219 03:24:01.765718 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 03:24:01.770989 master-0 kubenswrapper[33867]: I0219 03:24:01.770933 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-proxy-tls\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:24:01.785741 master-0 kubenswrapper[33867]: I0219 03:24:01.785694 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 03:24:01.805683 master-0 kubenswrapper[33867]: I0219 03:24:01.805591 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 
03:24:01.809684 master-0 kubenswrapper[33867]: I0219 03:24:01.809628 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:24:01.809819 master-0 kubenswrapper[33867]: I0219 03:24:01.809713 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1bab5125-f4d7-4940-891f-9bb6a2145fac-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:24:01.816801 master-0 kubenswrapper[33867]: I0219 03:24:01.816737 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b137033-0db2-46c9-a526-f8234345e883-mcd-auth-proxy-config\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:24:01.824967 master-0 kubenswrapper[33867]: I0219 03:24:01.824876 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 03:24:01.845677 master-0 kubenswrapper[33867]: I0219 03:24:01.845538 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 03:24:01.848641 master-0 kubenswrapper[33867]: I0219 03:24:01.848603 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-images\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:24:01.871650 master-0 kubenswrapper[33867]: I0219 03:24:01.871587 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 19 03:24:01.876796 master-0 kubenswrapper[33867]: I0219 03:24:01.876756 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:24:01.894396 master-0 kubenswrapper[33867]: I0219 03:24:01.894321 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 19 03:24:01.906503 master-0 kubenswrapper[33867]: I0219 03:24:01.906335 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:01.906503 master-0 kubenswrapper[33867]: I0219 03:24:01.906448 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 03:24:01.913968 master-0 kubenswrapper[33867]: I0219 03:24:01.913908 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0664d88f-f697-4182-93cd-f208ff6f3ac2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-xbcf5\" (UID: \"0664d88f-f697-4182-93cd-f208ff6f3ac2\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:24:01.925672 master-0 kubenswrapper[33867]: I0219 03:24:01.925603 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 19 03:24:01.945118 master-0 kubenswrapper[33867]: I0219 03:24:01.945072 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 19 03:24:01.949074 master-0 kubenswrapper[33867]: I0219 03:24:01.949027 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/7012676e-f35d-46e5-83e8-a63172dd076e-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:01.969841 master-0 kubenswrapper[33867]: I0219 03:24:01.969778 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 19 03:24:01.975846 master-0 kubenswrapper[33867]: I0219 03:24:01.975788 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kubelet-dir\") pod \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " Feb 19 03:24:01.975931 master-0 kubenswrapper[33867]: I0219 03:24:01.975894 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-var-lock\") pod \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " Feb 19 03:24:01.975994 master-0 kubenswrapper[33867]: I0219 03:24:01.975946 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3fab5bbd-672c-4e18-9c1e-438e2360bc54" (UID: "3fab5bbd-672c-4e18-9c1e-438e2360bc54"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:24:01.976036 master-0 kubenswrapper[33867]: I0219 03:24:01.976009 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-var-lock" (OuterVolumeSpecName: "var-lock") pod "3fab5bbd-672c-4e18-9c1e-438e2360bc54" (UID: "3fab5bbd-672c-4e18-9c1e-438e2360bc54"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:24:01.979716 master-0 kubenswrapper[33867]: I0219 03:24:01.979659 33867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:24:01.979716 master-0 kubenswrapper[33867]: I0219 03:24:01.979713 33867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fab5bbd-672c-4e18-9c1e-438e2360bc54-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:24:01.983731 master-0 kubenswrapper[33867]: I0219 03:24:01.983680 33867 request.go:700] Waited for 1.019258122s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 19 03:24:01.985951 master-0 kubenswrapper[33867]: I0219 03:24:01.985903 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 03:24:02.005512 master-0 kubenswrapper[33867]: I0219 03:24:02.005427 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 19 03:24:02.012502 master-0 kubenswrapper[33867]: I0219 03:24:02.012429 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:02.025041 master-0 kubenswrapper[33867]: I0219 03:24:02.024981 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 19 03:24:02.040478 master-0 kubenswrapper[33867]: E0219 03:24:02.040396 33867 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.040478 master-0 kubenswrapper[33867]: E0219 03:24:02.040451 33867 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.040793 master-0 kubenswrapper[33867]: E0219 03:24:02.040454 33867 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.040793 master-0 kubenswrapper[33867]: E0219 03:24:02.040493 33867 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.040793 master-0 kubenswrapper[33867]: E0219 03:24:02.040569 33867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-webhook-cert podName:2576028c-40d8-4ef4-ba41-a5aff01f2ed3 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.54054183 +0000 UTC m=+47.837212441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-webhook-cert") pod "packageserver-7d77f88776-s4jxm" (UID: "2576028c-40d8-4ef4-ba41-a5aff01f2ed3") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.040793 master-0 kubenswrapper[33867]: E0219 03:24:02.040595 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-client podName:ace60ebd-e405-4fd2-96fe-7b16a9e11a07 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.540581801 +0000 UTC m=+47.837252412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-client") pod "apiserver-85f97c6ffb-qfcnk" (UID: "ace60ebd-e405-4fd2-96fe-7b16a9e11a07") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.040793 master-0 kubenswrapper[33867]: E0219 03:24:02.040474 33867 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.040865 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-service-ca-bundle podName:5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.540760656 +0000 UTC m=+47.837431287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-service-ca-bundle") pod "insights-operator-59b498fcfb-2dvkr" (UID: "5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.040483 33867 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.040963 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles podName:06898300-c6e2-4d64-9ebf-d20f4338cccc nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.540941971 +0000 UTC m=+47.837612592 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles") pod "controller-manager-7b74b5f84f-v8ldx" (UID: "06898300-c6e2-4d64-9ebf-d20f4338cccc") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.041031 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-trusted-ca-bundle podName:5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541011443 +0000 UTC m=+47.837682074 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-trusted-ca-bundle") pod "insights-operator-59b498fcfb-2dvkr" (UID: "5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.040500 33867 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.040491 33867 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.041105 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls podName:22370ccf-c383-4c1e-96f2-b5c61bb0cebe nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541047484 +0000 UTC m=+47.837718105 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls") pod "metrics-server-68d9f4c46b-mh59n" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.040493 33867 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.041135 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-node-bootstrap-token podName:7ca08cc0-cc64-4e13-9465-c9b0bfacb60d nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541122696 +0000 UTC m=+47.837793317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-node-bootstrap-token") pod "machine-config-server-m64bf" (UID: "7ca08cc0-cc64-4e13-9465-c9b0bfacb60d") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.040405 33867 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.041195 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-images podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541146657 +0000 UTC m=+47.837817278 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-images") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.041235 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2e81865-21fa-4e35-a870-738c13ac5b70-metrics-client-ca podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541220909 +0000 UTC m=+47.837891540 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/e2e81865-21fa-4e35-a870-738c13ac5b70-metrics-client-ca") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.040504 33867 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.041307 master-0 kubenswrapper[33867]: E0219 03:24:02.041309 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-custom-resource-state-configmap podName:ec677f3d-06c4-4cf4-9f24-69894b9a9118 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541293391 +0000 UTC m=+47.837964012 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-59584d565f-m7mdb" (UID: "ec677f3d-06c4-4cf4-9f24-69894b9a9118") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041358 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-images podName:af2be4f9-f632-4a72-8f39-c96954403edc nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541336122 +0000 UTC m=+47.838006753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-images") pod "cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" (UID: "af2be4f9-f632-4a72-8f39-c96954403edc") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.040515 33867 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041384 33867 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041422 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config podName:6acd115e-71e1-4a50-8892-fc6ea2927fec nodeName:}" failed. 
No retries permitted until 2026-02-19 03:24:02.541404554 +0000 UTC m=+47.838075185 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config") pod "route-controller-manager-895bf76d5-65vdk" (UID: "6acd115e-71e1-4a50-8892-fc6ea2927fec") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.040529 33867 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041469 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config podName:06898300-c6e2-4d64-9ebf-d20f4338cccc nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541460226 +0000 UTC m=+47.838130847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config") pod "controller-manager-7b74b5f84f-v8ldx" (UID: "06898300-c6e2-4d64-9ebf-d20f4338cccc") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.040890 33867 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041488 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-config podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541477546 +0000 UTC m=+47.838148167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-config") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041528 33867 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041531 33867 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041569 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-metrics-client-ca podName:8ec16b3a-5d5c-46fe-87f0-89f93a2775ed nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541530098 +0000 UTC m=+47.838200719 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-metrics-client-ca") pod "node-exporter-8g26m" (UID: "8ec16b3a-5d5c-46fe-87f0-89f93a2775ed") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041606 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert podName:a676c43c-4e0a-4826-86c1-288260611b09 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.54158845 +0000 UTC m=+47.838259081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert") pod "ingress-canary-bbwkg" (UID: "a676c43c-4e0a-4826-86c1-288260611b09") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041633 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b137033-0db2-46c9-a526-f8234345e883-proxy-tls podName:7b137033-0db2-46c9-a526-f8234345e883 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.54161916 +0000 UTC m=+47.838289791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/7b137033-0db2-46c9-a526-f8234345e883-proxy-tls") pod "machine-config-daemon-j2wxd" (UID: "7b137033-0db2-46c9-a526-f8234345e883") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041674 33867 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041724 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-serving-cert podName:ace60ebd-e405-4fd2-96fe-7b16a9e11a07 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541705903 +0000 UTC m=+47.838376534 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-serving-cert") pod "apiserver-85f97c6ffb-qfcnk" (UID: "ace60ebd-e405-4fd2-96fe-7b16a9e11a07") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041895 33867 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.041998 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-serving-cert podName:5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.541975511 +0000 UTC m=+47.838646162 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-serving-cert") pod "insights-operator-59b498fcfb-2dvkr" (UID: "5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.042075 master-0 kubenswrapper[33867]: E0219 03:24:02.042056 33867 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.043184 master-0 kubenswrapper[33867]: E0219 03:24:02.042116 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-apiservice-cert podName:2576028c-40d8-4ef4-ba41-a5aff01f2ed3 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.542101804 +0000 UTC m=+47.838772455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-apiservice-cert") pod "packageserver-7d77f88776-s4jxm" (UID: "2576028c-40d8-4ef4-ba41-a5aff01f2ed3") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.043184 master-0 kubenswrapper[33867]: E0219 03:24:02.042189 33867 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.043184 master-0 kubenswrapper[33867]: E0219 03:24:02.042293 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca podName:06898300-c6e2-4d64-9ebf-d20f4338cccc nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.542244818 +0000 UTC m=+47.838915469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca") pod "controller-manager-7b74b5f84f-v8ldx" (UID: "06898300-c6e2-4d64-9ebf-d20f4338cccc") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.043184 master-0 kubenswrapper[33867]: E0219 03:24:02.042465 33867 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.043184 master-0 kubenswrapper[33867]: E0219 03:24:02.042472 33867 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.043184 master-0 kubenswrapper[33867]: E0219 03:24:02.042501 33867 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.043184 master-0 kubenswrapper[33867]: E0219 03:24:02.042572 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles podName:22370ccf-c383-4c1e-96f2-b5c61bb0cebe nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.542535396 +0000 UTC m=+47.839206087 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles") pod "metrics-server-68d9f4c46b-mh59n" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.043184 master-0 kubenswrapper[33867]: E0219 03:24:02.042671 33867 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.043184 master-0 kubenswrapper[33867]: E0219 03:24:02.042677 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-trusted-ca-bundle podName:ace60ebd-e405-4fd2-96fe-7b16a9e11a07 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.542636119 +0000 UTC m=+47.839306770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-trusted-ca-bundle") pod "apiserver-85f97c6ffb-qfcnk" (UID: "ace60ebd-e405-4fd2-96fe-7b16a9e11a07") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.043755 master-0 kubenswrapper[33867]: E0219 03:24:02.043443 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls podName:8ec16b3a-5d5c-46fe-87f0-89f93a2775ed nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.543428071 +0000 UTC m=+47.840098762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls") pod "node-exporter-8g26m" (UID: "8ec16b3a-5d5c-46fe-87f0-89f93a2775ed") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.043755 master-0 kubenswrapper[33867]: E0219 03:24:02.043499 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43560ec3-3526-40e1-aeb7-e3137a99171d-metrics-client-ca podName:43560ec3-3526-40e1-aeb7-e3137a99171d nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.543462352 +0000 UTC m=+47.840133003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/43560ec3-3526-40e1-aeb7-e3137a99171d-metrics-client-ca") pod "openshift-state-metrics-6dbff8cb4c-4ccjj" (UID: "43560ec3-3526-40e1-aeb7-e3137a99171d") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.043755 master-0 kubenswrapper[33867]: E0219 03:24:02.043542 33867 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.043755 master-0 kubenswrapper[33867]: E0219 03:24:02.043652 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.543628157 +0000 UTC m=+47.840298808 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.044791 master-0 kubenswrapper[33867]: I0219 03:24:02.044761 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045552 33867 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045585 33867 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045624 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-encryption-config podName:ace60ebd-e405-4fd2-96fe-7b16a9e11a07 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.545605683 +0000 UTC m=+47.842276314 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-encryption-config") pod "apiserver-85f97c6ffb-qfcnk" (UID: "ace60ebd-e405-4fd2-96fe-7b16a9e11a07") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045641 33867 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045651 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs podName:22370ccf-c383-4c1e-96f2-b5c61bb0cebe nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.545637814 +0000 UTC m=+47.842308435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs") pod "metrics-server-68d9f4c46b-mh59n" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045680 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.545666505 +0000 UTC m=+47.842337136 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045713 33867 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045745 33867 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045757 33867 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045753 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-metrics-client-ca podName:ec677f3d-06c4-4cf4-9f24-69894b9a9118 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.545742447 +0000 UTC m=+47.842413178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-metrics-client-ca") pod "kube-state-metrics-59584d565f-m7mdb" (UID: "ec677f3d-06c4-4cf4-9f24-69894b9a9118") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045809 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-config podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.545798188 +0000 UTC m=+47.842468819 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-config") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045826 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-kube-rbac-proxy-config podName:e2e81865-21fa-4e35-a870-738c13ac5b70 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.545817969 +0000 UTC m=+47.842488600 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-754bc4d665-tkbxr" (UID: "e2e81865-21fa-4e35-a870-738c13ac5b70") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045893 33867 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045902 33867 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045927 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/858a717b-a44e-4b8d-9974-7451a89cf104-cco-trusted-ca podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.545917972 +0000 UTC m=+47.842588593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/858a717b-a44e-4b8d-9974-7451a89cf104-cco-trusted-ca") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.046182 master-0 kubenswrapper[33867]: E0219 03:24:02.045948 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs podName:7be6f9b5-fe27-4df5-b933-63bbb12f680c nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.545935792 +0000 UTC m=+47.842606523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs") pod "multus-admission-controller-5f54bf67d4-9zr4h" (UID: "7be6f9b5-fe27-4df5-b933-63bbb12f680c") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.047234 master-0 kubenswrapper[33867]: E0219 03:24:02.046453 33867 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.047234 master-0 kubenswrapper[33867]: E0219 03:24:02.046504 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert podName:6acd115e-71e1-4a50-8892-fc6ea2927fec nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.546490278 +0000 UTC m=+47.843160999 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert") pod "route-controller-manager-895bf76d5-65vdk" (UID: "6acd115e-71e1-4a50-8892-fc6ea2927fec") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.047696 master-0 kubenswrapper[33867]: E0219 03:24:02.047659 33867 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.047750 master-0 kubenswrapper[33867]: E0219 03:24:02.047697 33867 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.047750 master-0 kubenswrapper[33867]: E0219 03:24:02.047716 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-kube-rbac-proxy-config podName:8ec16b3a-5d5c-46fe-87f0-89f93a2775ed nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.547702632 +0000 UTC m=+47.844373333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-kube-rbac-proxy-config") pod "node-exporter-8g26m" (UID: "8ec16b3a-5d5c-46fe-87f0-89f93a2775ed") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.047750 master-0 kubenswrapper[33867]: E0219 03:24:02.047748 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-certs podName:7ca08cc0-cc64-4e13-9465-c9b0bfacb60d nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.547732803 +0000 UTC m=+47.844403504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-certs") pod "machine-config-server-m64bf" (UID: "7ca08cc0-cc64-4e13-9465-c9b0bfacb60d") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.047880 master-0 kubenswrapper[33867]: E0219 03:24:02.047759 33867 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.047880 master-0 kubenswrapper[33867]: E0219 03:24:02.047795 33867 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.047880 master-0 kubenswrapper[33867]: E0219 03:24:02.047799 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle podName:22370ccf-c383-4c1e-96f2-b5c61bb0cebe nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.547787944 +0000 UTC m=+47.844458655 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle") pod "metrics-server-68d9f4c46b-mh59n" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.047880 master-0 kubenswrapper[33867]: E0219 03:24:02.047842 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca podName:6acd115e-71e1-4a50-8892-fc6ea2927fec nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.547833186 +0000 UTC m=+47.844503807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca") pod "route-controller-manager-895bf76d5-65vdk" (UID: "6acd115e-71e1-4a50-8892-fc6ea2927fec") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.048981 master-0 kubenswrapper[33867]: E0219 03:24:02.048944 33867 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049057 master-0 kubenswrapper[33867]: E0219 03:24:02.049017 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert podName:858a717b-a44e-4b8d-9974-7451a89cf104 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.549004389 +0000 UTC m=+47.845675060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-p2hfn" (UID: "858a717b-a44e-4b8d-9974-7451a89cf104") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049057 master-0 kubenswrapper[33867]: I0219 03:24:02.048953 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61abb34a-08f0-4438-9a89-c712b2048878-service-ca\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:02.049147 master-0 kubenswrapper[33867]: E0219 03:24:02.049069 33867 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049147 master-0 kubenswrapper[33867]: E0219 03:24:02.049114 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-tls podName:43560ec3-3526-40e1-aeb7-e3137a99171d nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.549102671 +0000 UTC m=+47.845773372 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-tls") pod "openshift-state-metrics-6dbff8cb4c-4ccjj" (UID: "43560ec3-3526-40e1-aeb7-e3137a99171d") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049147 master-0 kubenswrapper[33867]: E0219 03:24:02.049112 33867 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.049147 master-0 kubenswrapper[33867]: E0219 03:24:02.049137 33867 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-b5da4s4ugo88o: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049344 master-0 kubenswrapper[33867]: E0219 03:24:02.049159 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-auth-proxy-config podName:af2be4f9-f632-4a72-8f39-c96954403edc nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.549147643 +0000 UTC m=+47.845818364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" (UID: "af2be4f9-f632-4a72-8f39-c96954403edc") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.049344 master-0 kubenswrapper[33867]: E0219 03:24:02.049176 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle podName:22370ccf-c383-4c1e-96f2-b5c61bb0cebe nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.549168043 +0000 UTC m=+47.845838664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle") pod "metrics-server-68d9f4c46b-mh59n" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049344 master-0 kubenswrapper[33867]: E0219 03:24:02.049193 33867 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.049344 master-0 kubenswrapper[33867]: E0219 03:24:02.049209 33867 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049344 master-0 kubenswrapper[33867]: E0219 03:24:02.049230 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-serving-ca podName:ace60ebd-e405-4fd2-96fe-7b16a9e11a07 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.549219275 +0000 UTC m=+47.845889956 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-serving-ca") pod "apiserver-85f97c6ffb-qfcnk" (UID: "ace60ebd-e405-4fd2-96fe-7b16a9e11a07") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.049344 master-0 kubenswrapper[33867]: E0219 03:24:02.049268 33867 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.049344 master-0 kubenswrapper[33867]: E0219 03:24:02.049276 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61abb34a-08f0-4438-9a89-c712b2048878-serving-cert podName:61abb34a-08f0-4438-9a89-c712b2048878 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.549242805 +0000 UTC m=+47.845913526 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/61abb34a-08f0-4438-9a89-c712b2048878-serving-cert") pod "cluster-version-operator-57476485-qjgq9" (UID: "61abb34a-08f0-4438-9a89-c712b2048878") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049344 master-0 kubenswrapper[33867]: E0219 03:24:02.049297 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-auth-proxy-config podName:92804daf-1fd0-4008-afff-4f9bc362990b nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.549289107 +0000 UTC m=+47.845959828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-auth-proxy-config") pod "machine-approver-7dd9c7d7b9-tlhpc" (UID: "92804daf-1fd0-4008-afff-4f9bc362990b") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.049344 master-0 kubenswrapper[33867]: E0219 03:24:02.049334 33867 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049344 master-0 kubenswrapper[33867]: E0219 03:24:02.049335 33867 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049928 master-0 kubenswrapper[33867]: E0219 03:24:02.049379 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bab5125-f4d7-4940-891f-9bb6a2145fac-proxy-tls podName:1bab5125-f4d7-4940-891f-9bb6a2145fac nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.549371089 +0000 UTC m=+47.846041710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/1bab5125-f4d7-4940-891f-9bb6a2145fac-proxy-tls") pod "machine-config-controller-54cb48566c-5t75l" (UID: "1bab5125-f4d7-4940-891f-9bb6a2145fac") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049928 master-0 kubenswrapper[33867]: E0219 03:24:02.049399 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-kube-rbac-proxy-config podName:ec677f3d-06c4-4cf4-9f24-69894b9a9118 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.54938993 +0000 UTC m=+47.846060551 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-59584d565f-m7mdb" (UID: "ec677f3d-06c4-4cf4-9f24-69894b9a9118") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049928 master-0 kubenswrapper[33867]: E0219 03:24:02.049404 33867 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.049928 master-0 kubenswrapper[33867]: E0219 03:24:02.049465 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert podName:06898300-c6e2-4d64-9ebf-d20f4338cccc nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.549437141 +0000 UTC m=+47.846107802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert") pod "controller-manager-7b74b5f84f-v8ldx" (UID: "06898300-c6e2-4d64-9ebf-d20f4338cccc") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.050417 master-0 kubenswrapper[33867]: E0219 03:24:02.050360 33867 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.050417 master-0 kubenswrapper[33867]: E0219 03:24:02.050410 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/494087b2-b532-4c62-89d5-b88a152fa5db-cluster-storage-operator-serving-cert podName:494087b2-b532-4c62-89d5-b88a152fa5db nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.550400658 +0000 UTC m=+47.847071269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/494087b2-b532-4c62-89d5-b88a152fa5db-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f94476f49-dnfs9" (UID: "494087b2-b532-4c62-89d5-b88a152fa5db") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.050620 master-0 kubenswrapper[33867]: E0219 03:24:02.050590 33867 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.050741 master-0 kubenswrapper[33867]: E0219 03:24:02.050629 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af2be4f9-f632-4a72-8f39-c96954403edc-cloud-controller-manager-operator-tls podName:af2be4f9-f632-4a72-8f39-c96954403edc nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.550621364 +0000 UTC m=+47.847291975 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/af2be4f9-f632-4a72-8f39-c96954403edc-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" (UID: "af2be4f9-f632-4a72-8f39-c96954403edc") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.050741 master-0 kubenswrapper[33867]: E0219 03:24:02.050671 33867 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.050741 master-0 kubenswrapper[33867]: E0219 03:24:02.050737 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls podName:255784ad-b52a-4c5c-ad15-278865ee2ccb nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.550728607 +0000 UTC m=+47.847399278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-prbs7" (UID: "255784ad-b52a-4c5c-ad15-278865ee2ccb") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.050936 master-0 kubenswrapper[33867]: E0219 03:24:02.050769 33867 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.050936 master-0 kubenswrapper[33867]: E0219 03:24:02.050827 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-policies podName:ace60ebd-e405-4fd2-96fe-7b16a9e11a07 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.55080799 +0000 UTC m=+47.847478611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-policies") pod "apiserver-85f97c6ffb-qfcnk" (UID: "ace60ebd-e405-4fd2-96fe-7b16a9e11a07") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:02.051453 master-0 kubenswrapper[33867]: E0219 03:24:02.051427 33867 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.051453 master-0 kubenswrapper[33867]: E0219 03:24:02.051447 33867 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.051581 master-0 kubenswrapper[33867]: E0219 03:24:02.051484 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-tls podName:ec677f3d-06c4-4cf4-9f24-69894b9a9118 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.551473168 +0000 UTC m=+47.848143859 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-tls") pod "kube-state-metrics-59584d565f-m7mdb" (UID: "ec677f3d-06c4-4cf4-9f24-69894b9a9118") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.051581 master-0 kubenswrapper[33867]: E0219 03:24:02.051513 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-kube-rbac-proxy-config podName:43560ec3-3526-40e1-aeb7-e3137a99171d nodeName:}" failed. No retries permitted until 2026-02-19 03:24:02.551498859 +0000 UTC m=+47.848169670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-6dbff8cb4c-4ccjj" (UID: "43560ec3-3526-40e1-aeb7-e3137a99171d") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:02.064619 master-0 kubenswrapper[33867]: I0219 03:24:02.064379 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 03:24:02.088726 master-0 kubenswrapper[33867]: I0219 03:24:02.086461 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 03:24:02.104932 master-0 kubenswrapper[33867]: I0219 03:24:02.104863 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 03:24:02.124985 master-0 kubenswrapper[33867]: I0219 03:24:02.124928 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 03:24:02.145232 master-0 kubenswrapper[33867]: I0219 03:24:02.145166 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 03:24:02.166500 master-0 kubenswrapper[33867]: I0219 03:24:02.164805 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 03:24:02.187404 master-0 kubenswrapper[33867]: I0219 03:24:02.185778 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 03:24:02.205173 master-0 kubenswrapper[33867]: I0219 03:24:02.205122 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 03:24:02.225828 master-0 kubenswrapper[33867]: I0219 03:24:02.225561 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 03:24:02.244969 master-0 kubenswrapper[33867]: I0219 03:24:02.244893 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 03:24:02.265792 master-0 kubenswrapper[33867]: I0219 03:24:02.265737 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 19 03:24:02.285027 master-0 kubenswrapper[33867]: I0219 03:24:02.284974 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 19 03:24:02.306478 master-0 kubenswrapper[33867]: I0219 03:24:02.306240 
33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 19 03:24:02.325410 master-0 kubenswrapper[33867]: I0219 03:24:02.325313 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 19 03:24:02.344911 master-0 kubenswrapper[33867]: I0219 03:24:02.344860 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 19 03:24:02.374424 master-0 kubenswrapper[33867]: I0219 03:24:02.374349 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 19 03:24:02.393803 master-0 kubenswrapper[33867]: I0219 03:24:02.393728 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 19 03:24:02.405300 master-0 kubenswrapper[33867]: I0219 03:24:02.405225 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 19 03:24:02.424872 master-0 kubenswrapper[33867]: I0219 03:24:02.424775 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 19 03:24:02.445398 master-0 kubenswrapper[33867]: I0219 03:24:02.445362 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 03:24:02.465286 master-0 kubenswrapper[33867]: I0219 03:24:02.465234 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 03:24:02.485122 master-0 kubenswrapper[33867]: I0219 03:24:02.485068 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 03:24:02.505613 master-0 kubenswrapper[33867]: I0219 03:24:02.505545 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 19 03:24:02.525101 master-0 kubenswrapper[33867]: I0219 03:24:02.525043 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 03:24:02.544599 master-0 kubenswrapper[33867]: I0219 03:24:02.544513 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mrjgz" Feb 19 03:24:02.565738 master-0 kubenswrapper[33867]: I0219 03:24:02.565675 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 03:24:02.584984 master-0 kubenswrapper[33867]: I0219 03:24:02.584915 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-7rwgg" Feb 19 03:24:02.600452 master-0 kubenswrapper[33867]: I0219 03:24:02.600366 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/494087b2-b532-4c62-89d5-b88a152fa5db-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-dnfs9\" (UID: \"494087b2-b532-4c62-89d5-b88a152fa5db\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:24:02.600774 master-0 kubenswrapper[33867]: I0219 03:24:02.600467 33867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:24:02.600774 master-0 kubenswrapper[33867]: I0219 03:24:02.600528 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-policies\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:02.600774 master-0 kubenswrapper[33867]: I0219 03:24:02.600558 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:24:02.600774 master-0 kubenswrapper[33867]: I0219 03:24:02.600594 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:02.600774 master-0 kubenswrapper[33867]: I0219 03:24:02.600627 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-service-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:02.600774 master-0 kubenswrapper[33867]: I0219 03:24:02.600650 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-images\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:24:02.600774 master-0 kubenswrapper[33867]: I0219 03:24:02.600687 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:02.600774 master-0 kubenswrapper[33867]: I0219 03:24:02.600713 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-node-bootstrap-token\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:24:02.600774 master-0 kubenswrapper[33867]: I0219 03:24:02.600738 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/494087b2-b532-4c62-89d5-b88a152fa5db-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-dnfs9\" (UID: \"494087b2-b532-4c62-89d5-b88a152fa5db\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:24:02.600774 master-0 kubenswrapper[33867]: I0219 03:24:02.600762 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:02.600774 master-0 kubenswrapper[33867]: I0219 03:24:02.600789 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.600847 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.600878 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-webhook-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.600945 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.600977 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601018 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-client\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601027 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/255784ad-b52a-4c5c-ad15-278865ee2ccb-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601057 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e81865-21fa-4e35-a870-738c13ac5b70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601117 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601145 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601179 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-metrics-client-ca\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601222 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601227 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-audit-policies\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601249 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601309 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b137033-0db2-46c9-a526-f8234345e883-proxy-tls\") pod \"machine-config-daemon-j2wxd\" (UID: 
\"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601376 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-apiservice-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601402 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-serving-cert\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601432 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-serving-cert\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601457 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601506 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601533 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-service-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:02.601549 master-0 kubenswrapper[33867]: I0219 03:24:02.601564 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/43560ec3-3526-40e1-aeb7-e3137a99171d-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.601594 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-trusted-ca-bundle\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 
03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.601633 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.601707 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-images\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.601731 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-config\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.601779 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.601825 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-encryption-config\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.601851 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.601883 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.601911 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 
03:24:02.601964 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.601974 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.602369 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255784ad-b52a-4c5c-ad15-278865ee2ccb-config\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.602434 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-serving-cert\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.602600 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-apiservice-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.602648 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-serving-cert\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:02.603059 master-0 kubenswrapper[33867]: I0219 03:24:02.603026 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-encryption-config\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:02.604132 master-0 kubenswrapper[33867]: I0219 03:24:02.603334 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-webhook-cert\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:02.604132 master-0 kubenswrapper[33867]: I0219 03:24:02.603410 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-trusted-ca-bundle\") pod 
\"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:02.604132 master-0 kubenswrapper[33867]: I0219 03:24:02.603420 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b137033-0db2-46c9-a526-f8234345e883-proxy-tls\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:24:02.604132 master-0 kubenswrapper[33867]: I0219 03:24:02.603773 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/858a717b-a44e-4b8d-9974-7451a89cf104-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:24:02.604132 master-0 kubenswrapper[33867]: I0219 03:24:02.603844 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:02.604132 master-0 kubenswrapper[33867]: I0219 03:24:02.603879 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-client\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:02.604132 master-0 kubenswrapper[33867]: I0219 03:24:02.603990 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:02.604132 master-0 kubenswrapper[33867]: I0219 03:24:02.604029 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:02.604132 master-0 kubenswrapper[33867]: I0219 03:24:02.604077 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-certs\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:24:02.604132 master-0 kubenswrapper[33867]: I0219 03:24:02.604125 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " 
pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604161 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/858a717b-a44e-4b8d-9974-7451a89cf104-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604325 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604362 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604444 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604493 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604524 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61abb34a-08f0-4438-9a89-c712b2048878-serving-cert\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604546 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604583 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bab5125-f4d7-4940-891f-9bb6a2145fac-proxy-tls\") 
pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604620 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604708 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-serving-ca\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604754 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-5msgd" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604777 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604836 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/af2be4f9-f632-4a72-8f39-c96954403edc-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:02.604890 master-0 kubenswrapper[33867]: I0219 03:24:02.604856 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/858a717b-a44e-4b8d-9974-7451a89cf104-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:24:02.605904 master-0 kubenswrapper[33867]: I0219 03:24:02.605065 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-etcd-serving-ca\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:02.605904 master-0 kubenswrapper[33867]: I0219 03:24:02.605089 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61abb34a-08f0-4438-9a89-c712b2048878-serving-cert\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: \"61abb34a-08f0-4438-9a89-c712b2048878\") " 
pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:02.625347 master-0 kubenswrapper[33867]: I0219 03:24:02.625289 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-g7dwh" Feb 19 03:24:02.627276 master-0 kubenswrapper[33867]: I0219 03:24:02.627221 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 19 03:24:02.645528 master-0 kubenswrapper[33867]: I0219 03:24:02.645440 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-njtfp" Feb 19 03:24:02.665976 master-0 kubenswrapper[33867]: I0219 03:24:02.665929 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 19 03:24:02.673956 master-0 kubenswrapper[33867]: I0219 03:24:02.673914 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:24:02.685313 master-0 kubenswrapper[33867]: I0219 03:24:02.685219 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 03:24:02.695578 master-0 kubenswrapper[33867]: I0219 03:24:02.695527 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bab5125-f4d7-4940-891f-9bb6a2145fac-proxy-tls\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:24:02.706153 master-0 kubenswrapper[33867]: I0219 03:24:02.706107 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-6bg2z" Feb 19 03:24:02.724377 master-0 kubenswrapper[33867]: I0219 03:24:02.724324 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 19 03:24:02.733042 master-0 kubenswrapper[33867]: I0219 03:24:02.732991 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-metrics-client-ca\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:02.733467 master-0 kubenswrapper[33867]: I0219 03:24:02.733414 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/43560ec3-3526-40e1-aeb7-e3137a99171d-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:24:02.733915 master-0 kubenswrapper[33867]: I0219 03:24:02.733882 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e81865-21fa-4e35-a870-738c13ac5b70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: 
\"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:24:02.734056 master-0 kubenswrapper[33867]: I0219 03:24:02.734015 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:02.745075 master-0 kubenswrapper[33867]: I0219 03:24:02.745029 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 19 03:24:02.753287 master-0 kubenswrapper[33867]: I0219 03:24:02.753207 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e2e81865-21fa-4e35-a870-738c13ac5b70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:24:02.764966 master-0 kubenswrapper[33867]: I0219 03:24:02.764909 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 03:24:02.774371 master-0 kubenswrapper[33867]: I0219 03:24:02.774243 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92804daf-1fd0-4008-afff-4f9bc362990b-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:24:02.785199 master-0 kubenswrapper[33867]: I0219 03:24:02.785134 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-7wq8f" Feb 19 03:24:02.804811 master-0 kubenswrapper[33867]: I0219 03:24:02.804752 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 03:24:02.813865 master-0 kubenswrapper[33867]: I0219 03:24:02.813799 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:24:02.825342 master-0 kubenswrapper[33867]: I0219 03:24:02.825291 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 03:24:02.835028 master-0 kubenswrapper[33867]: I0219 03:24:02.834985 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92804daf-1fd0-4008-afff-4f9bc362990b-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:24:02.846133 master-0 kubenswrapper[33867]: I0219 03:24:02.846067 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 
03:24:02.864841 master-0 kubenswrapper[33867]: I0219 03:24:02.864798 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-g8fsd" Feb 19 03:24:02.884773 master-0 kubenswrapper[33867]: I0219 03:24:02.884733 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 03:24:02.904454 master-0 kubenswrapper[33867]: I0219 03:24:02.904405 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 03:24:02.915191 master-0 kubenswrapper[33867]: I0219 03:24:02.915153 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-certs\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:24:02.925137 master-0 kubenswrapper[33867]: I0219 03:24:02.925084 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 03:24:02.933107 master-0 kubenswrapper[33867]: I0219 03:24:02.933057 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-node-bootstrap-token\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:24:02.944819 master-0 kubenswrapper[33867]: I0219 03:24:02.944689 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 19 03:24:02.952489 master-0 kubenswrapper[33867]: I0219 03:24:02.952448 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:02.965397 master-0 kubenswrapper[33867]: I0219 03:24:02.965332 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-x7jvh" Feb 19 03:24:02.983945 master-0 kubenswrapper[33867]: I0219 03:24:02.983874 33867 request.go:700] Waited for 2.00740911s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcloud-controller-manager-operator-tls&limit=500&resourceVersion=0 Feb 19 03:24:02.985843 master-0 kubenswrapper[33867]: I0219 03:24:02.985798 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 19 03:24:02.995463 master-0 kubenswrapper[33867]: I0219 03:24:02.995427 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/af2be4f9-f632-4a72-8f39-c96954403edc-cloud-controller-manager-operator-tls\") pod 
\"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:03.005529 master-0 kubenswrapper[33867]: I0219 03:24:03.005420 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 03:24:03.024763 master-0 kubenswrapper[33867]: I0219 03:24:03.024703 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 19 03:24:03.034756 master-0 kubenswrapper[33867]: I0219 03:24:03.034710 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af2be4f9-f632-4a72-8f39-c96954403edc-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:03.044518 master-0 kubenswrapper[33867]: I0219 03:24:03.044469 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:24:03.065527 master-0 kubenswrapper[33867]: I0219 03:24:03.065467 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 03:24:03.074204 master-0 kubenswrapper[33867]: I0219 03:24:03.074166 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:03.085041 master-0 kubenswrapper[33867]: I0219 03:24:03.084968 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 03:24:03.105935 master-0 kubenswrapper[33867]: I0219 03:24:03.105870 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-mfb9m" Feb 19 03:24:03.125441 master-0 kubenswrapper[33867]: I0219 03:24:03.125320 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 03:24:03.134814 master-0 kubenswrapper[33867]: I0219 03:24:03.134760 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:03.145471 master-0 kubenswrapper[33867]: I0219 03:24:03.145400 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 03:24:03.165082 master-0 kubenswrapper[33867]: I0219 03:24:03.165015 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 03:24:03.174539 master-0 kubenswrapper[33867]: I0219 03:24:03.174475 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:03.185289 master-0 kubenswrapper[33867]: I0219 03:24:03.185212 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 03:24:03.196006 master-0 kubenswrapper[33867]: I0219 03:24:03.195853 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:03.205141 master-0 kubenswrapper[33867]: I0219 03:24:03.205067 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-26rv4" Feb 19 03:24:03.225513 master-0 kubenswrapper[33867]: I0219 03:24:03.225449 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 19 03:24:03.232083 master-0 kubenswrapper[33867]: I0219 03:24:03.232021 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:24:03.244520 master-0 kubenswrapper[33867]: I0219 03:24:03.244453 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-b5db9" Feb 19 03:24:03.265473 master-0 kubenswrapper[33867]: I0219 03:24:03.265413 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 19 03:24:03.275295 master-0 kubenswrapper[33867]: I0219 03:24:03.275216 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/43560ec3-3526-40e1-aeb7-e3137a99171d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:24:03.286330 master-0 kubenswrapper[33867]: I0219 03:24:03.286243 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 03:24:03.294698 master-0 kubenswrapper[33867]: I0219 03:24:03.294583 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a676c43c-4e0a-4826-86c1-288260611b09-cert\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:24:03.305199 master-0 kubenswrapper[33867]: I0219 03:24:03.305139 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-xq85v" Feb 19 03:24:03.325479 master-0 
kubenswrapper[33867]: I0219 03:24:03.325420 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 19 03:24:03.331178 master-0 kubenswrapper[33867]: I0219 03:24:03.331119 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:03.346050 master-0 kubenswrapper[33867]: I0219 03:24:03.345921 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 19 03:24:03.356105 master-0 kubenswrapper[33867]: I0219 03:24:03.356032 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:03.368642 master-0 kubenswrapper[33867]: I0219 03:24:03.368566 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 19 03:24:03.375861 master-0 kubenswrapper[33867]: I0219 03:24:03.375810 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-tls\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:03.388738 master-0 kubenswrapper[33867]: I0219 03:24:03.388413 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-5w4jw" Feb 19 03:24:03.405553 master-0 kubenswrapper[33867]: I0219 03:24:03.405455 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 03:24:03.431804 master-0 kubenswrapper[33867]: I0219 03:24:03.431740 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 03:24:03.434173 master-0 kubenswrapper[33867]: I0219 03:24:03.434133 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:03.444806 master-0 kubenswrapper[33867]: I0219 03:24:03.444746 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 03:24:03.466131 master-0 kubenswrapper[33867]: I0219 03:24:03.465984 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 03:24:03.473531 master-0 kubenswrapper[33867]: I0219 03:24:03.473490 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca\") pod 
\"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:03.484747 master-0 kubenswrapper[33867]: I0219 03:24:03.484696 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 03:24:03.494138 master-0 kubenswrapper[33867]: I0219 03:24:03.494084 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:03.505541 master-0 kubenswrapper[33867]: I0219 03:24:03.505496 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-jmtfb" Feb 19 03:24:03.524562 master-0 kubenswrapper[33867]: I0219 03:24:03.524510 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 03:24:03.544858 master-0 kubenswrapper[33867]: I0219 03:24:03.544806 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 03:24:03.564323 master-0 kubenswrapper[33867]: I0219 03:24:03.564235 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 19 03:24:03.564583 master-0 kubenswrapper[33867]: I0219 03:24:03.564548 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:03.585356 master-0 kubenswrapper[33867]: I0219 03:24:03.585248 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 19 03:24:03.599477 master-0 kubenswrapper[33867]: I0219 03:24:03.593612 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:03.603368 master-0 kubenswrapper[33867]: E0219 03:24:03.603320 33867 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:03.603464 master-0 kubenswrapper[33867]: E0219 03:24:03.603410 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs podName:22370ccf-c383-4c1e-96f2-b5c61bb0cebe nodeName:}" failed. No retries permitted until 2026-02-19 03:24:04.603389533 +0000 UTC m=+49.900060144 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs") pod "metrics-server-68d9f4c46b-mh59n" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:03.603755 master-0 kubenswrapper[33867]: E0219 03:24:03.603683 33867 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:03.603813 master-0 kubenswrapper[33867]: E0219 03:24:03.603750 33867 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:03.603813 master-0 kubenswrapper[33867]: E0219 03:24:03.603699 33867 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:03.603883 master-0 kubenswrapper[33867]: E0219 03:24:03.603826 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles podName:22370ccf-c383-4c1e-96f2-b5c61bb0cebe nodeName:}" failed. No retries permitted until 2026-02-19 03:24:04.603803534 +0000 UTC m=+49.900474145 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles") pod "metrics-server-68d9f4c46b-mh59n" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:03.603883 master-0 kubenswrapper[33867]: E0219 03:24:03.603862 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls podName:22370ccf-c383-4c1e-96f2-b5c61bb0cebe nodeName:}" failed. No retries permitted until 2026-02-19 03:24:04.603839315 +0000 UTC m=+49.900509986 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls") pod "metrics-server-68d9f4c46b-mh59n" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:03.604634 master-0 kubenswrapper[33867]: E0219 03:24:03.603904 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs podName:7be6f9b5-fe27-4df5-b933-63bbb12f680c nodeName:}" failed. No retries permitted until 2026-02-19 03:24:04.603889667 +0000 UTC m=+49.900560378 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs") pod "multus-admission-controller-5f54bf67d4-9zr4h" (UID: "7be6f9b5-fe27-4df5-b933-63bbb12f680c") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:03.604856 master-0 kubenswrapper[33867]: E0219 03:24:03.604830 33867 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-b5da4s4ugo88o: failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:03.604995 master-0 kubenswrapper[33867]: E0219 03:24:03.604873 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle podName:22370ccf-c383-4c1e-96f2-b5c61bb0cebe nodeName:}" failed. No retries permitted until 2026-02-19 03:24:04.604861224 +0000 UTC m=+49.901531825 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle") pod "metrics-server-68d9f4c46b-mh59n" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe") : failed to sync secret cache: timed out waiting for the condition Feb 19 03:24:03.604995 master-0 kubenswrapper[33867]: E0219 03:24:03.604912 33867 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:03.605067 master-0 kubenswrapper[33867]: E0219 03:24:03.605014 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle podName:22370ccf-c383-4c1e-96f2-b5c61bb0cebe nodeName:}" failed. No retries permitted until 2026-02-19 03:24:04.604997788 +0000 UTC m=+49.901668429 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle") pod "metrics-server-68d9f4c46b-mh59n" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe") : failed to sync configmap cache: timed out waiting for the condition Feb 19 03:24:03.606398 master-0 kubenswrapper[33867]: I0219 03:24:03.606332 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 03:24:03.640290 master-0 kubenswrapper[33867]: I0219 03:24:03.634541 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-p55fn" Feb 19 03:24:03.647865 master-0 kubenswrapper[33867]: I0219 03:24:03.647783 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 19 03:24:03.665358 master-0 kubenswrapper[33867]: I0219 03:24:03.665249 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-rqfgf" Feb 19 03:24:03.685754 master-0 kubenswrapper[33867]: I0219 03:24:03.685683 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 19 03:24:03.705228 master-0 kubenswrapper[33867]: I0219 03:24:03.705167 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-kjppx" Feb 19 03:24:03.725165 master-0 kubenswrapper[33867]: I0219 03:24:03.725012 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 19 03:24:03.746156 master-0 kubenswrapper[33867]: I0219 03:24:03.746059 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-b5da4s4ugo88o" Feb 19 03:24:03.766111 master-0 kubenswrapper[33867]: I0219 03:24:03.766051 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 19 03:24:03.785558 master-0 kubenswrapper[33867]: I0219 03:24:03.785492 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 19 03:24:03.813224 master-0 kubenswrapper[33867]: I0219 03:24:03.813091 33867 kubelet_pods.go:1320] "Clean up containers for orphaned pod we had not seen before" podUID="687e92a6cecf1e2beeef16a0b322ad08" killPodOptions="" Feb 19 03:24:03.813468 master-0 kubenswrapper[33867]: E0219 03:24:03.813454 33867 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.831s" Feb 19 03:24:03.813587 master-0 kubenswrapper[33867]: I0219 03:24:03.813560 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 19 03:24:03.813647 master-0 kubenswrapper[33867]: I0219 03:24:03.813615 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:24:03.826210 master-0 kubenswrapper[33867]: I0219 03:24:03.826159 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687e92a6cecf1e2beeef16a0b322ad08" path="/var/lib/kubelet/pods/687e92a6cecf1e2beeef16a0b322ad08/volumes" Feb 19 03:24:03.826633 master-0 kubenswrapper[33867]: I0219 03:24:03.826609 33867 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 19 03:24:03.846440 master-0 kubenswrapper[33867]: I0219 03:24:03.846178 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlhnq\" (UniqueName: \"kubernetes.io/projected/6acd115e-71e1-4a50-8892-fc6ea2927fec-kube-api-access-dlhnq\") pod \"route-controller-manager-895bf76d5-65vdk\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:03.858949 master-0 kubenswrapper[33867]: I0219 03:24:03.858700 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dlvj\" (UniqueName: \"kubernetes.io/projected/80c48134-cb22-4cf9-b076-ce39af2f4113-kube-api-access-2dlvj\") pod \"cluster-monitoring-operator-6bb6d78bf-2vmxq\" (UID: \"80c48134-cb22-4cf9-b076-ce39af2f4113\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq" Feb 19 03:24:03.878770 master-0 kubenswrapper[33867]: I0219 03:24:03.878711 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c9ed390-3b62-4b81-8c03-0c579a4a686a-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-k7xlc\" (UID: \"6c9ed390-3b62-4b81-8c03-0c579a4a686a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc" Feb 19 03:24:03.895968 master-0 kubenswrapper[33867]: I0219 03:24:03.895911 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxfd9\" (UniqueName: \"kubernetes.io/projected/7ca08cc0-cc64-4e13-9465-c9b0bfacb60d-kube-api-access-qxfd9\") pod \"machine-config-server-m64bf\" (UID: \"7ca08cc0-cc64-4e13-9465-c9b0bfacb60d\") " pod="openshift-machine-config-operator/machine-config-server-m64bf" Feb 19 03:24:03.915205 master-0 kubenswrapper[33867]: I0219 03:24:03.915138 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q4lp\" (UniqueName: \"kubernetes.io/projected/4fd49d14-d513-4f68-8a87-3cef8a033c58-kube-api-access-5q4lp\") pod \"network-check-target-c6c25\" (UID: \"4fd49d14-d513-4f68-8a87-3cef8a033c58\") " pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:24:03.948609 master-0 kubenswrapper[33867]: I0219 03:24:03.948531 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq48l\" (UniqueName: \"kubernetes.io/projected/5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4-kube-api-access-bq48l\") pod \"insights-operator-59b498fcfb-2dvkr\" (UID: \"5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4\") " pod="openshift-insights/insights-operator-59b498fcfb-2dvkr" Feb 19 03:24:03.964304 master-0 kubenswrapper[33867]: I0219 03:24:03.964206 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5301cbc9-b3f3-4b2d-a114-1ba0752462f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-w5pp8\" (UID: \"5301cbc9-b3f3-4b2d-a114-1ba0752462f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8" Feb 19 03:24:03.976994 master-0 kubenswrapper[33867]: I0219 03:24:03.976835 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: 
\"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:24:03.996604 master-0 kubenswrapper[33867]: I0219 03:24:03.996527 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxnk\" (UniqueName: \"kubernetes.io/projected/2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5-kube-api-access-vdxnk\") pod \"cluster-node-tuning-operator-bcf775fc9-dcpwb\" (UID: \"2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb" Feb 19 03:24:04.003279 master-0 kubenswrapper[33867]: I0219 03:24:04.003135 33867 request.go:700] Waited for 2.964283756s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token Feb 19 03:24:04.015545 master-0 kubenswrapper[33867]: I0219 03:24:04.015483 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn9d8\" (UniqueName: \"kubernetes.io/projected/78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda-kube-api-access-rn9d8\") pod \"openshift-config-operator-6f47d587d6-zn8c7\" (UID: \"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:04.036448 master-0 kubenswrapper[33867]: I0219 03:24:04.036388 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrz8r\" (UniqueName: \"kubernetes.io/projected/ace60ebd-e405-4fd2-96fe-7b16a9e11a07-kube-api-access-rrz8r\") pod \"apiserver-85f97c6ffb-qfcnk\" (UID: \"ace60ebd-e405-4fd2-96fe-7b16a9e11a07\") " pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:04.055805 master-0 kubenswrapper[33867]: I0219 03:24:04.055737 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2btm8\" (UniqueName: \"kubernetes.io/projected/ca82f2e9-884e-49d1-9863-e87212d01edc-kube-api-access-2btm8\") pod \"certified-operators-5t9dd\" (UID: \"ca82f2e9-884e-49d1-9863-e87212d01edc\") " pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:24:04.076706 master-0 kubenswrapper[33867]: I0219 03:24:04.076626 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhdv\" (UniqueName: \"kubernetes.io/projected/58c6f5a2-c0a8-4636-a057-cedbe0151579-kube-api-access-grhdv\") pod \"marketplace-operator-6f5488b997-xxdh5\" (UID: \"58c6f5a2-c0a8-4636-a057-cedbe0151579\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:04.098943 master-0 kubenswrapper[33867]: I0219 03:24:04.098885 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qghmn\" (UniqueName: \"kubernetes.io/projected/858a717b-a44e-4b8d-9974-7451a89cf104-kube-api-access-qghmn\") pod \"cloud-credential-operator-6968c58f46-p2hfn\" (UID: \"858a717b-a44e-4b8d-9974-7451a89cf104\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn" Feb 19 03:24:04.119947 master-0 kubenswrapper[33867]: I0219 03:24:04.119896 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p8qd\" (UniqueName: \"kubernetes.io/projected/fbc2f7d0-4bae-4d4a-b041-a624ec2b9333-kube-api-access-8p8qd\") pod \"openshift-apiserver-operator-8586dccc9b-mcz8l\" (UID: \"fbc2f7d0-4bae-4d4a-b041-a624ec2b9333\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l" Feb 19 03:24:04.146826 master-0 kubenswrapper[33867]: I0219 03:24:04.146772 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj4rq\" (UniqueName: \"kubernetes.io/projected/b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651-kube-api-access-mj4rq\") pod \"authentication-operator-5bd7c86784-cjz9l\" (UID: \"b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l" Feb 19 03:24:04.170364 master-0 kubenswrapper[33867]: I0219 03:24:04.170310 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfd6c\" (UniqueName: \"kubernetes.io/projected/76529f4c-70b1-4fcb-ba48-ae929228f9fc-kube-api-access-wfd6c\") pod \"redhat-operators-v9c2b\" (UID: \"76529f4c-70b1-4fcb-ba48-ae929228f9fc\") " pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:24:04.192005 master-0 kubenswrapper[33867]: I0219 03:24:04.191928 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pwp5\" (UniqueName: \"kubernetes.io/projected/78702d1c-b5ab-4e00-92da-cb2513a72024-kube-api-access-5pwp5\") pod \"tuned-4jl4c\" (UID: \"78702d1c-b5ab-4e00-92da-cb2513a72024\") " pod="openshift-cluster-node-tuning-operator/tuned-4jl4c" Feb 19 03:24:04.209445 master-0 kubenswrapper[33867]: I0219 03:24:04.209383 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76css\" (UniqueName: \"kubernetes.io/projected/b283bd8e-3339-4701-ae3c-f009e498b7d4-kube-api-access-76css\") pod \"olm-operator-5499d7f7bb-kk77t\" (UID: \"b283bd8e-3339-4701-ae3c-f009e498b7d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:24:04.215995 master-0 kubenswrapper[33867]: I0219 03:24:04.215942 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txq5k\" (UniqueName: \"kubernetes.io/projected/a59746bb-7d76-4fd7-8323-5b92be63afb9-kube-api-access-txq5k\") pod \"cluster-image-registry-operator-779979bdf7-cfdqh\" (UID: \"a59746bb-7d76-4fd7-8323-5b92be63afb9\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh" Feb 19 03:24:04.246397 master-0 kubenswrapper[33867]: I0219 03:24:04.245909 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-bound-sa-token\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:24:04.252009 master-0 kubenswrapper[33867]: I0219 03:24:04.251948 33867 scope.go:117] "RemoveContainer" containerID="92f46e7dc0dbfb5fb7a6786f646d184008d2d59c656dbe6e375ada74e2cfa239" Feb 19 03:24:04.259729 master-0 kubenswrapper[33867]: I0219 03:24:04.259657 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb2v2\" (UniqueName: \"kubernetes.io/projected/af5828ea-090f-4c8f-90e6-c4e405e69ec5-kube-api-access-tb2v2\") pod \"cluster-baremetal-operator-d6bb9bb76-9vgg7\" (UID: \"af5828ea-090f-4c8f-90e6-c4e405e69ec5\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7" Feb 19 03:24:04.281192 master-0 kubenswrapper[33867]: I0219 03:24:04.281128 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzxmv\" (UniqueName: 
\"kubernetes.io/projected/8ec16b3a-5d5c-46fe-87f0-89f93a2775ed-kube-api-access-jzxmv\") pod \"node-exporter-8g26m\" (UID: \"8ec16b3a-5d5c-46fe-87f0-89f93a2775ed\") " pod="openshift-monitoring/node-exporter-8g26m" Feb 19 03:24:04.299332 master-0 kubenswrapper[33867]: I0219 03:24:04.298568 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnq2j\" (UniqueName: \"kubernetes.io/projected/06898300-c6e2-4d64-9ebf-d20f4338cccc-kube-api-access-rnq2j\") pod \"controller-manager-7b74b5f84f-v8ldx\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:04.330823 master-0 kubenswrapper[33867]: I0219 03:24:04.327998 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64lwt\" (UniqueName: \"kubernetes.io/projected/7fde19c2-64b1-409c-ad9c-2bb213a1cc74-kube-api-access-64lwt\") pod \"multus-4lzdj\" (UID: \"7fde19c2-64b1-409c-ad9c-2bb213a1cc74\") " pod="openshift-multus/multus-4lzdj" Feb 19 03:24:04.352159 master-0 kubenswrapper[33867]: I0219 03:24:04.352037 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqt9k\" (UniqueName: \"kubernetes.io/projected/1f9e07d3-d157-4948-84a6-04b8aa7eef4c-kube-api-access-nqt9k\") pod \"cluster-olm-operator-5bd7768f54-f8dfs\" (UID: \"1f9e07d3-d157-4948-84a6-04b8aa7eef4c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs" Feb 19 03:24:04.361129 master-0 kubenswrapper[33867]: I0219 03:24:04.361082 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw2vc\" (UniqueName: \"kubernetes.io/projected/dabc3c9b-ed58-4fd4-8735-65d504fa299a-kube-api-access-vw2vc\") pod \"community-operators-nrcnx\" (UID: \"dabc3c9b-ed58-4fd4-8735-65d504fa299a\") " pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:24:04.378997 master-0 kubenswrapper[33867]: I0219 03:24:04.378957 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kwbk\" (UniqueName: \"kubernetes.io/projected/33bb562f-84e7-4fcb-b008-416c09a5ecf0-kube-api-access-5kwbk\") pod \"cluster-autoscaler-operator-86b8dc6d6-pd8lj\" (UID: \"33bb562f-84e7-4fcb-b008-416c09a5ecf0\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj" Feb 19 03:24:04.397121 master-0 kubenswrapper[33867]: I0219 03:24:04.397073 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn4dg\" (UniqueName: \"kubernetes.io/projected/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-kube-api-access-pn4dg\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:04.421324 master-0 kubenswrapper[33867]: I0219 03:24:04.421287 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxsxw\" (UniqueName: \"kubernetes.io/projected/255784ad-b52a-4c5c-ad15-278865ee2ccb-kube-api-access-hxsxw\") pod \"machine-api-operator-5c7cf458b4-prbs7\" (UID: \"255784ad-b52a-4c5c-ad15-278865ee2ccb\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7" Feb 19 03:24:04.439155 master-0 kubenswrapper[33867]: I0219 03:24:04.439118 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61abb34a-08f0-4438-9a89-c712b2048878-kube-api-access\") pod \"cluster-version-operator-57476485-qjgq9\" (UID: 
\"61abb34a-08f0-4438-9a89-c712b2048878\") " pod="openshift-cluster-version/cluster-version-operator-57476485-qjgq9" Feb 19 03:24:04.469439 master-0 kubenswrapper[33867]: I0219 03:24:04.469393 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrksf\" (UniqueName: \"kubernetes.io/projected/05c9cb4a-5249-4116-a2e5-caa7859e2075-kube-api-access-qrksf\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7c8v\" (UID: \"05c9cb4a-5249-4116-a2e5-caa7859e2075\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v" Feb 19 03:24:04.489429 master-0 kubenswrapper[33867]: I0219 03:24:04.489385 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clddw\" (UniqueName: \"kubernetes.io/projected/7b137033-0db2-46c9-a526-f8234345e883-kube-api-access-clddw\") pod \"machine-config-daemon-j2wxd\" (UID: \"7b137033-0db2-46c9-a526-f8234345e883\") " pod="openshift-machine-config-operator/machine-config-daemon-j2wxd" Feb 19 03:24:04.507157 master-0 kubenswrapper[33867]: I0219 03:24:04.507079 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99z6r\" (UniqueName: \"kubernetes.io/projected/0664d88f-f697-4182-93cd-f208ff6f3ac2-kube-api-access-99z6r\") pod \"control-plane-machine-set-operator-686847ff5f-xbcf5\" (UID: \"0664d88f-f697-4182-93cd-f208ff6f3ac2\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5" Feb 19 03:24:04.527569 master-0 kubenswrapper[33867]: I0219 03:24:04.527515 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq27v\" (UniqueName: \"kubernetes.io/projected/98ac5423-b231-44e5-9545-424d635ed6ee-kube-api-access-bq27v\") pod \"package-server-manager-5c75f78c8b-8tbg8\" (UID: \"98ac5423-b231-44e5-9545-424d635ed6ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:24:04.546110 master-0 kubenswrapper[33867]: I0219 03:24:04.546064 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6j8c\" (UniqueName: \"kubernetes.io/projected/4c3267e5-390a-40a3-bff8-1d1d81fb9a17-kube-api-access-k6j8c\") pod \"etcd-operator-545bf96f4d-r7r6p\" (UID: \"4c3267e5-390a-40a3-bff8-1d1d81fb9a17\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" Feb 19 03:24:04.550745 master-0 kubenswrapper[33867]: I0219 03:24:04.550708 33867 scope.go:117] "RemoveContainer" containerID="028495f0aee3ee18d27a6df8f41026b434ac3c3d335cf96c6e2e88bafe3758a1" Feb 19 03:24:04.562376 master-0 kubenswrapper[33867]: I0219 03:24:04.562320 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv24m\" (UniqueName: \"kubernetes.io/projected/a52be87c-e707-4269-96da-537708d52b64-kube-api-access-kv24m\") pod \"network-node-identity-rm5jg\" (UID: \"a52be87c-e707-4269-96da-537708d52b64\") " pod="openshift-network-node-identity/network-node-identity-rm5jg" Feb 19 03:24:04.579631 master-0 kubenswrapper[33867]: I0219 03:24:04.579583 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc87d\" (UniqueName: \"kubernetes.io/projected/59cea4cb-6374-49b6-97b3-d8a19cc1860f-kube-api-access-tc87d\") pod \"cluster-samples-operator-65c5c48b9b-hl874\" (UID: \"59cea4cb-6374-49b6-97b3-d8a19cc1860f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874" Feb 19 03:24:04.601452 master-0 kubenswrapper[33867]: I0219 03:24:04.601363 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjwbx\" (UniqueName: \"kubernetes.io/projected/2b9d54aa-5f71-4a82-8e71-401ed3083a13-kube-api-access-vjwbx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-866f9\" (UID: \"2b9d54aa-5f71-4a82-8e71-401ed3083a13\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" Feb 19 03:24:04.623431 master-0 kubenswrapper[33867]: I0219 03:24:04.623358 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhlw\" (UniqueName: \"kubernetes.io/projected/1bab5125-f4d7-4940-891f-9bb6a2145fac-kube-api-access-7rhlw\") pod \"machine-config-controller-54cb48566c-5t75l\" (UID: \"1bab5125-f4d7-4940-891f-9bb6a2145fac\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l" Feb 19 03:24:04.638716 master-0 kubenswrapper[33867]: I0219 03:24:04.638656 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpdqx\" (UniqueName: \"kubernetes.io/projected/9ff96ce8-6427-4a42-afa6-8b8bc778f094-kube-api-access-cpdqx\") pod \"ingress-operator-6569778c84-qcd49\" (UID: \"9ff96ce8-6427-4a42-afa6-8b8bc778f094\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-qcd49" Feb 19 03:24:04.652193 master-0 kubenswrapper[33867]: I0219 03:24:04.652117 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:04.652193 master-0 kubenswrapper[33867]: I0219 03:24:04.652201 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:04.652875 master-0 kubenswrapper[33867]: I0219 03:24:04.652247 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:24:04.652875 master-0 kubenswrapper[33867]: I0219 03:24:04.652392 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:04.652875 master-0 kubenswrapper[33867]: I0219 03:24:04.652452 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:04.652875 
master-0 kubenswrapper[33867]: I0219 03:24:04.652629 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:04.653040 master-0 kubenswrapper[33867]: I0219 03:24:04.652973 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:04.653370 master-0 kubenswrapper[33867]: I0219 03:24:04.653336 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:04.653896 master-0 kubenswrapper[33867]: I0219 03:24:04.653851 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:04.654071 master-0 kubenswrapper[33867]: I0219 03:24:04.654032 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:04.654127 master-0 kubenswrapper[33867]: I0219 03:24:04.654095 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7be6f9b5-fe27-4df5-b933-63bbb12f680c-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:24:04.654838 master-0 kubenswrapper[33867]: I0219 03:24:04.654796 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs\") pod \"metrics-server-68d9f4c46b-mh59n\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:04.657173 master-0 kubenswrapper[33867]: I0219 03:24:04.657135 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n9vm\" (UniqueName: \"kubernetes.io/projected/c50a2aec-7ed0-4114-8b25-19579fe931cb-kube-api-access-7n9vm\") pod \"catalog-operator-596f79dd6f-sbzsk\" (UID: \"c50a2aec-7ed0-4114-8b25-19579fe931cb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:24:04.664520 master-0 kubenswrapper[33867]: I0219 03:24:04.664478 33867 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/6.log" Feb 19 03:24:04.676164 master-0 kubenswrapper[33867]: I0219 03:24:04.676117 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh4lz\" (UniqueName: \"kubernetes.io/projected/ec677f3d-06c4-4cf4-9f24-69894b9a9118-kube-api-access-vh4lz\") pod \"kube-state-metrics-59584d565f-m7mdb\" (UID: \"ec677f3d-06c4-4cf4-9f24-69894b9a9118\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-m7mdb" Feb 19 03:24:04.696875 master-0 kubenswrapper[33867]: I0219 03:24:04.696828 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm2wm\" (UniqueName: \"kubernetes.io/projected/7012676e-f35d-46e5-83e8-a63172dd076e-kube-api-access-lm2wm\") pod \"catalogd-controller-manager-84b8d9d697-jhj9q\" (UID: \"7012676e-f35d-46e5-83e8-a63172dd076e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:04.718970 master-0 kubenswrapper[33867]: I0219 03:24:04.718901 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78j6f\" (UniqueName: \"kubernetes.io/projected/92804daf-1fd0-4008-afff-4f9bc362990b-kube-api-access-78j6f\") pod \"machine-approver-7dd9c7d7b9-tlhpc\" (UID: \"92804daf-1fd0-4008-afff-4f9bc362990b\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc" Feb 19 03:24:04.737671 master-0 kubenswrapper[33867]: I0219 03:24:04.737618 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhmpd\" (UniqueName: \"kubernetes.io/projected/d6fae256-6a2e-45e7-8f2f-d471f46ad3b2-kube-api-access-dhmpd\") pod \"csi-snapshot-controller-operator-6fb4df594f-mtqxj\" (UID: \"d6fae256-6a2e-45e7-8f2f-d471f46ad3b2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj" Feb 19 03:24:04.757350 master-0 kubenswrapper[33867]: I0219 03:24:04.756376 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9zww\" (UniqueName: \"kubernetes.io/projected/a676c43c-4e0a-4826-86c1-288260611b09-kube-api-access-p9zww\") pod \"ingress-canary-bbwkg\" (UID: \"a676c43c-4e0a-4826-86c1-288260611b09\") " pod="openshift-ingress-canary/ingress-canary-bbwkg" Feb 19 03:24:04.781589 master-0 kubenswrapper[33867]: I0219 03:24:04.781535 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxvxh\" (UniqueName: \"kubernetes.io/projected/c8f325fb-0075-4a18-ba7e-669ab19bc91a-kube-api-access-jxvxh\") pod \"csi-snapshot-controller-6847bb4785-6trsd\" (UID: \"c8f325fb-0075-4a18-ba7e-669ab19bc91a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" Feb 19 03:24:04.796140 master-0 kubenswrapper[33867]: I0219 03:24:04.796092 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tgff\" (UniqueName: \"kubernetes.io/projected/e2e81865-21fa-4e35-a870-738c13ac5b70-kube-api-access-5tgff\") pod \"prometheus-operator-754bc4d665-tkbxr\" (UID: \"e2e81865-21fa-4e35-a870-738c13ac5b70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-tkbxr" Feb 19 03:24:04.816023 master-0 kubenswrapper[33867]: I0219 03:24:04.815973 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzpth\" (UniqueName: 
\"kubernetes.io/projected/3edc7410-417a-4e55-9276-ac271fd52297-kube-api-access-vzpth\") pod \"service-ca-operator-c48c8bf7c-f7fvc\" (UID: \"3edc7410-417a-4e55-9276-ac271fd52297\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" Feb 19 03:24:04.841790 master-0 kubenswrapper[33867]: I0219 03:24:04.841733 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkz72\" (UniqueName: \"kubernetes.io/projected/75c58162-a0ba-40f4-8894-38f17dc2fb6d-kube-api-access-gkz72\") pod \"dns-default-clndn\" (UID: \"75c58162-a0ba-40f4-8894-38f17dc2fb6d\") " pod="openshift-dns/dns-default-clndn" Feb 19 03:24:04.850338 master-0 kubenswrapper[33867]: I0219 03:24:04.848669 33867 scope.go:117] "RemoveContainer" containerID="e103e135bf82f2eb93c3dbb2b40a81ffeb2314273026f2e9a0c0e8f111555646" Feb 19 03:24:04.851369 master-0 kubenswrapper[33867]: I0219 03:24:04.851257 33867 scope.go:117] "RemoveContainer" containerID="19a1f28fd6894887f54799dd664b3153aee457ecc2c8aab80e319ccb1bdbf8a2" Feb 19 03:24:04.854626 master-0 kubenswrapper[33867]: I0219 03:24:04.854578 33867 scope.go:117] "RemoveContainer" containerID="7451979a94f80aee54e0563ac7f58d005b0131fa01c9b6d07669dbdfc4734cf2" Feb 19 03:24:04.862597 master-0 kubenswrapper[33867]: I0219 03:24:04.862455 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhhg6\" (UniqueName: \"kubernetes.io/projected/af2be4f9-f632-4a72-8f39-c96954403edc-kube-api-access-rhhg6\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t\" (UID: \"af2be4f9-f632-4a72-8f39-c96954403edc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t" Feb 19 03:24:04.887180 master-0 kubenswrapper[33867]: I0219 03:24:04.887114 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqsbq\" (UniqueName: \"kubernetes.io/projected/67f4e002-26fb-41e3-abdb-f4928b6c561f-kube-api-access-wqsbq\") pod \"dns-operator-8c7d49845-jlnvw\" (UID: \"67f4e002-26fb-41e3-abdb-f4928b6c561f\") " pod="openshift-dns-operator/dns-operator-8c7d49845-jlnvw" Feb 19 03:24:04.910349 master-0 kubenswrapper[33867]: I0219 03:24:04.906796 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tqm5\" (UniqueName: \"kubernetes.io/projected/decd8c56-e0f0-4119-917f-56652c8f8372-kube-api-access-8tqm5\") pod \"iptables-alerter-kvvll\" (UID: \"decd8c56-e0f0-4119-917f-56652c8f8372\") " pod="openshift-network-operator/iptables-alerter-kvvll" Feb 19 03:24:04.921230 master-0 kubenswrapper[33867]: I0219 03:24:04.921188 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crz8x\" (UniqueName: \"kubernetes.io/projected/15a571c6-7c47-4b57-bc5b-e46544a114c8-kube-api-access-crz8x\") pod \"ovnkube-control-plane-5d8dfcdc87-7bv4h\" (UID: \"15a571c6-7c47-4b57-bc5b-e46544a114c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h" Feb 19 03:24:04.937730 master-0 kubenswrapper[33867]: I0219 03:24:04.937695 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk722\" (UniqueName: \"kubernetes.io/projected/7be6f9b5-fe27-4df5-b933-63bbb12f680c-kube-api-access-mk722\") pod \"multus-admission-controller-5f54bf67d4-9zr4h\" (UID: \"7be6f9b5-fe27-4df5-b933-63bbb12f680c\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h" Feb 19 03:24:04.955550 master-0 kubenswrapper[33867]: I0219 03:24:04.955503 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46zzd\" (UniqueName: \"kubernetes.io/projected/6ae2cbe0-aa0a-4f26-994b-660fb962d995-kube-api-access-46zzd\") pod \"network-metrics-daemon-hspwc\" (UID: \"6ae2cbe0-aa0a-4f26-994b-660fb962d995\") " pod="openshift-multus/network-metrics-daemon-hspwc" Feb 19 03:24:04.976736 master-0 kubenswrapper[33867]: I0219 03:24:04.976688 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5wsp\" (UniqueName: \"kubernetes.io/projected/cc8f6a27-3dd3-45e0-a206-9f19bbf99df7-kube-api-access-r5wsp\") pod \"multus-additional-cni-plugins-bs5qd\" (UID: \"cc8f6a27-3dd3-45e0-a206-9f19bbf99df7\") " pod="openshift-multus/multus-additional-cni-plugins-bs5qd" Feb 19 03:24:04.996341 master-0 kubenswrapper[33867]: I0219 03:24:04.996301 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4z8t\" (UniqueName: \"kubernetes.io/projected/43560ec3-3526-40e1-aeb7-e3137a99171d-kube-api-access-j4z8t\") pod \"openshift-state-metrics-6dbff8cb4c-4ccjj\" (UID: \"43560ec3-3526-40e1-aeb7-e3137a99171d\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj" Feb 19 03:24:05.003588 master-0 kubenswrapper[33867]: I0219 03:24:05.003509 33867 request.go:700] Waited for 3.955226091s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token Feb 19 03:24:05.017561 master-0 kubenswrapper[33867]: I0219 03:24:05.017401 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6zxf\" (UniqueName: \"kubernetes.io/projected/1cf1a1c6-f858-4f89-ac8c-97d13ed8a962-kube-api-access-h6zxf\") pod \"machine-config-operator-7f8c75f984-qsbx7\" (UID: \"1cf1a1c6-f858-4f89-ac8c-97d13ed8a962\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7" Feb 19 03:24:05.038698 master-0 kubenswrapper[33867]: I0219 03:24:05.038650 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfgk\" (UniqueName: \"kubernetes.io/projected/a71c6d42-5ff9-4e96-900c-6e2166bbc9e3-kube-api-access-zrfgk\") pod \"network-check-source-58fb6744f5-mh46g\" (UID: \"a71c6d42-5ff9-4e96-900c-6e2166bbc9e3\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g" Feb 19 03:24:05.063404 master-0 kubenswrapper[33867]: I0219 03:24:05.063367 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwjp\" (UniqueName: \"kubernetes.io/projected/2576028c-40d8-4ef4-ba41-a5aff01f2ed3-kube-api-access-tmwjp\") pod \"packageserver-7d77f88776-s4jxm\" (UID: \"2576028c-40d8-4ef4-ba41-a5aff01f2ed3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:05.076172 master-0 kubenswrapper[33867]: I0219 03:24:05.076100 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkfcl\" (UniqueName: \"kubernetes.io/projected/18b29e37-cda9-41a8-a910-3d8f74be3cf3-kube-api-access-bkfcl\") pod \"service-ca-576b4d78bd-92gqk\" (UID: \"18b29e37-cda9-41a8-a910-3d8f74be3cf3\") " pod="openshift-service-ca/service-ca-576b4d78bd-92gqk" Feb 19 03:24:05.100423 master-0 kubenswrapper[33867]: I0219 03:24:05.100314 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cm45\" (UniqueName: 
\"kubernetes.io/projected/a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a-kube-api-access-8cm45\") pod \"ovnkube-node-pw7dx\" (UID: \"a06b88f6-101e-47bf-a6cf-f5fcfa47ad2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:05.119630 master-0 kubenswrapper[33867]: I0219 03:24:05.119580 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htmbc\" (UniqueName: \"kubernetes.io/projected/546cf649-8e0d-4c8a-a197-412db42e36b6-kube-api-access-htmbc\") pod \"redhat-marketplace-nqnbc\" (UID: \"546cf649-8e0d-4c8a-a197-412db42e36b6\") " pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:24:05.137318 master-0 kubenswrapper[33867]: I0219 03:24:05.136561 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbffz\" (UniqueName: \"kubernetes.io/projected/c791d8d0-6d78-4cdc-bac2-aa39bd3aae21-kube-api-access-gbffz\") pod \"network-operator-7d7db75979-jbztp\" (UID: \"c791d8d0-6d78-4cdc-bac2-aa39bd3aae21\") " pod="openshift-network-operator/network-operator-7d7db75979-jbztp" Feb 19 03:24:05.166236 master-0 kubenswrapper[33867]: I0219 03:24:05.166182 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4714ef51-2d24-4938-8c58-80c1485a368b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-lbfvq\" (UID: \"4714ef51-2d24-4938-8c58-80c1485a368b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" Feb 19 03:24:05.184528 master-0 kubenswrapper[33867]: I0219 03:24:05.184471 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4hzx\" (UniqueName: \"kubernetes.io/projected/494087b2-b532-4c62-89d5-b88a152fa5db-kube-api-access-z4hzx\") pod \"cluster-storage-operator-f94476f49-dnfs9\" (UID: \"494087b2-b532-4c62-89d5-b88a152fa5db\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9" Feb 19 03:24:05.197239 master-0 kubenswrapper[33867]: I0219 03:24:05.197184 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msl9t\" (UniqueName: \"kubernetes.io/projected/67624ad2-babb-4b0e-9599-99325c286b22-kube-api-access-msl9t\") pod \"node-resolver-4qvfn\" (UID: \"67624ad2-babb-4b0e-9599-99325c286b22\") " pod="openshift-dns/node-resolver-4qvfn" Feb 19 03:24:05.217186 master-0 kubenswrapper[33867]: I0219 03:24:05.217134 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkm2l\" (UniqueName: \"kubernetes.io/projected/c4ed0c32-c13f-4c72-b83f-9af19b2950a3-kube-api-access-rkm2l\") pod \"migrator-5c85bff57-85d6g\" (UID: \"c4ed0c32-c13f-4c72-b83f-9af19b2950a3\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g" Feb 19 03:24:05.236412 master-0 kubenswrapper[33867]: I0219 03:24:05.236357 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dkxh\" (UniqueName: \"kubernetes.io/projected/8f7d8fc8-c313-416f-b62b-b54db9944066-kube-api-access-9dkxh\") pod \"operator-controller-controller-manager-9cc7d7bb-s559q\" (UID: \"8f7d8fc8-c313-416f-b62b-b54db9944066\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:05.256971 master-0 kubenswrapper[33867]: I0219 03:24:05.256933 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ba0c261-497c-4236-8f14-98ce5c16af59-kube-api-access\") pod 
\"installer-5-retry-1-master-0\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:24:05.283689 master-0 kubenswrapper[33867]: I0219 03:24:05.283580 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-894cz\" (UniqueName: \"kubernetes.io/projected/c569676a-51dd-418c-87a5-719c18fe4c95-kube-api-access-894cz\") pod \"apiserver-957b9456f-f5s8c\" (UID: \"c569676a-51dd-418c-87a5-719c18fe4c95\") " pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:05.296872 master-0 kubenswrapper[33867]: E0219 03:24:05.296839 33867 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 19 03:24:05.296872 master-0 kubenswrapper[33867]: E0219 03:24:05.296866 33867 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 19 03:24:05.297062 master-0 kubenswrapper[33867]: E0219 03:24:05.296933 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kube-api-access podName:3fab5bbd-672c-4e18-9c1e-438e2360bc54 nodeName:}" failed. No retries permitted until 2026-02-19 03:24:05.796909988 +0000 UTC m=+51.093580599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kube-api-access") pod "installer-3-master-0" (UID: "3fab5bbd-672c-4e18-9c1e-438e2360bc54") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 19 03:24:05.323534 master-0 kubenswrapper[33867]: I0219 03:24:05.323494 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9hn\" (UniqueName: \"kubernetes.io/projected/76470062-ab83-47ed-a669-deeb71996548-kube-api-access-bj9hn\") pod \"router-default-7b65dc9fcb-t6jnq\" (UID: \"76470062-ab83-47ed-a669-deeb71996548\") " pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:05.368977 master-0 kubenswrapper[33867]: E0219 03:24:05.368916 33867 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.555s" Feb 19 03:24:05.368977 master-0 kubenswrapper[33867]: I0219 03:24:05.368954 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kube-api-access\") pod \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\" (UID: \"3fab5bbd-672c-4e18-9c1e-438e2360bc54\") " Feb 19 03:24:05.369175 master-0 kubenswrapper[33867]: I0219 03:24:05.368991 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" Feb 19 03:24:05.369175 master-0 kubenswrapper[33867]: I0219 03:24:05.369025 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92" Feb 19 03:24:05.369175 master-0 kubenswrapper[33867]: I0219 03:24:05.369038 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc"} 
Feb 19 03:24:05.369175 master-0 kubenswrapper[33867]: I0219 03:24:05.369062 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:24:05.369175 master-0 kubenswrapper[33867]: I0219 03:24:05.369073 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"3fab5bbd-672c-4e18-9c1e-438e2360bc54","Type":"ContainerDied","Data":"3d24aaf417d59fb450308aa24f5e0ecd8e28bc338934b0ef78ad3e79bccb9318"} Feb 19 03:24:05.369175 master-0 kubenswrapper[33867]: I0219 03:24:05.369093 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d24aaf417d59fb450308aa24f5e0ecd8e28bc338934b0ef78ad3e79bccb9318" Feb 19 03:24:05.369175 master-0 kubenswrapper[33867]: I0219 03:24:05.369107 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:24:05.372162 master-0 kubenswrapper[33867]: I0219 03:24:05.372126 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3fab5bbd-672c-4e18-9c1e-438e2360bc54" (UID: "3fab5bbd-672c-4e18-9c1e-438e2360bc54"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:24:05.379211 master-0 kubenswrapper[33867]: I0219 03:24:05.379155 33867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.410669 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.410716 33867 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="009e56e8-3ee1-4208-b099-958ed2bf1c90" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.410745 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.410755 33867 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="009e56e8-3ee1-4208-b099-958ed2bf1c90" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.410771 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.410860 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.410910 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.410923 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" 
event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerStarted","Data":"49c7665c4d41e6363db83e2cfb07cabc6e73e095d070fa83da535345692dab7c"} Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.410954 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.410977 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-c6c25" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411001 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411014 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411041 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411064 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411090 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411155 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411180 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411237 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411254 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411290 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411299 33867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411359 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411396 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411440 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 
03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411498 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411525 33867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:05.411504 master-0 kubenswrapper[33867]: I0219 03:24:05.411552 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411584 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411606 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411631 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411655 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411686 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411714 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411739 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411761 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411790 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-clndn" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411819 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411841 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411880 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411915 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411952 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:05.412853 master-0 
kubenswrapper[33867]: I0219 03:24:05.411974 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.411997 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.412015 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:24:05.412853 master-0 kubenswrapper[33867]: I0219 03:24:05.412045 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:24:05.425677 master-0 kubenswrapper[33867]: I0219 03:24:05.416637 33867 scope.go:117] "RemoveContainer" containerID="82a40f80e34c4f63706840b48b0aa48486b2ad68c13d50974f11a3442433c7ea" Feb 19 03:24:05.425677 master-0 kubenswrapper[33867]: I0219 03:24:05.421725 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:24:05.425677 master-0 kubenswrapper[33867]: I0219 03:24:05.421768 33867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 03:24:05.425677 master-0 kubenswrapper[33867]: I0219 03:24:05.422892 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-clndn" Feb 19 03:24:05.425677 master-0 kubenswrapper[33867]: I0219 03:24:05.423064 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q" Feb 19 03:24:05.425677 master-0 kubenswrapper[33867]: I0219 03:24:05.425491 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 19 03:24:05.425677 master-0 kubenswrapper[33867]: I0219 03:24:05.425522 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 19 03:24:05.425677 master-0 kubenswrapper[33867]: I0219 03:24:05.425534 33867 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 19 03:24:05.426229 master-0 kubenswrapper[33867]: I0219 03:24:05.425951 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:05.426229 master-0 kubenswrapper[33867]: I0219 03:24:05.425982 33867 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 19 03:24:05.426229 master-0 kubenswrapper[33867]: I0219 03:24:05.425990 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk" Feb 19 03:24:05.429444 master-0 kubenswrapper[33867]: I0219 03:24:05.429407 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm" Feb 19 03:24:05.439657 master-0 kubenswrapper[33867]: I0219 03:24:05.439629 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:05.444495 master-0 kubenswrapper[33867]: I0219 03:24:05.444448 33867 scope.go:117] "RemoveContainer" 
containerID="d18413342a722838be3aeba368600d701226af1bb0655a2558eb4a099c9c2796" Feb 19 03:24:05.451472 master-0 kubenswrapper[33867]: I0219 03:24:05.451432 33867 scope.go:117] "RemoveContainer" containerID="987763106eeabe88cbdd191d01e6f39059ee96a02ef736bbdbea66f4d5635935" Feb 19 03:24:05.454219 master-0 kubenswrapper[33867]: I0219 03:24:05.453804 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:05.457286 master-0 kubenswrapper[33867]: I0219 03:24:05.457206 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:05.457286 master-0 kubenswrapper[33867]: I0219 03:24:05.457280 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:05.461322 master-0 kubenswrapper[33867]: I0219 03:24:05.460636 33867 scope.go:117] "RemoveContainer" containerID="882c525babc52c3119968e9793962f24892225613582692392aa79601c39660e" Feb 19 03:24:05.461322 master-0 kubenswrapper[33867]: I0219 03:24:05.460697 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pw7dx" Feb 19 03:24:05.461322 master-0 kubenswrapper[33867]: I0219 03:24:05.460755 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:05.461322 master-0 kubenswrapper[33867]: I0219 03:24:05.460767 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:05.461322 master-0 kubenswrapper[33867]: I0219 03:24:05.460785 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:05.461322 master-0 kubenswrapper[33867]: I0219 03:24:05.460907 33867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:05.465854 master-0 kubenswrapper[33867]: I0219 03:24:05.464824 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:05.467374 master-0 kubenswrapper[33867]: I0219 03:24:05.467003 33867 scope.go:117] "RemoveContainer" containerID="10ad446c5ae8d63affc8eb0bacbb20232d6d1b38bc9bc64c6e6df2fe6d1b6cfd" Feb 19 03:24:05.468274 master-0 kubenswrapper[33867]: I0219 03:24:05.468197 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q" Feb 19 03:24:05.470817 master-0 kubenswrapper[33867]: I0219 03:24:05.470765 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fab5bbd-672c-4e18-9c1e-438e2360bc54-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:24:05.476099 master-0 kubenswrapper[33867]: I0219 03:24:05.476052 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:24:05.484522 master-0 kubenswrapper[33867]: I0219 03:24:05.484446 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:24:05.485812 master-0 kubenswrapper[33867]: I0219 03:24:05.485756 33867 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 19 03:24:05.485869 master-0 kubenswrapper[33867]: I0219 03:24:05.485839 33867 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 19 03:24:05.690926 master-0 kubenswrapper[33867]: I0219 03:24:05.690606 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/4.log" Feb 19 03:24:05.690926 master-0 kubenswrapper[33867]: I0219 03:24:05.690690 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerStarted","Data":"aeb20459425d6e56ff76fe8610d9b7bc296dc4bb77f829bd840562c9d7c854da"} Feb 19 03:24:05.711028 master-0 kubenswrapper[33867]: I0219 03:24:05.696844 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/4.log" Feb 19 03:24:05.711028 master-0 kubenswrapper[33867]: I0219 03:24:05.696909 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerStarted","Data":"8681ed603d322e009b6773267822dd68fda90284bc8b84a922c75c8001135b1e"} Feb 19 03:24:05.711028 master-0 kubenswrapper[33867]: I0219 03:24:05.699700 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-866f9_2b9d54aa-5f71-4a82-8e71-401ed3083a13/kube-storage-version-migrator-operator/3.log" Feb 19 03:24:05.711028 master-0 kubenswrapper[33867]: I0219 03:24:05.699774 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" event={"ID":"2b9d54aa-5f71-4a82-8e71-401ed3083a13","Type":"ContainerStarted","Data":"05984a2d8d506cf21317b4fba3c098505a7bf565afc81accb2002148e8d4e654"} Feb 19 03:24:05.711028 master-0 kubenswrapper[33867]: I0219 03:24:05.702569 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" event={"ID":"76470062-ab83-47ed-a669-deeb71996548","Type":"ContainerStarted","Data":"61fee059c9e2bd56d33dd50da5b9c16f889bf39375f48d7e6e6571b1bacdf905"} Feb 19 03:24:05.711028 master-0 kubenswrapper[33867]: I0219 03:24:05.704076 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/5.log" Feb 19 03:24:05.711028 master-0 kubenswrapper[33867]: I0219 03:24:05.704293 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerStarted","Data":"8479877be77adbc71e6fe6092f79a345e9de45377aef7f8623167410a6b8886d"} Feb 19 03:24:05.961120 master-0 kubenswrapper[33867]: I0219 03:24:05.961033 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Feb 19 03:24:05.967477 
master-0 kubenswrapper[33867]: W0219 03:24:05.967404 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1ba0c261_497c_4236_8f14_98ce5c16af59.slice/crio-d2f6bebf53bdfc6ad3d2abeb94830556bc84518d0ea9724bdf6282a713b33052 WatchSource:0}: Error finding container d2f6bebf53bdfc6ad3d2abeb94830556bc84518d0ea9724bdf6282a713b33052: Status 404 returned error can't find the container with id d2f6bebf53bdfc6ad3d2abeb94830556bc84518d0ea9724bdf6282a713b33052 Feb 19 03:24:06.462114 master-0 kubenswrapper[33867]: I0219 03:24:06.461991 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:06.466149 master-0 kubenswrapper[33867]: I0219 03:24:06.466048 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:06.466149 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:06.466149 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:06.466149 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:06.466482 master-0 kubenswrapper[33867]: I0219 03:24:06.466139 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:06.699152 master-0 kubenswrapper[33867]: I0219 03:24:06.699083 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=6.699065251 podStartE2EDuration="6.699065251s" podCreationTimestamp="2026-02-19 03:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:24:06.698769972 +0000 UTC m=+51.995440583" watchObservedRunningTime="2026-02-19 03:24:06.699065251 +0000 UTC m=+51.995735862" Feb 19 03:24:06.712751 master-0 kubenswrapper[33867]: I0219 03:24:06.712595 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"1ba0c261-497c-4236-8f14-98ce5c16af59","Type":"ContainerStarted","Data":"26b06eab1f94dd6261f000583e030e306cfda4b8f6001932aa21638d9dddc9ae"} Feb 19 03:24:06.712751 master-0 kubenswrapper[33867]: I0219 03:24:06.712706 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"1ba0c261-497c-4236-8f14-98ce5c16af59","Type":"ContainerStarted","Data":"d2f6bebf53bdfc6ad3d2abeb94830556bc84518d0ea9724bdf6282a713b33052"} Feb 19 03:24:06.714883 master-0 kubenswrapper[33867]: I0219 03:24:06.714843 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/4.log" Feb 19 03:24:06.715089 master-0 kubenswrapper[33867]: I0219 03:24:06.715038 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerStarted","Data":"60673fbdf86ad6e5cef387de140f8b0d61be98c49e3e44595c75133088b13a2a"} Feb 19 03:24:06.877463 
master-0 kubenswrapper[33867]: I0219 03:24:06.877240 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.877225521 podStartE2EDuration="1.877225521s" podCreationTimestamp="2026-02-19 03:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:24:06.875519343 +0000 UTC m=+52.172189954" watchObservedRunningTime="2026-02-19 03:24:06.877225521 +0000 UTC m=+52.173896122" Feb 19 03:24:07.252484 master-0 kubenswrapper[33867]: I0219 03:24:07.252420 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:07.464763 master-0 kubenswrapper[33867]: I0219 03:24:07.464685 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:07.464763 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:07.464763 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:07.464763 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:07.464763 master-0 kubenswrapper[33867]: I0219 03:24:07.464750 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:08.253488 master-0 kubenswrapper[33867]: I0219 03:24:08.253391 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:08.253488 master-0 kubenswrapper[33867]: I0219 03:24:08.253473 33867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:08.254613 master-0 kubenswrapper[33867]: I0219 03:24:08.254540 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:08.254718 master-0 kubenswrapper[33867]: I0219 03:24:08.254608 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:08.464271 master-0 kubenswrapper[33867]: I0219 
03:24:08.464198 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:08.464271 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:08.464271 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:08.464271 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:08.464752 master-0 kubenswrapper[33867]: I0219 03:24:08.464351 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:09.255086 master-0 kubenswrapper[33867]: I0219 03:24:09.254896 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:09.255086 master-0 kubenswrapper[33867]: I0219 03:24:09.255040 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:09.274805 master-0 kubenswrapper[33867]: I0219 03:24:09.274750 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk" Feb 19 03:24:09.464106 master-0 kubenswrapper[33867]: I0219 03:24:09.464005 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:09.464106 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:09.464106 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:09.464106 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:09.464106 master-0 kubenswrapper[33867]: I0219 03:24:09.464090 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:10.464542 master-0 kubenswrapper[33867]: I0219 03:24:10.463988 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:10.464542 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:10.464542 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:10.464542 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:10.464542 master-0 kubenswrapper[33867]: I0219 03:24:10.464064 33867 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:10.465536 master-0 kubenswrapper[33867]: I0219 03:24:10.465500 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:10.472214 master-0 kubenswrapper[33867]: I0219 03:24:10.472157 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-957b9456f-f5s8c" Feb 19 03:24:11.253932 master-0 kubenswrapper[33867]: I0219 03:24:11.253835 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:11.254332 master-0 kubenswrapper[33867]: I0219 03:24:11.253925 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:11.254332 master-0 kubenswrapper[33867]: I0219 03:24:11.254025 33867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:11.254332 master-0 kubenswrapper[33867]: I0219 03:24:11.253939 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:11.464027 master-0 kubenswrapper[33867]: I0219 03:24:11.463976 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:11.464027 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:11.464027 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:11.464027 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:11.464364 master-0 kubenswrapper[33867]: I0219 03:24:11.464044 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:11.584753 master-0 kubenswrapper[33867]: I0219 03:24:11.584597 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 
03:24:11.929425 master-0 kubenswrapper[33867]: I0219 03:24:11.929369 33867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 19 03:24:11.929795 master-0 kubenswrapper[33867]: I0219 03:24:11.929726 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor" containerID="cri-o://d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76" gracePeriod=5 Feb 19 03:24:12.464905 master-0 kubenswrapper[33867]: I0219 03:24:12.464835 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:12.464905 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:12.464905 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:12.464905 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:12.464905 master-0 kubenswrapper[33867]: I0219 03:24:12.464902 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:13.464600 master-0 kubenswrapper[33867]: I0219 03:24:13.464528 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:13.464600 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:13.464600 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:13.464600 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:13.465348 master-0 kubenswrapper[33867]: I0219 03:24:13.464612 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:14.252891 master-0 kubenswrapper[33867]: I0219 03:24:14.252804 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:14.252891 master-0 kubenswrapper[33867]: I0219 03:24:14.252864 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:14.253305 master-0 kubenswrapper[33867]: I0219 03:24:14.252912 33867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:14.253305 master-0 kubenswrapper[33867]: I0219 03:24:14.252946 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:14.253305 master-0 kubenswrapper[33867]: I0219 03:24:14.253006 33867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:14.254136 master-0 kubenswrapper[33867]: I0219 03:24:14.254065 33867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"49c7665c4d41e6363db83e2cfb07cabc6e73e095d070fa83da535345692dab7c"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 19 03:24:14.254360 master-0 kubenswrapper[33867]: I0219 03:24:14.254156 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" containerID="cri-o://49c7665c4d41e6363db83e2cfb07cabc6e73e095d070fa83da535345692dab7c" gracePeriod=30 Feb 19 03:24:14.285075 master-0 kubenswrapper[33867]: I0219 03:24:14.284976 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:56706->10.128.0.19:8443: read: connection reset by peer" start-of-body= Feb 19 03:24:14.285308 master-0 kubenswrapper[33867]: I0219 03:24:14.285077 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:56706->10.128.0.19:8443: read: connection reset by peer" Feb 19 03:24:14.318296 master-0 kubenswrapper[33867]: I0219 03:24:14.318194 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5t9dd" Feb 19 03:24:14.338072 master-0 kubenswrapper[33867]: I0219 03:24:14.338019 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v9c2b" Feb 19 03:24:14.463687 master-0 kubenswrapper[33867]: I0219 03:24:14.463588 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:14.463687 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:14.463687 master-0 kubenswrapper[33867]: [+]process-running 
ok Feb 19 03:24:14.463687 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:14.464068 master-0 kubenswrapper[33867]: I0219 03:24:14.463689 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:14.609385 master-0 kubenswrapper[33867]: I0219 03:24:14.609313 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nrcnx" Feb 19 03:24:14.783842 master-0 kubenswrapper[33867]: I0219 03:24:14.783715 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/7.log" Feb 19 03:24:14.784236 master-0 kubenswrapper[33867]: I0219 03:24:14.784196 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/6.log" Feb 19 03:24:14.784618 master-0 kubenswrapper[33867]: I0219 03:24:14.784582 33867 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerID="49c7665c4d41e6363db83e2cfb07cabc6e73e095d070fa83da535345692dab7c" exitCode=255 Feb 19 03:24:14.784864 master-0 kubenswrapper[33867]: I0219 03:24:14.784619 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerDied","Data":"49c7665c4d41e6363db83e2cfb07cabc6e73e095d070fa83da535345692dab7c"} Feb 19 03:24:14.784864 master-0 kubenswrapper[33867]: I0219 03:24:14.784655 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerStarted","Data":"40b1b8ea5ecf3acc2dc4769e949348825f71d700d4b8c2e818f22aa6152344af"} Feb 19 03:24:14.784864 master-0 kubenswrapper[33867]: I0219 03:24:14.784856 33867 scope.go:117] "RemoveContainer" containerID="92f46e7dc0dbfb5fb7a6786f646d184008d2d59c656dbe6e375ada74e2cfa239" Feb 19 03:24:14.785044 master-0 kubenswrapper[33867]: I0219 03:24:14.785014 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:15.223777 master-0 kubenswrapper[33867]: I0219 03:24:15.223722 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nqnbc" Feb 19 03:24:15.462374 master-0 kubenswrapper[33867]: I0219 03:24:15.462234 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:15.467282 master-0 kubenswrapper[33867]: I0219 03:24:15.467192 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:15.467282 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:15.467282 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:15.467282 master-0 kubenswrapper[33867]: healthz check failed Feb 19 
03:24:15.467535 master-0 kubenswrapper[33867]: I0219 03:24:15.467348 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:15.793704 master-0 kubenswrapper[33867]: I0219 03:24:15.793638 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/7.log" Feb 19 03:24:16.464574 master-0 kubenswrapper[33867]: I0219 03:24:16.464456 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:16.464574 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:16.464574 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:16.464574 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:16.464978 master-0 kubenswrapper[33867]: I0219 03:24:16.464595 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:17.089554 master-0 kubenswrapper[33867]: I0219 03:24:17.089500 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5c4f5d60772fa42f26e9c219bffa62b9/startup-monitor/0.log" Feb 19 03:24:17.090057 master-0 kubenswrapper[33867]: I0219 03:24:17.089597 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:24:17.162487 master-0 kubenswrapper[33867]: I0219 03:24:17.162436 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " Feb 19 03:24:17.162714 master-0 kubenswrapper[33867]: I0219 03:24:17.162527 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " Feb 19 03:24:17.162714 master-0 kubenswrapper[33867]: I0219 03:24:17.162563 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " Feb 19 03:24:17.162714 master-0 kubenswrapper[33867]: I0219 03:24:17.162594 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " Feb 19 03:24:17.162714 master-0 kubenswrapper[33867]: I0219 03:24:17.162656 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " Feb 19 03:24:17.163822 master-0 kubenswrapper[33867]: I0219 03:24:17.163775 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:24:17.163936 master-0 kubenswrapper[33867]: I0219 03:24:17.163832 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:24:17.167128 master-0 kubenswrapper[33867]: I0219 03:24:17.164679 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log" (OuterVolumeSpecName: "var-log") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:24:17.173969 master-0 kubenswrapper[33867]: I0219 03:24:17.173909 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests" (OuterVolumeSpecName: "manifests") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:24:17.178007 master-0 kubenswrapper[33867]: I0219 03:24:17.177927 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:24:17.264930 master-0 kubenswrapper[33867]: I0219 03:24:17.264858 33867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:24:17.264930 master-0 kubenswrapper[33867]: I0219 03:24:17.264914 33867 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") on node \"master-0\" DevicePath \"\"" Feb 19 03:24:17.264930 master-0 kubenswrapper[33867]: I0219 03:24:17.264926 33867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:24:17.264930 master-0 kubenswrapper[33867]: I0219 03:24:17.264939 33867 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") on node \"master-0\" DevicePath \"\"" Feb 19 03:24:17.265251 master-0 kubenswrapper[33867]: I0219 03:24:17.264950 33867 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:24:17.463980 master-0 kubenswrapper[33867]: I0219 03:24:17.463818 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:17.463980 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:17.463980 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:17.463980 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:17.463980 master-0 kubenswrapper[33867]: I0219 03:24:17.463895 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:17.813595 master-0 kubenswrapper[33867]: I0219 03:24:17.813457 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5c4f5d60772fa42f26e9c219bffa62b9/startup-monitor/0.log" Feb 19 03:24:17.813595 master-0 kubenswrapper[33867]: I0219 03:24:17.813517 33867 generic.go:334] "Generic (PLEG): container finished" podID="5c4f5d60772fa42f26e9c219bffa62b9" containerID="d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76" exitCode=137 Feb 19 03:24:17.813823 master-0 kubenswrapper[33867]: I0219 03:24:17.813600 33867 scope.go:117] "RemoveContainer" containerID="d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76" Feb 19 03:24:17.813823 master-0 
kubenswrapper[33867]: I0219 03:24:17.813624 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:24:17.829501 master-0 kubenswrapper[33867]: I0219 03:24:17.829471 33867 scope.go:117] "RemoveContainer" containerID="d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76" Feb 19 03:24:17.829985 master-0 kubenswrapper[33867]: E0219 03:24:17.829955 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76\": container with ID starting with d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76 not found: ID does not exist" containerID="d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76" Feb 19 03:24:17.830031 master-0 kubenswrapper[33867]: I0219 03:24:17.829988 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76"} err="failed to get container status \"d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76\": rpc error: code = NotFound desc = could not find container \"d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76\": container with ID starting with d7b443f06282b2fb6c1df006c38e55052829c560937b70e1f06d70abe77abb76 not found: ID does not exist" Feb 19 03:24:18.463490 master-0 kubenswrapper[33867]: I0219 03:24:18.463422 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:18.463490 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:18.463490 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:18.463490 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:18.464077 master-0 kubenswrapper[33867]: I0219 03:24:18.463489 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:18.964903 master-0 kubenswrapper[33867]: I0219 03:24:18.964844 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4f5d60772fa42f26e9c219bffa62b9" path="/var/lib/kubelet/pods/5c4f5d60772fa42f26e9c219bffa62b9/volumes" Feb 19 03:24:19.464551 master-0 kubenswrapper[33867]: I0219 03:24:19.464469 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:19.464551 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:19.464551 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:19.464551 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:19.465487 master-0 kubenswrapper[33867]: I0219 03:24:19.464580 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:20.253925 
master-0 kubenswrapper[33867]: I0219 03:24:20.253825 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:20.254196 master-0 kubenswrapper[33867]: I0219 03:24:20.253955 33867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:20.254401 master-0 kubenswrapper[33867]: I0219 03:24:20.253868 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:20.254629 master-0 kubenswrapper[33867]: I0219 03:24:20.254580 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:20.464928 master-0 kubenswrapper[33867]: I0219 03:24:20.464808 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:20.464928 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:20.464928 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:20.464928 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:20.466071 master-0 kubenswrapper[33867]: I0219 03:24:20.464956 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:21.465420 master-0 kubenswrapper[33867]: I0219 03:24:21.465354 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:21.465420 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:21.465420 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:21.465420 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:21.466838 master-0 kubenswrapper[33867]: I0219 03:24:21.466788 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 19 03:24:22.463924 master-0 kubenswrapper[33867]: I0219 03:24:22.463845 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:22.463924 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:22.463924 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:22.463924 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:22.464296 master-0 kubenswrapper[33867]: I0219 03:24:22.463948 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:23.254080 master-0 kubenswrapper[33867]: I0219 03:24:23.254017 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:23.254080 master-0 kubenswrapper[33867]: I0219 03:24:23.254051 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:23.254741 master-0 kubenswrapper[33867]: I0219 03:24:23.254083 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:23.254741 master-0 kubenswrapper[33867]: I0219 03:24:23.254111 33867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:23.463601 master-0 kubenswrapper[33867]: I0219 03:24:23.463552 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:23.463601 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:23.463601 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:23.463601 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:23.463907 master-0 kubenswrapper[33867]: I0219 03:24:23.463618 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" 
podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:23.866465 master-0 kubenswrapper[33867]: I0219 03:24:23.866333 33867 generic.go:334] "Generic (PLEG): container finished" podID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerID="eea9b0c6ce5430374ed8497b41ddc2add12c790b9231a25ef012e069c8a74ede" exitCode=0 Feb 19 03:24:23.866465 master-0 kubenswrapper[33867]: I0219 03:24:23.866434 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" event={"ID":"58c6f5a2-c0a8-4636-a057-cedbe0151579","Type":"ContainerDied","Data":"eea9b0c6ce5430374ed8497b41ddc2add12c790b9231a25ef012e069c8a74ede"} Feb 19 03:24:23.867081 master-0 kubenswrapper[33867]: I0219 03:24:23.866543 33867 scope.go:117] "RemoveContainer" containerID="eaa696773a18508c6c209d42ace51f1418a8f4dfe51b1543f829012e0cb65108" Feb 19 03:24:23.867992 master-0 kubenswrapper[33867]: I0219 03:24:23.867900 33867 scope.go:117] "RemoveContainer" containerID="eea9b0c6ce5430374ed8497b41ddc2add12c790b9231a25ef012e069c8a74ede" Feb 19 03:24:24.247142 master-0 kubenswrapper[33867]: I0219 03:24:24.247050 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:24.247142 master-0 kubenswrapper[33867]: I0219 03:24:24.247116 33867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:24.464194 master-0 kubenswrapper[33867]: I0219 03:24:24.463855 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:24.464194 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:24.464194 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:24.464194 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:24.464194 master-0 kubenswrapper[33867]: I0219 03:24:24.463935 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:24.876949 master-0 kubenswrapper[33867]: I0219 03:24:24.876896 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-6f5488b997-xxdh5_58c6f5a2-c0a8-4636-a057-cedbe0151579/marketplace-operator/3.log" Feb 19 03:24:24.879059 master-0 kubenswrapper[33867]: I0219 03:24:24.878966 33867 generic.go:334] "Generic (PLEG): container finished" podID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerID="165e68b9aa635fa44c856928a294980bd5100d9cd22dc5480cec4e2f00e4e5a8" exitCode=1 Feb 19 03:24:24.879059 master-0 kubenswrapper[33867]: I0219 03:24:24.879052 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" event={"ID":"58c6f5a2-c0a8-4636-a057-cedbe0151579","Type":"ContainerDied","Data":"165e68b9aa635fa44c856928a294980bd5100d9cd22dc5480cec4e2f00e4e5a8"} Feb 19 03:24:24.879365 master-0 kubenswrapper[33867]: I0219 03:24:24.879098 33867 scope.go:117] "RemoveContainer" containerID="eea9b0c6ce5430374ed8497b41ddc2add12c790b9231a25ef012e069c8a74ede" Feb 19 
03:24:24.879697 master-0 kubenswrapper[33867]: I0219 03:24:24.879634 33867 scope.go:117] "RemoveContainer" containerID="165e68b9aa635fa44c856928a294980bd5100d9cd22dc5480cec4e2f00e4e5a8" Feb 19 03:24:24.880023 master-0 kubenswrapper[33867]: E0219 03:24:24.879947 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-6f5488b997-xxdh5_openshift-marketplace(58c6f5a2-c0a8-4636-a057-cedbe0151579)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" Feb 19 03:24:24.893888 master-0 kubenswrapper[33867]: I0219 03:24:24.893798 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:24:25.465677 master-0 kubenswrapper[33867]: I0219 03:24:25.465343 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:25.465677 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:25.465677 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:25.465677 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:25.465677 master-0 kubenswrapper[33867]: I0219 03:24:25.465509 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:25.886842 master-0 kubenswrapper[33867]: I0219 03:24:25.886789 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-6f5488b997-xxdh5_58c6f5a2-c0a8-4636-a057-cedbe0151579/marketplace-operator/3.log" Feb 19 03:24:25.887721 master-0 kubenswrapper[33867]: I0219 03:24:25.887667 33867 scope.go:117] "RemoveContainer" containerID="165e68b9aa635fa44c856928a294980bd5100d9cd22dc5480cec4e2f00e4e5a8" Feb 19 03:24:25.888224 master-0 kubenswrapper[33867]: E0219 03:24:25.888170 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-6f5488b997-xxdh5_openshift-marketplace(58c6f5a2-c0a8-4636-a057-cedbe0151579)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" Feb 19 03:24:26.254591 master-0 kubenswrapper[33867]: I0219 03:24:26.254513 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:26.254938 master-0 kubenswrapper[33867]: I0219 03:24:26.254615 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:26.254938 master-0 kubenswrapper[33867]: I0219 03:24:26.254701 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:24:26.254938 master-0 kubenswrapper[33867]: I0219 03:24:26.254884 33867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:24:26.255341 master-0 kubenswrapper[33867]: I0219 03:24:26.254988 33867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:26.256266 master-0 kubenswrapper[33867]: I0219 03:24:26.256160 33867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"40b1b8ea5ecf3acc2dc4769e949348825f71d700d4b8c2e818f22aa6152344af"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 19 03:24:26.256433 master-0 kubenswrapper[33867]: I0219 03:24:26.256295 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" containerID="cri-o://40b1b8ea5ecf3acc2dc4769e949348825f71d700d4b8c2e818f22aa6152344af" gracePeriod=30 Feb 19 03:24:26.267479 master-0 kubenswrapper[33867]: I0219 03:24:26.267417 33867 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-zn8c7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:52416->10.128.0.19:8443: read: connection reset by peer" start-of-body= Feb 19 03:24:26.267612 master-0 kubenswrapper[33867]: I0219 03:24:26.267489 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:52416->10.128.0.19:8443: read: connection reset by peer" Feb 19 03:24:26.372753 master-0 kubenswrapper[33867]: E0219 03:24:26.372704 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-zn8c7_openshift-config-operator(78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" Feb 19 
03:24:26.463589 master-0 kubenswrapper[33867]: I0219 03:24:26.463507 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:26.463589 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:26.463589 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:26.463589 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:26.463589 master-0 kubenswrapper[33867]: I0219 03:24:26.463585 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:26.895037 master-0 kubenswrapper[33867]: I0219 03:24:26.894972 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/8.log" Feb 19 03:24:26.895839 master-0 kubenswrapper[33867]: I0219 03:24:26.895548 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/7.log" Feb 19 03:24:26.896065 master-0 kubenswrapper[33867]: I0219 03:24:26.896009 33867 generic.go:334] "Generic (PLEG): container finished" podID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" containerID="40b1b8ea5ecf3acc2dc4769e949348825f71d700d4b8c2e818f22aa6152344af" exitCode=255 Feb 19 03:24:26.896212 master-0 kubenswrapper[33867]: I0219 03:24:26.896166 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerDied","Data":"40b1b8ea5ecf3acc2dc4769e949348825f71d700d4b8c2e818f22aa6152344af"} Feb 19 03:24:26.896322 master-0 kubenswrapper[33867]: I0219 03:24:26.896242 33867 scope.go:117] "RemoveContainer" containerID="49c7665c4d41e6363db83e2cfb07cabc6e73e095d070fa83da535345692dab7c" Feb 19 03:24:26.897002 master-0 kubenswrapper[33867]: I0219 03:24:26.896967 33867 scope.go:117] "RemoveContainer" containerID="40b1b8ea5ecf3acc2dc4769e949348825f71d700d4b8c2e818f22aa6152344af" Feb 19 03:24:26.899273 master-0 kubenswrapper[33867]: E0219 03:24:26.899191 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-zn8c7_openshift-config-operator(78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" Feb 19 03:24:27.464733 master-0 kubenswrapper[33867]: I0219 03:24:27.464672 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:27.464733 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:27.464733 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:27.464733 master-0 kubenswrapper[33867]: healthz check 
failed Feb 19 03:24:27.464733 master-0 kubenswrapper[33867]: I0219 03:24:27.464744 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:27.906927 master-0 kubenswrapper[33867]: I0219 03:24:27.906798 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/8.log" Feb 19 03:24:28.467718 master-0 kubenswrapper[33867]: I0219 03:24:28.467614 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:28.467718 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:28.467718 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:28.467718 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:28.468065 master-0 kubenswrapper[33867]: I0219 03:24:28.467727 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:29.464782 master-0 kubenswrapper[33867]: I0219 03:24:29.464689 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:29.464782 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:29.464782 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:29.464782 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:29.465770 master-0 kubenswrapper[33867]: I0219 03:24:29.464788 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:30.464222 master-0 kubenswrapper[33867]: I0219 03:24:30.464143 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:30.464222 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:30.464222 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:30.464222 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:30.465733 master-0 kubenswrapper[33867]: I0219 03:24:30.464222 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:31.464656 master-0 kubenswrapper[33867]: I0219 03:24:31.464534 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:31.464656 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:31.464656 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:31.464656 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:31.464656 master-0 kubenswrapper[33867]: I0219 03:24:31.464650 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:32.465692 master-0 kubenswrapper[33867]: I0219 03:24:32.465584 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:32.465692 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:32.465692 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:32.465692 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:32.466693 master-0 kubenswrapper[33867]: I0219 03:24:32.465707 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:33.464713 master-0 kubenswrapper[33867]: I0219 03:24:33.464614 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:33.464713 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:33.464713 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:33.464713 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:33.464713 master-0 kubenswrapper[33867]: I0219 03:24:33.464692 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:34.246669 master-0 kubenswrapper[33867]: I0219 03:24:34.246568 33867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:34.246669 master-0 kubenswrapper[33867]: I0219 03:24:34.246686 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:34.247608 master-0 kubenswrapper[33867]: I0219 03:24:34.247518 33867 scope.go:117] "RemoveContainer" containerID="165e68b9aa635fa44c856928a294980bd5100d9cd22dc5480cec4e2f00e4e5a8" Feb 19 03:24:34.463233 master-0 kubenswrapper[33867]: I0219 03:24:34.463187 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:34.463233 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:34.463233 master-0 kubenswrapper[33867]: 
[+]process-running ok Feb 19 03:24:34.463233 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:34.463577 master-0 kubenswrapper[33867]: I0219 03:24:34.463243 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:35.557403 master-0 kubenswrapper[33867]: I0219 03:24:35.557243 33867 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-r7r6p container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": read tcp 10.128.0.2:53592->10.128.0.22:8443: read: connection reset by peer" start-of-body= Feb 19 03:24:35.558193 master-0 kubenswrapper[33867]: I0219 03:24:35.557438 33867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" podUID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": read tcp 10.128.0.2:53592->10.128.0.22:8443: read: connection reset by peer" Feb 19 03:24:35.599454 master-0 kubenswrapper[33867]: I0219 03:24:35.590369 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:35.599454 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:35.599454 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:35.599454 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:35.599454 master-0 kubenswrapper[33867]: I0219 03:24:35.590445 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:35.607618 master-0 kubenswrapper[33867]: I0219 03:24:35.607575 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-6f5488b997-xxdh5_58c6f5a2-c0a8-4636-a057-cedbe0151579/marketplace-operator/3.log" Feb 19 03:24:35.618127 master-0 kubenswrapper[33867]: I0219 03:24:35.618054 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" event={"ID":"58c6f5a2-c0a8-4636-a057-cedbe0151579","Type":"ContainerStarted","Data":"afd2245287eaa516d3998b9d711e72c848c4621bebc6ab7354b1c3a2aaf8ecae"} Feb 19 03:24:35.619337 master-0 kubenswrapper[33867]: I0219 03:24:35.618769 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:35.619460 master-0 kubenswrapper[33867]: I0219 03:24:35.619419 33867 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-xxdh5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 19 03:24:35.619525 master-0 kubenswrapper[33867]: I0219 03:24:35.619474 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" 
podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 19 03:24:35.751112 master-0 kubenswrapper[33867]: E0219 03:24:35.751058 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8f325fb_0075_4a18_ba7e_669ab19bc91a.slice/crio-conmon-8681ed603d322e009b6773267822dd68fda90284bc8b84a922c75c8001135b1e.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:24:36.463657 master-0 kubenswrapper[33867]: I0219 03:24:36.463559 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:36.463657 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:36.463657 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:36.463657 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:36.464541 master-0 kubenswrapper[33867]: I0219 03:24:36.464463 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:36.620552 master-0 kubenswrapper[33867]: I0219 03:24:36.620364 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/5.log" Feb 19 03:24:36.621676 master-0 kubenswrapper[33867]: I0219 03:24:36.621138 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/4.log" Feb 19 03:24:36.621676 master-0 kubenswrapper[33867]: I0219 03:24:36.621205 33867 generic.go:334] "Generic (PLEG): container finished" podID="3edc7410-417a-4e55-9276-ac271fd52297" containerID="aeb20459425d6e56ff76fe8610d9b7bc296dc4bb77f829bd840562c9d7c854da" exitCode=255 Feb 19 03:24:36.621676 master-0 kubenswrapper[33867]: I0219 03:24:36.621319 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerDied","Data":"aeb20459425d6e56ff76fe8610d9b7bc296dc4bb77f829bd840562c9d7c854da"} Feb 19 03:24:36.621676 master-0 kubenswrapper[33867]: I0219 03:24:36.621370 33867 scope.go:117] "RemoveContainer" containerID="19a1f28fd6894887f54799dd664b3153aee457ecc2c8aab80e319ccb1bdbf8a2" Feb 19 03:24:36.622209 master-0 kubenswrapper[33867]: I0219 03:24:36.622167 33867 scope.go:117] "RemoveContainer" containerID="aeb20459425d6e56ff76fe8610d9b7bc296dc4bb77f829bd840562c9d7c854da" Feb 19 03:24:36.622884 master-0 kubenswrapper[33867]: E0219 03:24:36.622446 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=service-ca-operator pod=service-ca-operator-c48c8bf7c-f7fvc_openshift-service-ca-operator(3edc7410-417a-4e55-9276-ac271fd52297)\"" 
pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" podUID="3edc7410-417a-4e55-9276-ac271fd52297" Feb 19 03:24:36.629096 master-0 kubenswrapper[33867]: I0219 03:24:36.628877 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/5.log" Feb 19 03:24:36.629945 master-0 kubenswrapper[33867]: I0219 03:24:36.629899 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/4.log" Feb 19 03:24:36.630031 master-0 kubenswrapper[33867]: I0219 03:24:36.629973 33867 generic.go:334] "Generic (PLEG): container finished" podID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" containerID="8681ed603d322e009b6773267822dd68fda90284bc8b84a922c75c8001135b1e" exitCode=1 Feb 19 03:24:36.630107 master-0 kubenswrapper[33867]: I0219 03:24:36.630070 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerDied","Data":"8681ed603d322e009b6773267822dd68fda90284bc8b84a922c75c8001135b1e"} Feb 19 03:24:36.630903 master-0 kubenswrapper[33867]: I0219 03:24:36.630844 33867 scope.go:117] "RemoveContainer" containerID="8681ed603d322e009b6773267822dd68fda90284bc8b84a922c75c8001135b1e" Feb 19 03:24:36.631228 master-0 kubenswrapper[33867]: E0219 03:24:36.631176 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" podUID="c8f325fb-0075-4a18-ba7e-669ab19bc91a" Feb 19 03:24:36.635161 master-0 kubenswrapper[33867]: I0219 03:24:36.635099 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-6f5488b997-xxdh5_58c6f5a2-c0a8-4636-a057-cedbe0151579/marketplace-operator/4.log" Feb 19 03:24:36.635824 master-0 kubenswrapper[33867]: I0219 03:24:36.635781 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-6f5488b997-xxdh5_58c6f5a2-c0a8-4636-a057-cedbe0151579/marketplace-operator/3.log" Feb 19 03:24:36.635925 master-0 kubenswrapper[33867]: I0219 03:24:36.635835 33867 generic.go:334] "Generic (PLEG): container finished" podID="58c6f5a2-c0a8-4636-a057-cedbe0151579" containerID="afd2245287eaa516d3998b9d711e72c848c4621bebc6ab7354b1c3a2aaf8ecae" exitCode=1 Feb 19 03:24:36.635984 master-0 kubenswrapper[33867]: I0219 03:24:36.635929 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" event={"ID":"58c6f5a2-c0a8-4636-a057-cedbe0151579","Type":"ContainerDied","Data":"afd2245287eaa516d3998b9d711e72c848c4621bebc6ab7354b1c3a2aaf8ecae"} Feb 19 03:24:36.636749 master-0 kubenswrapper[33867]: I0219 03:24:36.636687 33867 scope.go:117] "RemoveContainer" containerID="afd2245287eaa516d3998b9d711e72c848c4621bebc6ab7354b1c3a2aaf8ecae" Feb 19 03:24:36.637157 master-0 kubenswrapper[33867]: E0219 03:24:36.637094 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-6f5488b997-xxdh5_openshift-marketplace(58c6f5a2-c0a8-4636-a057-cedbe0151579)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" Feb 19 03:24:36.638823 master-0 kubenswrapper[33867]: I0219 03:24:36.638750 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-866f9_2b9d54aa-5f71-4a82-8e71-401ed3083a13/kube-storage-version-migrator-operator/4.log" Feb 19 03:24:36.639591 master-0 kubenswrapper[33867]: I0219 03:24:36.639534 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-866f9_2b9d54aa-5f71-4a82-8e71-401ed3083a13/kube-storage-version-migrator-operator/3.log" Feb 19 03:24:36.639695 master-0 kubenswrapper[33867]: I0219 03:24:36.639641 33867 generic.go:334] "Generic (PLEG): container finished" podID="2b9d54aa-5f71-4a82-8e71-401ed3083a13" containerID="05984a2d8d506cf21317b4fba3c098505a7bf565afc81accb2002148e8d4e654" exitCode=255 Feb 19 03:24:36.639837 master-0 kubenswrapper[33867]: I0219 03:24:36.639774 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" event={"ID":"2b9d54aa-5f71-4a82-8e71-401ed3083a13","Type":"ContainerDied","Data":"05984a2d8d506cf21317b4fba3c098505a7bf565afc81accb2002148e8d4e654"} Feb 19 03:24:36.640580 master-0 kubenswrapper[33867]: I0219 03:24:36.640524 33867 scope.go:117] "RemoveContainer" containerID="05984a2d8d506cf21317b4fba3c098505a7bf565afc81accb2002148e8d4e654" Feb 19 03:24:36.640952 master-0 kubenswrapper[33867]: E0219 03:24:36.640896 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-fc889cfd5-866f9_openshift-kube-storage-version-migrator-operator(2b9d54aa-5f71-4a82-8e71-401ed3083a13)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" podUID="2b9d54aa-5f71-4a82-8e71-401ed3083a13" Feb 19 03:24:36.643575 master-0 kubenswrapper[33867]: I0219 03:24:36.643519 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/5.log" Feb 19 03:24:36.644462 master-0 kubenswrapper[33867]: I0219 03:24:36.644408 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/4.log" Feb 19 03:24:36.644532 master-0 kubenswrapper[33867]: I0219 03:24:36.644504 33867 generic.go:334] "Generic (PLEG): container finished" podID="4714ef51-2d24-4938-8c58-80c1485a368b" containerID="60673fbdf86ad6e5cef387de140f8b0d61be98c49e3e44595c75133088b13a2a" exitCode=255 Feb 19 03:24:36.644733 master-0 kubenswrapper[33867]: I0219 03:24:36.644627 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" 
event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerDied","Data":"60673fbdf86ad6e5cef387de140f8b0d61be98c49e3e44595c75133088b13a2a"} Feb 19 03:24:36.645415 master-0 kubenswrapper[33867]: I0219 03:24:36.645374 33867 scope.go:117] "RemoveContainer" containerID="60673fbdf86ad6e5cef387de140f8b0d61be98c49e3e44595c75133088b13a2a" Feb 19 03:24:36.645770 master-0 kubenswrapper[33867]: E0219 03:24:36.645724 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-5d87bf58c-lbfvq_openshift-kube-apiserver-operator(4714ef51-2d24-4938-8c58-80c1485a368b)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" podUID="4714ef51-2d24-4938-8c58-80c1485a368b" Feb 19 03:24:36.652706 master-0 kubenswrapper[33867]: I0219 03:24:36.652637 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/6.log" Feb 19 03:24:36.653623 master-0 kubenswrapper[33867]: I0219 03:24:36.653538 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/5.log" Feb 19 03:24:36.653729 master-0 kubenswrapper[33867]: I0219 03:24:36.653650 33867 generic.go:334] "Generic (PLEG): container finished" podID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" containerID="8479877be77adbc71e6fe6092f79a345e9de45377aef7f8623167410a6b8886d" exitCode=255 Feb 19 03:24:36.653867 master-0 kubenswrapper[33867]: I0219 03:24:36.653730 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerDied","Data":"8479877be77adbc71e6fe6092f79a345e9de45377aef7f8623167410a6b8886d"} Feb 19 03:24:36.654877 master-0 kubenswrapper[33867]: I0219 03:24:36.654824 33867 scope.go:117] "RemoveContainer" containerID="8479877be77adbc71e6fe6092f79a345e9de45377aef7f8623167410a6b8886d" Feb 19 03:24:36.655304 master-0 kubenswrapper[33867]: E0219 03:24:36.655233 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd-operator pod=etcd-operator-545bf96f4d-r7r6p_openshift-etcd-operator(4c3267e5-390a-40a3-bff8-1d1d81fb9a17)\"" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" podUID="4c3267e5-390a-40a3-bff8-1d1d81fb9a17" Feb 19 03:24:36.684527 master-0 kubenswrapper[33867]: I0219 03:24:36.684475 33867 scope.go:117] "RemoveContainer" containerID="7451979a94f80aee54e0563ac7f58d005b0131fa01c9b6d07669dbdfc4734cf2" Feb 19 03:24:36.729649 master-0 kubenswrapper[33867]: I0219 03:24:36.729607 33867 scope.go:117] "RemoveContainer" containerID="165e68b9aa635fa44c856928a294980bd5100d9cd22dc5480cec4e2f00e4e5a8" Feb 19 03:24:36.769171 master-0 kubenswrapper[33867]: I0219 03:24:36.769123 33867 scope.go:117] "RemoveContainer" containerID="e103e135bf82f2eb93c3dbb2b40a81ffeb2314273026f2e9a0c0e8f111555646" Feb 19 03:24:36.807524 master-0 kubenswrapper[33867]: I0219 03:24:36.807475 33867 scope.go:117] "RemoveContainer" containerID="987763106eeabe88cbdd191d01e6f39059ee96a02ef736bbdbea66f4d5635935" Feb 19 03:24:36.844131 master-0 kubenswrapper[33867]: I0219 03:24:36.843989 33867 scope.go:117] "RemoveContainer" 
containerID="028495f0aee3ee18d27a6df8f41026b434ac3c3d335cf96c6e2e88bafe3758a1" Feb 19 03:24:37.463684 master-0 kubenswrapper[33867]: I0219 03:24:37.463585 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:37.463684 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:37.463684 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:37.463684 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:37.463684 master-0 kubenswrapper[33867]: I0219 03:24:37.463671 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:37.663237 master-0 kubenswrapper[33867]: I0219 03:24:37.663204 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/5.log" Feb 19 03:24:37.666010 master-0 kubenswrapper[33867]: I0219 03:24:37.665978 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/5.log" Feb 19 03:24:37.667753 master-0 kubenswrapper[33867]: I0219 03:24:37.667727 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/6.log" Feb 19 03:24:37.669988 master-0 kubenswrapper[33867]: I0219 03:24:37.669948 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/5.log" Feb 19 03:24:37.672427 master-0 kubenswrapper[33867]: I0219 03:24:37.672378 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-6f5488b997-xxdh5_58c6f5a2-c0a8-4636-a057-cedbe0151579/marketplace-operator/4.log" Feb 19 03:24:37.672858 master-0 kubenswrapper[33867]: I0219 03:24:37.672829 33867 scope.go:117] "RemoveContainer" containerID="afd2245287eaa516d3998b9d711e72c848c4621bebc6ab7354b1c3a2aaf8ecae" Feb 19 03:24:37.673082 master-0 kubenswrapper[33867]: E0219 03:24:37.673049 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-6f5488b997-xxdh5_openshift-marketplace(58c6f5a2-c0a8-4636-a057-cedbe0151579)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" Feb 19 03:24:37.677021 master-0 kubenswrapper[33867]: I0219 03:24:37.676978 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-866f9_2b9d54aa-5f71-4a82-8e71-401ed3083a13/kube-storage-version-migrator-operator/4.log" Feb 19 03:24:38.463480 master-0 kubenswrapper[33867]: I0219 03:24:38.463412 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:38.463480 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:38.463480 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:38.463480 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:38.463929 master-0 kubenswrapper[33867]: I0219 03:24:38.463500 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:39.464820 master-0 kubenswrapper[33867]: I0219 03:24:39.464681 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:39.464820 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:39.464820 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:39.464820 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:39.465876 master-0 kubenswrapper[33867]: I0219 03:24:39.464886 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:39.955623 master-0 kubenswrapper[33867]: I0219 03:24:39.955553 33867 scope.go:117] "RemoveContainer" containerID="40b1b8ea5ecf3acc2dc4769e949348825f71d700d4b8c2e818f22aa6152344af" Feb 19 03:24:39.956144 master-0 kubenswrapper[33867]: E0219 03:24:39.955948 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-zn8c7_openshift-config-operator(78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" podUID="78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda" Feb 19 03:24:40.464053 master-0 kubenswrapper[33867]: I0219 03:24:40.463981 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:40.464053 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:40.464053 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:40.464053 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:40.464353 master-0 kubenswrapper[33867]: I0219 03:24:40.464074 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:41.464284 master-0 kubenswrapper[33867]: I0219 03:24:41.464146 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 
03:24:41.464284 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:41.464284 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:41.464284 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:41.465546 master-0 kubenswrapper[33867]: I0219 03:24:41.464291 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:42.465011 master-0 kubenswrapper[33867]: I0219 03:24:42.464896 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:42.465011 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:42.465011 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:42.465011 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:42.465011 master-0 kubenswrapper[33867]: I0219 03:24:42.464982 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:43.464293 master-0 kubenswrapper[33867]: I0219 03:24:43.464195 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:43.464293 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:43.464293 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:43.464293 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:43.464598 master-0 kubenswrapper[33867]: I0219 03:24:43.464414 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:44.247318 master-0 kubenswrapper[33867]: I0219 03:24:44.247210 33867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:44.248406 master-0 kubenswrapper[33867]: I0219 03:24:44.248248 33867 scope.go:117] "RemoveContainer" containerID="afd2245287eaa516d3998b9d711e72c848c4621bebc6ab7354b1c3a2aaf8ecae" Feb 19 03:24:44.248982 master-0 kubenswrapper[33867]: E0219 03:24:44.248714 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-6f5488b997-xxdh5_openshift-marketplace(58c6f5a2-c0a8-4636-a057-cedbe0151579)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" podUID="58c6f5a2-c0a8-4636-a057-cedbe0151579" Feb 19 03:24:44.464095 master-0 kubenswrapper[33867]: I0219 03:24:44.464028 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:44.464095 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:44.464095 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:44.464095 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:44.464527 master-0 kubenswrapper[33867]: I0219 03:24:44.464111 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:45.464283 master-0 kubenswrapper[33867]: I0219 03:24:45.464179 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:45.464283 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:45.464283 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:45.464283 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:45.464867 master-0 kubenswrapper[33867]: I0219 03:24:45.464336 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:46.465065 master-0 kubenswrapper[33867]: I0219 03:24:46.464844 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:46.465065 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:46.465065 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:46.465065 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:46.465065 master-0 kubenswrapper[33867]: I0219 03:24:46.464948 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:47.464297 master-0 kubenswrapper[33867]: I0219 03:24:47.464204 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:47.464297 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:47.464297 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:47.464297 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:47.464600 master-0 kubenswrapper[33867]: I0219 03:24:47.464350 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:47.955830 master-0 kubenswrapper[33867]: I0219 03:24:47.955474 33867 scope.go:117] "RemoveContainer" containerID="60673fbdf86ad6e5cef387de140f8b0d61be98c49e3e44595c75133088b13a2a" Feb 19 03:24:48.079468 master-0 
kubenswrapper[33867]: I0219 03:24:48.079402 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Feb 19 03:24:48.079731 master-0 kubenswrapper[33867]: E0219 03:24:48.079701 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2561caa0-5f79-496e-8fa7-a9692dca20be" containerName="installer" Feb 19 03:24:48.079731 master-0 kubenswrapper[33867]: I0219 03:24:48.079727 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2561caa0-5f79-496e-8fa7-a9692dca20be" containerName="installer" Feb 19 03:24:48.079833 master-0 kubenswrapper[33867]: E0219 03:24:48.079781 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 19 03:24:48.079833 master-0 kubenswrapper[33867]: I0219 03:24:48.079792 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 19 03:24:48.079833 master-0 kubenswrapper[33867]: E0219 03:24:48.079806 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" containerName="installer" Feb 19 03:24:48.079833 master-0 kubenswrapper[33867]: I0219 03:24:48.079817 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" containerName="installer" Feb 19 03:24:48.079833 master-0 kubenswrapper[33867]: E0219 03:24:48.079837 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402778fb-ac93-4d3a-bc4e-7416c49a4061" containerName="installer" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: I0219 03:24:48.079845 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="402778fb-ac93-4d3a-bc4e-7416c49a4061" containerName="installer" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: E0219 03:24:48.079866 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e08a5432-b9f1-4b15-84c4-df9d6276a414" containerName="collect-profiles" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: I0219 03:24:48.079874 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e08a5432-b9f1-4b15-84c4-df9d6276a414" containerName="collect-profiles" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: E0219 03:24:48.079883 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b05aeb-22a8-4008-a582-072f63cc46bf" containerName="installer" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: I0219 03:24:48.079890 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b05aeb-22a8-4008-a582-072f63cc46bf" containerName="installer" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: E0219 03:24:48.079906 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: I0219 03:24:48.079914 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: E0219 03:24:48.079925 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: I0219 03:24:48.079932 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 19 03:24:48.080067 
master-0 kubenswrapper[33867]: E0219 03:24:48.079947 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f3b8a5-a045-4023-80f8-0d4d297102ab" containerName="installer" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: I0219 03:24:48.079955 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f3b8a5-a045-4023-80f8-0d4d297102ab" containerName="installer" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: E0219 03:24:48.079972 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" containerName="installer" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: I0219 03:24:48.079980 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" containerName="installer" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: E0219 03:24:48.079998 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fab5bbd-672c-4e18-9c1e-438e2360bc54" containerName="installer" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: I0219 03:24:48.080006 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fab5bbd-672c-4e18-9c1e-438e2360bc54" containerName="installer" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: E0219 03:24:48.080020 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerName="assisted-installer-controller" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: I0219 03:24:48.080049 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerName="assisted-installer-controller" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: E0219 03:24:48.080072 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d9bbbb-77bd-4978-9f37-d3c54b780fbf" containerName="installer" Feb 19 03:24:48.080067 master-0 kubenswrapper[33867]: I0219 03:24:48.080081 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d9bbbb-77bd-4978-9f37-d3c54b780fbf" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: E0219 03:24:48.080098 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bddb3a1-41bd-4314-bfb0-3c72ca14200f" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080106 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bddb3a1-41bd-4314-bfb0-3c72ca14200f" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: E0219 03:24:48.080121 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080129 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: E0219 03:24:48.080142 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080149 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080326 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2561caa0-5f79-496e-8fa7-a9692dca20be" containerName="installer" Feb 19 03:24:48.080723 master-0 
kubenswrapper[33867]: I0219 03:24:48.080381 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="402778fb-ac93-4d3a-bc4e-7416c49a4061" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080403 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e244dcb-df20-4a7c-bc0a-14ba63c54a9f" containerName="assisted-installer-controller" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080421 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bddb3a1-41bd-4314-bfb0-3c72ca14200f" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080437 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fab5bbd-672c-4e18-9c1e-438e2360bc54" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080455 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f3b8a5-a045-4023-80f8-0d4d297102ab" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080464 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080485 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080504 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080515 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e76869b6-0f20-4ed7-a35c-9e4ba1fe58f5" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080529 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080538 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b05aeb-22a8-4008-a582-072f63cc46bf" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080556 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080572 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e08a5432-b9f1-4b15-84c4-df9d6276a414" containerName="collect-profiles" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080583 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3" containerName="installer" Feb 19 03:24:48.080723 master-0 kubenswrapper[33867]: I0219 03:24:48.080592 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d9bbbb-77bd-4978-9f37-d3c54b780fbf" containerName="installer" Feb 19 03:24:48.081378 master-0 kubenswrapper[33867]: I0219 03:24:48.081079 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:24:48.086414 master-0 kubenswrapper[33867]: I0219 03:24:48.086365 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 03:24:48.086591 master-0 kubenswrapper[33867]: I0219 03:24:48.086553 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dcb4l" Feb 19 03:24:48.263943 master-0 kubenswrapper[33867]: I0219 03:24:48.263693 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c569efe9-6db4-4082-8be0-4391ab4a88a8-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:24:48.263943 master-0 kubenswrapper[33867]: I0219 03:24:48.263900 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:24:48.264340 master-0 kubenswrapper[33867]: I0219 03:24:48.263986 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:24:48.364736 master-0 kubenswrapper[33867]: I0219 03:24:48.364664 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:24:48.364736 master-0 kubenswrapper[33867]: I0219 03:24:48.364736 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:24:48.364973 master-0 kubenswrapper[33867]: I0219 03:24:48.364803 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c569efe9-6db4-4082-8be0-4391ab4a88a8-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:24:48.365008 master-0 kubenswrapper[33867]: I0219 03:24:48.364948 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:24:48.365198 master-0 
kubenswrapper[33867]: I0219 03:24:48.365155 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:24:48.378647 master-0 kubenswrapper[33867]: I0219 03:24:48.378410 33867 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 03:24:48.383721 master-0 kubenswrapper[33867]: I0219 03:24:48.383696 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c569efe9-6db4-4082-8be0-4391ab4a88a8-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:24:48.403406 master-0 kubenswrapper[33867]: I0219 03:24:48.403354 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:24:48.491344 master-0 kubenswrapper[33867]: I0219 03:24:48.490534 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:48.491344 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:48.491344 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:48.491344 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:48.491344 master-0 kubenswrapper[33867]: I0219 03:24:48.490599 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:48.523164 master-0 kubenswrapper[33867]: I0219 03:24:48.523040 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Feb 19 03:24:48.772119 master-0 kubenswrapper[33867]: I0219 03:24:48.772080 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/5.log" Feb 19 03:24:48.772344 master-0 kubenswrapper[33867]: I0219 03:24:48.772147 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq" event={"ID":"4714ef51-2d24-4938-8c58-80c1485a368b","Type":"ContainerStarted","Data":"70ebf408e853484cb525463cefa2c8a8f275b664202b88d9683d5bb2fc21f13d"} Feb 19 03:24:48.956071 master-0 kubenswrapper[33867]: I0219 03:24:48.955997 33867 scope.go:117] "RemoveContainer" containerID="05984a2d8d506cf21317b4fba3c098505a7bf565afc81accb2002148e8d4e654" Feb 19 03:24:49.050673 master-0 kubenswrapper[33867]: I0219 03:24:49.050510 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Feb 19 03:24:49.463735 master-0 kubenswrapper[33867]: I0219 03:24:49.463639 33867 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:49.463735 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:49.463735 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:49.463735 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:49.463735 master-0 kubenswrapper[33867]: I0219 03:24:49.463748 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:49.788668 master-0 kubenswrapper[33867]: I0219 03:24:49.788569 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"c569efe9-6db4-4082-8be0-4391ab4a88a8","Type":"ContainerStarted","Data":"e5774f3d03332c20a101a37ad4ef6d8dc75f134cf3324eae2691a83c47039693"} Feb 19 03:24:49.788668 master-0 kubenswrapper[33867]: I0219 03:24:49.788675 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"c569efe9-6db4-4082-8be0-4391ab4a88a8","Type":"ContainerStarted","Data":"dc0602e36f88751d57eb01d5f0acbd191ef2cd752fe75323b3efb8eb76fabffb"} Feb 19 03:24:49.791151 master-0 kubenswrapper[33867]: I0219 03:24:49.791084 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-866f9_2b9d54aa-5f71-4a82-8e71-401ed3083a13/kube-storage-version-migrator-operator/4.log" Feb 19 03:24:49.791303 master-0 kubenswrapper[33867]: I0219 03:24:49.791177 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9" event={"ID":"2b9d54aa-5f71-4a82-8e71-401ed3083a13","Type":"ContainerStarted","Data":"b6c6b78dc9c3510f49fe8279e603a17d1158d846db36b570488a96f4e0cb582d"} Feb 19 03:24:49.814841 master-0 kubenswrapper[33867]: I0219 03:24:49.814717 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" podStartSLOduration=1.814564973 podStartE2EDuration="1.814564973s" podCreationTimestamp="2026-02-19 03:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:24:49.810306083 +0000 UTC m=+95.106976704" watchObservedRunningTime="2026-02-19 03:24:49.814564973 +0000 UTC m=+95.111235594" Feb 19 03:24:49.954984 master-0 kubenswrapper[33867]: I0219 03:24:49.954931 33867 scope.go:117] "RemoveContainer" containerID="aeb20459425d6e56ff76fe8610d9b7bc296dc4bb77f829bd840562c9d7c854da" Feb 19 03:24:50.465471 master-0 kubenswrapper[33867]: I0219 03:24:50.465378 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:50.465471 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:50.465471 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:50.465471 master-0 
kubenswrapper[33867]: healthz check failed Feb 19 03:24:50.466564 master-0 kubenswrapper[33867]: I0219 03:24:50.465482 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:50.800439 master-0 kubenswrapper[33867]: I0219 03:24:50.800333 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-f7fvc_3edc7410-417a-4e55-9276-ac271fd52297/service-ca-operator/5.log" Feb 19 03:24:50.800623 master-0 kubenswrapper[33867]: I0219 03:24:50.800431 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc" event={"ID":"3edc7410-417a-4e55-9276-ac271fd52297","Type":"ContainerStarted","Data":"49e9d6c050bc227ef228d2c7c26f186f3b61e0da51cd8e88b62ac1a88b4f4896"} Feb 19 03:24:51.463299 master-0 kubenswrapper[33867]: I0219 03:24:51.463217 33867 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-t6jnq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 03:24:51.463299 master-0 kubenswrapper[33867]: [-]has-synced failed: reason withheld Feb 19 03:24:51.463299 master-0 kubenswrapper[33867]: [+]process-running ok Feb 19 03:24:51.463299 master-0 kubenswrapper[33867]: healthz check failed Feb 19 03:24:51.463578 master-0 kubenswrapper[33867]: I0219 03:24:51.463318 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" podUID="76470062-ab83-47ed-a669-deeb71996548" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 03:24:51.956403 master-0 kubenswrapper[33867]: I0219 03:24:51.956034 33867 scope.go:117] "RemoveContainer" containerID="8681ed603d322e009b6773267822dd68fda90284bc8b84a922c75c8001135b1e" Feb 19 03:24:51.956403 master-0 kubenswrapper[33867]: I0219 03:24:51.956385 33867 scope.go:117] "RemoveContainer" containerID="8479877be77adbc71e6fe6092f79a345e9de45377aef7f8623167410a6b8886d" Feb 19 03:24:52.464391 master-0 kubenswrapper[33867]: I0219 03:24:52.464315 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:52.467441 master-0 kubenswrapper[33867]: I0219 03:24:52.467358 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7b65dc9fcb-t6jnq" Feb 19 03:24:52.821482 master-0 kubenswrapper[33867]: I0219 03:24:52.821447 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/6.log" Feb 19 03:24:52.821898 master-0 kubenswrapper[33867]: I0219 03:24:52.821871 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p" event={"ID":"4c3267e5-390a-40a3-bff8-1d1d81fb9a17","Type":"ContainerStarted","Data":"e72ce83b286656b8e70e60f82ae2fb35254bbc400683ad93fcb7c7c53d0db174"} Feb 19 03:24:52.828358 master-0 kubenswrapper[33867]: I0219 03:24:52.826457 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/5.log" Feb 19 03:24:52.828358 master-0 kubenswrapper[33867]: I0219 03:24:52.826800 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd" event={"ID":"c8f325fb-0075-4a18-ba7e-669ab19bc91a","Type":"ContainerStarted","Data":"82c7fe59c3c1c2bcd4ed68b1953d144de1474c9ae108d43575773a1db21bd9b0"} Feb 19 03:24:52.955340 master-0 kubenswrapper[33867]: I0219 03:24:52.955281 33867 scope.go:117] "RemoveContainer" containerID="40b1b8ea5ecf3acc2dc4769e949348825f71d700d4b8c2e818f22aa6152344af" Feb 19 03:24:53.837050 master-0 kubenswrapper[33867]: I0219 03:24:53.836994 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/8.log" Feb 19 03:24:53.837902 master-0 kubenswrapper[33867]: I0219 03:24:53.837524 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" event={"ID":"78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda","Type":"ContainerStarted","Data":"3ae0450949fce77fcb393dd80a15ed77a73637f1cce2c7b415641fe406861443"} Feb 19 03:24:53.837980 master-0 kubenswrapper[33867]: I0219 03:24:53.837919 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:56.296118 master-0 kubenswrapper[33867]: I0219 03:24:56.296001 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-68d9f4c46b-mh59n"] Feb 19 03:24:56.297346 master-0 kubenswrapper[33867]: I0219 03:24:56.296288 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" podUID="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" containerName="metrics-server" containerID="cri-o://0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c" gracePeriod=170 Feb 19 03:24:57.955636 master-0 kubenswrapper[33867]: I0219 03:24:57.955553 33867 scope.go:117] "RemoveContainer" containerID="afd2245287eaa516d3998b9d711e72c848c4621bebc6ab7354b1c3a2aaf8ecae" Feb 19 03:24:58.261911 master-0 kubenswrapper[33867]: I0219 03:24:58.261773 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7" Feb 19 03:24:58.873966 master-0 kubenswrapper[33867]: I0219 03:24:58.873917 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-6f5488b997-xxdh5_58c6f5a2-c0a8-4636-a057-cedbe0151579/marketplace-operator/4.log" Feb 19 03:24:58.873966 master-0 kubenswrapper[33867]: I0219 03:24:58.873980 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" event={"ID":"58c6f5a2-c0a8-4636-a057-cedbe0151579","Type":"ContainerStarted","Data":"7205b0847053916b50fff247bc9623882530c8e35e67adb9654fb6f444c307e9"} Feb 19 03:24:58.874472 master-0 kubenswrapper[33867]: I0219 03:24:58.874422 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:24:58.876219 master-0 kubenswrapper[33867]: I0219 03:24:58.876177 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-xxdh5" Feb 19 03:25:07.952516 master-0 kubenswrapper[33867]: I0219 03:25:07.952423 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/2.log" Feb 19 03:25:07.953782 master-0 kubenswrapper[33867]: I0219 03:25:07.953381 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/1.log" Feb 19 03:25:07.954335 master-0 kubenswrapper[33867]: I0219 03:25:07.954252 33867 generic.go:334] "Generic (PLEG): container finished" podID="98ac5423-b231-44e5-9545-424d635ed6ee" containerID="d535f4c1585c1d5454f99de091b8d7476f2719a79c5de2bb6c941b4ff5a83bb5" exitCode=1 Feb 19 03:25:07.954415 master-0 kubenswrapper[33867]: I0219 03:25:07.954320 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" event={"ID":"98ac5423-b231-44e5-9545-424d635ed6ee","Type":"ContainerDied","Data":"d535f4c1585c1d5454f99de091b8d7476f2719a79c5de2bb6c941b4ff5a83bb5"} Feb 19 03:25:07.954415 master-0 kubenswrapper[33867]: I0219 03:25:07.954380 33867 scope.go:117] "RemoveContainer" containerID="4eaad01f93ee8b4305631434a093be13923a43fc42e41b75e5ee71770a4807d1" Feb 19 03:25:07.955822 master-0 kubenswrapper[33867]: I0219 03:25:07.955782 33867 scope.go:117] "RemoveContainer" containerID="d535f4c1585c1d5454f99de091b8d7476f2719a79c5de2bb6c941b4ff5a83bb5" Feb 19 03:25:08.962180 master-0 kubenswrapper[33867]: I0219 03:25:08.962150 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tbg8_98ac5423-b231-44e5-9545-424d635ed6ee/package-server-manager/2.log" Feb 19 03:25:08.963499 master-0 kubenswrapper[33867]: I0219 03:25:08.963421 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" event={"ID":"98ac5423-b231-44e5-9545-424d635ed6ee","Type":"ContainerStarted","Data":"66751bde26d5bb8706555e2c7833e2f8f379047c8ae3ffa3371448feadae3738"} Feb 19 03:25:08.964029 master-0 kubenswrapper[33867]: I0219 03:25:08.963983 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:25:21.834356 master-0 kubenswrapper[33867]: I0219 03:25:21.834203 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-66b5846d67-vlng5"] Feb 19 03:25:21.836503 master-0 kubenswrapper[33867]: I0219 03:25:21.836245 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.841235 master-0 kubenswrapper[33867]: I0219 03:25:21.841169 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2h6in0gl25gpf" Feb 19 03:25:21.843184 master-0 kubenswrapper[33867]: I0219 03:25:21.842769 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-zkwlh"] Feb 19 03:25:21.852430 master-0 kubenswrapper[33867]: I0219 03:25:21.852174 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-6df4d685bd-g7b8m"] Feb 19 03:25:21.853522 master-0 kubenswrapper[33867]: I0219 03:25:21.853506 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zkwlh" Feb 19 03:25:21.858556 master-0 kubenswrapper[33867]: I0219 03:25:21.857928 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-496bn" Feb 19 03:25:21.860029 master-0 kubenswrapper[33867]: I0219 03:25:21.859996 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 03:25:21.867381 master-0 kubenswrapper[33867]: I0219 03:25:21.867319 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:21.870385 master-0 kubenswrapper[33867]: I0219 03:25:21.870338 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 19 03:25:21.871248 master-0 kubenswrapper[33867]: I0219 03:25:21.871211 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 19 03:25:21.871517 master-0 kubenswrapper[33867]: I0219 03:25:21.871493 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 19 03:25:21.872422 master-0 kubenswrapper[33867]: I0219 03:25:21.872382 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 19 03:25:21.874699 master-0 kubenswrapper[33867]: I0219 03:25:21.874661 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 19 03:25:21.885631 master-0 kubenswrapper[33867]: I0219 03:25:21.885557 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.885881 master-0 kubenswrapper[33867]: I0219 03:25:21.885644 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-secret-metrics-server-tls\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.885881 master-0 kubenswrapper[33867]: I0219 03:25:21.885671 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-6xgj2\" (UniqueName: \"kubernetes.io/projected/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-kube-api-access-6xgj2\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.885881 master-0 kubenswrapper[33867]: I0219 03:25:21.885697 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-metrics-server-audit-profiles\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.885881 master-0 kubenswrapper[33867]: I0219 03:25:21.885718 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-audit-log\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.885881 master-0 kubenswrapper[33867]: I0219 03:25:21.885757 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6f58cc6f64-dchzh"] Feb 19 03:25:21.885881 master-0 kubenswrapper[33867]: I0219 03:25:21.885806 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-client-ca-bundle\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.885881 master-0 kubenswrapper[33867]: I0219 03:25:21.885827 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-secret-metrics-client-certs\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.892179 master-0 kubenswrapper[33867]: I0219 03:25:21.887012 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.892179 master-0 kubenswrapper[33867]: I0219 03:25:21.888785 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 19 03:25:21.892179 master-0 kubenswrapper[33867]: I0219 03:25:21.891830 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-c565b98d-x497s"] Feb 19 03:25:21.892179 master-0 kubenswrapper[33867]: I0219 03:25:21.891984 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 03:25:21.893917 master-0 kubenswrapper[33867]: I0219 03:25:21.893081 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-kpjkc" Feb 19 03:25:21.895329 master-0 kubenswrapper[33867]: I0219 03:25:21.895282 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:21.896155 master-0 kubenswrapper[33867]: I0219 03:25:21.896088 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 03:25:21.896402 master-0 kubenswrapper[33867]: I0219 03:25:21.896353 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 03:25:21.896468 master-0 kubenswrapper[33867]: I0219 03:25:21.896371 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 03:25:21.902384 master-0 kubenswrapper[33867]: I0219 03:25:21.901680 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-66b5846d67-vlng5"] Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.903477 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.904060 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.904283 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.904437 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.904786 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.904980 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.905156 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.905511 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.905575 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.905624 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.905750 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.905845 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 19 03:25:21.906841 master-0 kubenswrapper[33867]: I0219 03:25:21.905852 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4ccfk8e5ng1ig" Feb 19 03:25:21.909183 master-0 
kubenswrapper[33867]: I0219 03:25:21.907876 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c565b98d-x497s"] Feb 19 03:25:21.911433 master-0 kubenswrapper[33867]: I0219 03:25:21.911397 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 03:25:21.911752 master-0 kubenswrapper[33867]: I0219 03:25:21.911722 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f58cc6f64-dchzh"] Feb 19 03:25:21.934987 master-0 kubenswrapper[33867]: I0219 03:25:21.933759 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6df4d685bd-g7b8m"] Feb 19 03:25:21.945281 master-0 kubenswrapper[33867]: I0219 03:25:21.941509 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 03:25:21.986931 master-0 kubenswrapper[33867]: I0219 03:25:21.986873 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.987209 master-0 kubenswrapper[33867]: I0219 03:25:21.987154 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-error\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.987354 master-0 kubenswrapper[33867]: I0219 03:25:21.987289 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:21.987354 master-0 kubenswrapper[33867]: I0219 03:25:21.987345 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65bck\" (UniqueName: \"kubernetes.io/projected/0cd2ce90-1a60-499b-86d6-7662ce03af65-kube-api-access-65bck\") pod \"node-ca-zkwlh\" (UID: \"0cd2ce90-1a60-499b-86d6-7662ce03af65\") " pod="openshift-image-registry/node-ca-zkwlh" Feb 19 03:25:21.987428 master-0 kubenswrapper[33867]: I0219 03:25:21.987386 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.987428 master-0 kubenswrapper[33867]: I0219 03:25:21.987418 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: 
\"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-federate-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:21.987488 master-0 kubenswrapper[33867]: I0219 03:25:21.987462 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-secret-metrics-server-tls\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.987523 master-0 kubenswrapper[33867]: I0219 03:25:21.987492 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-grpc-tls\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:21.987555 master-0 kubenswrapper[33867]: I0219 03:25:21.987525 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-session\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.987588 master-0 kubenswrapper[33867]: I0219 03:25:21.987553 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kjjq\" (UniqueName: \"kubernetes.io/projected/848b658f-4754-4f9e-b017-b8655e26679d-kube-api-access-8kjjq\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:21.987620 master-0 kubenswrapper[33867]: I0219 03:25:21.987601 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xgj2\" (UniqueName: \"kubernetes.io/projected/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-kube-api-access-6xgj2\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.987655 master-0 kubenswrapper[33867]: I0219 03:25:21.987641 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.987763 master-0 kubenswrapper[33867]: I0219 03:25:21.987672 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.987763 master-0 kubenswrapper[33867]: I0219 03:25:21.987713 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-audit-policies\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.987823 master-0 kubenswrapper[33867]: I0219 03:25:21.987766 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6b6t\" (UniqueName: \"kubernetes.io/projected/15a3667e-608f-493b-8315-b1358b65b462-kube-api-access-b6b6t\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.987823 master-0 kubenswrapper[33867]: I0219 03:25:21.987794 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-metrics-server-audit-profiles\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.987882 master-0 kubenswrapper[33867]: I0219 03:25:21.987820 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-audit-log\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.987882 master-0 kubenswrapper[33867]: I0219 03:25:21.987848 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15a3667e-608f-493b-8315-b1358b65b462-audit-dir\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.987882 master-0 kubenswrapper[33867]: I0219 03:25:21.987867 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-tls\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:21.988089 master-0 kubenswrapper[33867]: I0219 03:25:21.988024 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:21.988357 master-0 kubenswrapper[33867]: I0219 03:25:21.988333 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-audit-log\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.988635 master-0 kubenswrapper[33867]: I0219 03:25:21.988581 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-client-ca-bundle\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.988828 master-0 kubenswrapper[33867]: I0219 03:25:21.988695 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:21.988906 master-0 kubenswrapper[33867]: I0219 03:25:21.988881 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-secret-metrics-client-certs\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.989227 master-0 kubenswrapper[33867]: I0219 03:25:21.989184 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0cd2ce90-1a60-499b-86d6-7662ce03af65-serviceca\") pod \"node-ca-zkwlh\" (UID: \"0cd2ce90-1a60-499b-86d6-7662ce03af65\") " pod="openshift-image-registry/node-ca-zkwlh" Feb 19 03:25:21.989301 master-0 kubenswrapper[33867]: I0219 03:25:21.989249 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943c09ec-a2d2-40df-bbdc-351a30b33d79-serving-certs-ca-bundle\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:21.989301 master-0 kubenswrapper[33867]: I0219 03:25:21.989205 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-metrics-server-audit-profiles\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.989389 master-0 kubenswrapper[33867]: I0219 03:25:21.989320 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-secret-telemeter-client\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:21.989491 master-0 kubenswrapper[33867]: I0219 03:25:21.989451 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:21.989547 master-0 kubenswrapper[33867]: I0219 03:25:21.989504 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdf98\" (UniqueName: \"kubernetes.io/projected/943c09ec-a2d2-40df-bbdc-351a30b33d79-kube-api-access-cdf98\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:21.989547 master-0 kubenswrapper[33867]: I0219 03:25:21.989541 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:21.989636 master-0 kubenswrapper[33867]: I0219 03:25:21.989568 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0cd2ce90-1a60-499b-86d6-7662ce03af65-host\") pod \"node-ca-zkwlh\" (UID: \"0cd2ce90-1a60-499b-86d6-7662ce03af65\") " pod="openshift-image-registry/node-ca-zkwlh" Feb 19 03:25:21.989636 master-0 kubenswrapper[33867]: I0219 03:25:21.989603 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.989636 master-0 kubenswrapper[33867]: I0219 03:25:21.989629 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:21.989766 master-0 kubenswrapper[33867]: I0219 03:25:21.989652 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:21.989766 master-0 kubenswrapper[33867]: I0219 03:25:21.989689 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.989766 master-0 kubenswrapper[33867]: I0219 03:25:21.989728 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/943c09ec-a2d2-40df-bbdc-351a30b33d79-metrics-client-ca\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " 
pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:21.989766 master-0 kubenswrapper[33867]: I0219 03:25:21.989833 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-login\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:21.989766 master-0 kubenswrapper[33867]: I0219 03:25:21.989895 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:21.989766 master-0 kubenswrapper[33867]: I0219 03:25:21.989958 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/848b658f-4754-4f9e-b017-b8655e26679d-metrics-client-ca\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:21.989766 master-0 kubenswrapper[33867]: I0219 03:25:21.990016 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.001685 master-0 kubenswrapper[33867]: I0219 03:25:21.991189 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:22.001685 master-0 kubenswrapper[33867]: I0219 03:25:21.991220 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-secret-metrics-server-tls\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:22.001685 master-0 kubenswrapper[33867]: I0219 03:25:21.991538 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-client-ca-bundle\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:22.001685 master-0 kubenswrapper[33867]: I0219 03:25:21.993158 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-secret-metrics-client-certs\") pod \"metrics-server-66b5846d67-vlng5\" (UID: 
\"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:22.006013 master-0 kubenswrapper[33867]: I0219 03:25:22.005959 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xgj2\" (UniqueName: \"kubernetes.io/projected/50074e69-cff8-46dc-bd2b-e3dd2f696a9d-kube-api-access-6xgj2\") pod \"metrics-server-66b5846d67-vlng5\" (UID: \"50074e69-cff8-46dc-bd2b-e3dd2f696a9d\") " pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:22.013364 master-0 kubenswrapper[33867]: I0219 03:25:22.013301 33867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:25:22.013798 master-0 kubenswrapper[33867]: I0219 03:25:22.013739 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706" gracePeriod=30 Feb 19 03:25:22.013988 master-0 kubenswrapper[33867]: I0219 03:25:22.013942 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager" containerID="cri-o://d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b" gracePeriod=30 Feb 19 03:25:22.014135 master-0 kubenswrapper[33867]: I0219 03:25:22.014009 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" containerID="cri-o://b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa" gracePeriod=30 Feb 19 03:25:22.014135 master-0 kubenswrapper[33867]: I0219 03:25:22.014061 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b" gracePeriod=30 Feb 19 03:25:22.015931 master-0 kubenswrapper[33867]: I0219 03:25:22.015887 33867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:25:22.016343 master-0 kubenswrapper[33867]: E0219 03:25:22.016316 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager-cert-syncer" Feb 19 03:25:22.016343 master-0 kubenswrapper[33867]: I0219 03:25:22.016339 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager-cert-syncer" Feb 19 03:25:22.016473 master-0 kubenswrapper[33867]: E0219 03:25:22.016350 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" Feb 19 03:25:22.016473 master-0 kubenswrapper[33867]: I0219 03:25:22.016357 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" Feb 19 03:25:22.016473 master-0 
kubenswrapper[33867]: E0219 03:25:22.016372 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" Feb 19 03:25:22.016473 master-0 kubenswrapper[33867]: I0219 03:25:22.016378 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" Feb 19 03:25:22.016473 master-0 kubenswrapper[33867]: E0219 03:25:22.016395 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager" Feb 19 03:25:22.016473 master-0 kubenswrapper[33867]: I0219 03:25:22.016401 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager" Feb 19 03:25:22.016473 master-0 kubenswrapper[33867]: E0219 03:25:22.016411 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager-recovery-controller" Feb 19 03:25:22.016473 master-0 kubenswrapper[33867]: I0219 03:25:22.016417 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager-recovery-controller" Feb 19 03:25:22.016784 master-0 kubenswrapper[33867]: I0219 03:25:22.016532 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" Feb 19 03:25:22.016784 master-0 kubenswrapper[33867]: I0219 03:25:22.016556 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager-cert-syncer" Feb 19 03:25:22.016784 master-0 kubenswrapper[33867]: I0219 03:25:22.016596 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager" Feb 19 03:25:22.016784 master-0 kubenswrapper[33867]: I0219 03:25:22.016612 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager-recovery-controller" Feb 19 03:25:22.016784 master-0 kubenswrapper[33867]: E0219 03:25:22.016731 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager" Feb 19 03:25:22.016784 master-0 kubenswrapper[33867]: I0219 03:25:22.016746 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager" Feb 19 03:25:22.017009 master-0 kubenswrapper[33867]: I0219 03:25:22.016875 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="kube-controller-manager" Feb 19 03:25:22.017009 master-0 kubenswrapper[33867]: I0219 03:25:22.016898 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eac3d8c63234f2a49e98044c0d4f67" containerName="cluster-policy-controller" Feb 19 03:25:22.091179 master-0 kubenswrapper[33867]: I0219 03:25:22.091097 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " 
pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.091179 master-0 kubenswrapper[33867]: I0219 03:25:22.091159 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.091683 master-0 kubenswrapper[33867]: I0219 03:25:22.091594 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-error\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.091882 master-0 kubenswrapper[33867]: I0219 03:25:22.091842 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"54d93c932fb6b580283b25f4adc52bd3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:22.091947 master-0 kubenswrapper[33867]: I0219 03:25:22.091894 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.091947 master-0 kubenswrapper[33867]: I0219 03:25:22.091924 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65bck\" (UniqueName: \"kubernetes.io/projected/0cd2ce90-1a60-499b-86d6-7662ce03af65-kube-api-access-65bck\") pod \"node-ca-zkwlh\" (UID: \"0cd2ce90-1a60-499b-86d6-7662ce03af65\") " pod="openshift-image-registry/node-ca-zkwlh" Feb 19 03:25:22.092050 master-0 kubenswrapper[33867]: I0219 03:25:22.091955 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-federate-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.092050 master-0 kubenswrapper[33867]: I0219 03:25:22.091975 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.092050 master-0 kubenswrapper[33867]: I0219 03:25:22.091988 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: 
\"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.092175 master-0 kubenswrapper[33867]: I0219 03:25:22.092078 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-grpc-tls\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.092175 master-0 kubenswrapper[33867]: I0219 03:25:22.092110 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-session\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.092175 master-0 kubenswrapper[33867]: I0219 03:25:22.092144 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kjjq\" (UniqueName: \"kubernetes.io/projected/848b658f-4754-4f9e-b017-b8655e26679d-kube-api-access-8kjjq\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.092175 master-0 kubenswrapper[33867]: I0219 03:25:22.092174 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.092360 master-0 kubenswrapper[33867]: I0219 03:25:22.092196 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.092360 master-0 kubenswrapper[33867]: I0219 03:25:22.092225 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-audit-policies\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.092360 master-0 kubenswrapper[33867]: I0219 03:25:22.092294 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6b6t\" (UniqueName: \"kubernetes.io/projected/15a3667e-608f-493b-8315-b1358b65b462-kube-api-access-b6b6t\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.092360 master-0 kubenswrapper[33867]: I0219 03:25:22.092333 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15a3667e-608f-493b-8315-b1358b65b462-audit-dir\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: 
\"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.092521 master-0 kubenswrapper[33867]: I0219 03:25:22.092375 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-tls\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.092521 master-0 kubenswrapper[33867]: I0219 03:25:22.092413 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.092521 master-0 kubenswrapper[33867]: I0219 03:25:22.092468 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.092643 master-0 kubenswrapper[33867]: I0219 03:25:22.092542 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0cd2ce90-1a60-499b-86d6-7662ce03af65-serviceca\") pod \"node-ca-zkwlh\" (UID: \"0cd2ce90-1a60-499b-86d6-7662ce03af65\") " pod="openshift-image-registry/node-ca-zkwlh" Feb 19 03:25:22.092643 master-0 kubenswrapper[33867]: I0219 03:25:22.092569 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943c09ec-a2d2-40df-bbdc-351a30b33d79-serving-certs-ca-bundle\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.092734 master-0 kubenswrapper[33867]: I0219 03:25:22.092648 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-secret-telemeter-client\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.092860 master-0 kubenswrapper[33867]: I0219 03:25:22.092819 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.092921 master-0 kubenswrapper[33867]: I0219 03:25:22.092849 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15a3667e-608f-493b-8315-b1358b65b462-audit-dir\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " 
pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.093030 master-0 kubenswrapper[33867]: I0219 03:25:22.092926 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.093097 master-0 kubenswrapper[33867]: I0219 03:25:22.093065 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdf98\" (UniqueName: \"kubernetes.io/projected/943c09ec-a2d2-40df-bbdc-351a30b33d79-kube-api-access-cdf98\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.093152 master-0 kubenswrapper[33867]: I0219 03:25:22.093122 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.093200 master-0 kubenswrapper[33867]: I0219 03:25:22.093159 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0cd2ce90-1a60-499b-86d6-7662ce03af65-host\") pod \"node-ca-zkwlh\" (UID: \"0cd2ce90-1a60-499b-86d6-7662ce03af65\") " pod="openshift-image-registry/node-ca-zkwlh" Feb 19 03:25:22.093246 master-0 kubenswrapper[33867]: I0219 03:25:22.093207 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.093246 master-0 kubenswrapper[33867]: I0219 03:25:22.093243 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.093362 master-0 kubenswrapper[33867]: I0219 03:25:22.093292 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.093362 master-0 kubenswrapper[33867]: I0219 03:25:22.093321 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/943c09ec-a2d2-40df-bbdc-351a30b33d79-metrics-client-ca\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: 
\"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.093362 master-0 kubenswrapper[33867]: I0219 03:25:22.093352 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.093489 master-0 kubenswrapper[33867]: I0219 03:25:22.093425 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"54d93c932fb6b580283b25f4adc52bd3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:22.093489 master-0 kubenswrapper[33867]: I0219 03:25:22.093435 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-audit-policies\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.093489 master-0 kubenswrapper[33867]: I0219 03:25:22.093467 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-login\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.093603 master-0 kubenswrapper[33867]: E0219 03:25:22.092866 33867 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Feb 19 03:25:22.093701 master-0 kubenswrapper[33867]: E0219 03:25:22.093646 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls podName:943c09ec-a2d2-40df-bbdc-351a30b33d79 nodeName:}" failed. No retries permitted until 2026-02-19 03:25:22.593616038 +0000 UTC m=+127.890286649 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls") pod "telemeter-client-6df4d685bd-g7b8m" (UID: "943c09ec-a2d2-40df-bbdc-351a30b33d79") : secret "telemeter-client-tls" not found Feb 19 03:25:22.093701 master-0 kubenswrapper[33867]: I0219 03:25:22.093679 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/848b658f-4754-4f9e-b017-b8655e26679d-metrics-client-ca\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.094551 master-0 kubenswrapper[33867]: I0219 03:25:22.094469 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0cd2ce90-1a60-499b-86d6-7662ce03af65-host\") pod \"node-ca-zkwlh\" (UID: \"0cd2ce90-1a60-499b-86d6-7662ce03af65\") " pod="openshift-image-registry/node-ca-zkwlh" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.094697 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.094957 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/848b658f-4754-4f9e-b017-b8655e26679d-metrics-client-ca\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.095382 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943c09ec-a2d2-40df-bbdc-351a30b33d79-serving-certs-ca-bundle\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.095629 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-federate-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.096962 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-grpc-tls\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.097300 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0cd2ce90-1a60-499b-86d6-7662ce03af65-serviceca\") pod \"node-ca-zkwlh\" (UID: \"0cd2ce90-1a60-499b-86d6-7662ce03af65\") " 
pod="openshift-image-registry/node-ca-zkwlh" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.097470 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.097649 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-secret-telemeter-client\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.097940 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.098498 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.098737 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/943c09ec-a2d2-40df-bbdc-351a30b33d79-metrics-client-ca\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.098922 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.099128 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-tls\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.099267 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-session\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " 
pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.099382 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-login\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.100303 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.100443 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.103381 master-0 kubenswrapper[33867]: I0219 03:25:22.101272 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.104843 master-0 kubenswrapper[33867]: I0219 03:25:22.104804 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-error\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.104920 master-0 kubenswrapper[33867]: I0219 03:25:22.104885 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/848b658f-4754-4f9e-b017-b8655e26679d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.105014 master-0 kubenswrapper[33867]: I0219 03:25:22.104987 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.106915 master-0 kubenswrapper[33867]: I0219 03:25:22.106873 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.109112 master-0 kubenswrapper[33867]: I0219 03:25:22.109057 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6b6t\" (UniqueName: \"kubernetes.io/projected/15a3667e-608f-493b-8315-b1358b65b462-kube-api-access-b6b6t\") pod \"oauth-openshift-6f58cc6f64-dchzh\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.110882 master-0 kubenswrapper[33867]: I0219 03:25:22.110823 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kjjq\" (UniqueName: \"kubernetes.io/projected/848b658f-4754-4f9e-b017-b8655e26679d-kube-api-access-8kjjq\") pod \"thanos-querier-c565b98d-x497s\" (UID: \"848b658f-4754-4f9e-b017-b8655e26679d\") " pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.113188 master-0 kubenswrapper[33867]: I0219 03:25:22.113154 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65bck\" (UniqueName: \"kubernetes.io/projected/0cd2ce90-1a60-499b-86d6-7662ce03af65-kube-api-access-65bck\") pod \"node-ca-zkwlh\" (UID: \"0cd2ce90-1a60-499b-86d6-7662ce03af65\") " pod="openshift-image-registry/node-ca-zkwlh" Feb 19 03:25:22.114878 master-0 kubenswrapper[33867]: I0219 03:25:22.114834 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdf98\" (UniqueName: \"kubernetes.io/projected/943c09ec-a2d2-40df-bbdc-351a30b33d79-kube-api-access-cdf98\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.170917 master-0 kubenswrapper[33867]: I0219 03:25:22.170863 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:22.195501 master-0 kubenswrapper[33867]: I0219 03:25:22.195300 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"54d93c932fb6b580283b25f4adc52bd3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:22.195882 master-0 kubenswrapper[33867]: I0219 03:25:22.195563 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"54d93c932fb6b580283b25f4adc52bd3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:22.195882 master-0 kubenswrapper[33867]: I0219 03:25:22.195673 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"54d93c932fb6b580283b25f4adc52bd3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:22.195882 master-0 kubenswrapper[33867]: I0219 03:25:22.195752 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"54d93c932fb6b580283b25f4adc52bd3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:22.201958 master-0 kubenswrapper[33867]: I0219 03:25:22.201914 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zkwlh" Feb 19 03:25:22.222197 master-0 kubenswrapper[33867]: W0219 03:25:22.222081 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cd2ce90_1a60_499b_86d6_7662ce03af65.slice/crio-98db5e9d3d125073f7bfa3a3b2a08c6ee29d11a3bb02fbdb261e23a42e02f35f WatchSource:0}: Error finding container 98db5e9d3d125073f7bfa3a3b2a08c6ee29d11a3bb02fbdb261e23a42e02f35f: Status 404 returned error can't find the container with id 98db5e9d3d125073f7bfa3a3b2a08c6ee29d11a3bb02fbdb261e23a42e02f35f Feb 19 03:25:22.225425 master-0 kubenswrapper[33867]: I0219 03:25:22.225377 33867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 03:25:22.247092 master-0 kubenswrapper[33867]: I0219 03:25:22.247035 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:25:22.269686 master-0 kubenswrapper[33867]: I0219 03:25:22.269619 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:22.319694 master-0 kubenswrapper[33867]: I0219 03:25:22.319638 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/3.log" Feb 19 03:25:22.320191 master-0 kubenswrapper[33867]: I0219 03:25:22.320159 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/1.log" Feb 19 03:25:22.322595 master-0 kubenswrapper[33867]: I0219 03:25:22.322557 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager-cert-syncer/0.log" Feb 19 03:25:22.323218 master-0 kubenswrapper[33867]: I0219 03:25:22.323192 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:22.331237 master-0 kubenswrapper[33867]: I0219 03:25:22.331189 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="50eac3d8c63234f2a49e98044c0d4f67" podUID="54d93c932fb6b580283b25f4adc52bd3" Feb 19 03:25:22.398881 master-0 kubenswrapper[33867]: I0219 03:25:22.398781 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-cert-dir\") pod \"50eac3d8c63234f2a49e98044c0d4f67\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " Feb 19 03:25:22.399856 master-0 kubenswrapper[33867]: I0219 03:25:22.398935 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "50eac3d8c63234f2a49e98044c0d4f67" (UID: "50eac3d8c63234f2a49e98044c0d4f67"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:25:22.399856 master-0 kubenswrapper[33867]: I0219 03:25:22.399047 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-resource-dir\") pod \"50eac3d8c63234f2a49e98044c0d4f67\" (UID: \"50eac3d8c63234f2a49e98044c0d4f67\") " Feb 19 03:25:22.399856 master-0 kubenswrapper[33867]: I0219 03:25:22.399073 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "50eac3d8c63234f2a49e98044c0d4f67" (UID: "50eac3d8c63234f2a49e98044c0d4f67"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:25:22.400510 master-0 kubenswrapper[33867]: I0219 03:25:22.400190 33867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:25:22.400510 master-0 kubenswrapper[33867]: I0219 03:25:22.400214 33867 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/50eac3d8c63234f2a49e98044c0d4f67-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:25:22.591873 master-0 kubenswrapper[33867]: I0219 03:25:22.591782 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-66b5846d67-vlng5"] Feb 19 03:25:22.597617 master-0 kubenswrapper[33867]: W0219 03:25:22.597513 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50074e69_cff8_46dc_bd2b_e3dd2f696a9d.slice/crio-67f2ef5dd728897e06622bed96d320703562711da803adbc7d49e648bcc46faf WatchSource:0}: Error finding container 67f2ef5dd728897e06622bed96d320703562711da803adbc7d49e648bcc46faf: Status 404 returned error can't find the container with id 67f2ef5dd728897e06622bed96d320703562711da803adbc7d49e648bcc46faf Feb 19 03:25:22.603375 master-0 kubenswrapper[33867]: I0219 03:25:22.603319 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:22.603564 master-0 kubenswrapper[33867]: E0219 03:25:22.603524 33867 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Feb 19 03:25:22.603714 master-0 kubenswrapper[33867]: E0219 03:25:22.603692 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls podName:943c09ec-a2d2-40df-bbdc-351a30b33d79 nodeName:}" failed. No retries permitted until 2026-02-19 03:25:23.60361128 +0000 UTC m=+128.900281891 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls") pod "telemeter-client-6df4d685bd-g7b8m" (UID: "943c09ec-a2d2-40df-bbdc-351a30b33d79") : secret "telemeter-client-tls" not found Feb 19 03:25:22.728545 master-0 kubenswrapper[33867]: I0219 03:25:22.727892 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f58cc6f64-dchzh"] Feb 19 03:25:22.798536 master-0 kubenswrapper[33867]: I0219 03:25:22.798458 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c565b98d-x497s"] Feb 19 03:25:22.969894 master-0 kubenswrapper[33867]: I0219 03:25:22.969817 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50eac3d8c63234f2a49e98044c0d4f67" path="/var/lib/kubelet/pods/50eac3d8c63234f2a49e98044c0d4f67/volumes" Feb 19 03:25:23.075358 master-0 kubenswrapper[33867]: I0219 03:25:23.074964 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" event={"ID":"848b658f-4754-4f9e-b017-b8655e26679d","Type":"ContainerStarted","Data":"1ba1283abfdbf53757e8d134af7c8863fbc75f2d8f92e31a9dd419c253797248"} Feb 19 03:25:23.076349 master-0 kubenswrapper[33867]: I0219 03:25:23.076319 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zkwlh" event={"ID":"0cd2ce90-1a60-499b-86d6-7662ce03af65","Type":"ContainerStarted","Data":"98db5e9d3d125073f7bfa3a3b2a08c6ee29d11a3bb02fbdb261e23a42e02f35f"} Feb 19 03:25:23.079329 master-0 kubenswrapper[33867]: I0219 03:25:23.077472 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" event={"ID":"15a3667e-608f-493b-8315-b1358b65b462","Type":"ContainerStarted","Data":"5acf693df00afe95996b30a5b0da4d673657acd415a117cc3d939228c657ac05"} Feb 19 03:25:23.080057 master-0 kubenswrapper[33867]: I0219 03:25:23.079443 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/cluster-policy-controller/3.log" Feb 19 03:25:23.080057 master-0 kubenswrapper[33867]: I0219 03:25:23.079837 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager/1.log" Feb 19 03:25:23.080598 master-0 kubenswrapper[33867]: I0219 03:25:23.080573 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_50eac3d8c63234f2a49e98044c0d4f67/kube-controller-manager-cert-syncer/0.log" Feb 19 03:25:23.080667 master-0 kubenswrapper[33867]: I0219 03:25:23.080613 33867 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" containerID="d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b" exitCode=0 Feb 19 03:25:23.080667 master-0 kubenswrapper[33867]: I0219 03:25:23.080634 33867 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" containerID="b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa" exitCode=0 Feb 19 03:25:23.080667 master-0 kubenswrapper[33867]: I0219 03:25:23.080643 33867 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" 
containerID="ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b" exitCode=0 Feb 19 03:25:23.080667 master-0 kubenswrapper[33867]: I0219 03:25:23.080651 33867 generic.go:334] "Generic (PLEG): container finished" podID="50eac3d8c63234f2a49e98044c0d4f67" containerID="63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706" exitCode=2 Feb 19 03:25:23.080804 master-0 kubenswrapper[33867]: I0219 03:25:23.080712 33867 scope.go:117] "RemoveContainer" containerID="d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b" Feb 19 03:25:23.080876 master-0 kubenswrapper[33867]: I0219 03:25:23.080863 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:23.084981 master-0 kubenswrapper[33867]: I0219 03:25:23.084194 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" event={"ID":"50074e69-cff8-46dc-bd2b-e3dd2f696a9d","Type":"ContainerStarted","Data":"c5c8d29c96b5691f2b9e3cd67bf1861e83f58e100536b50904212a06f999517e"} Feb 19 03:25:23.084981 master-0 kubenswrapper[33867]: I0219 03:25:23.084281 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" event={"ID":"50074e69-cff8-46dc-bd2b-e3dd2f696a9d","Type":"ContainerStarted","Data":"67f2ef5dd728897e06622bed96d320703562711da803adbc7d49e648bcc46faf"} Feb 19 03:25:23.085548 master-0 kubenswrapper[33867]: I0219 03:25:23.085505 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="50eac3d8c63234f2a49e98044c0d4f67" podUID="54d93c932fb6b580283b25f4adc52bd3" Feb 19 03:25:23.087787 master-0 kubenswrapper[33867]: I0219 03:25:23.087752 33867 generic.go:334] "Generic (PLEG): container finished" podID="c569efe9-6db4-4082-8be0-4391ab4a88a8" containerID="e5774f3d03332c20a101a37ad4ef6d8dc75f134cf3324eae2691a83c47039693" exitCode=0 Feb 19 03:25:23.087787 master-0 kubenswrapper[33867]: I0219 03:25:23.087783 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"c569efe9-6db4-4082-8be0-4391ab4a88a8","Type":"ContainerDied","Data":"e5774f3d03332c20a101a37ad4ef6d8dc75f134cf3324eae2691a83c47039693"} Feb 19 03:25:23.117306 master-0 kubenswrapper[33867]: I0219 03:25:23.117104 33867 scope.go:117] "RemoveContainer" containerID="b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa" Feb 19 03:25:23.141784 master-0 kubenswrapper[33867]: I0219 03:25:23.141727 33867 scope.go:117] "RemoveContainer" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" Feb 19 03:25:23.165855 master-0 kubenswrapper[33867]: I0219 03:25:23.165795 33867 scope.go:117] "RemoveContainer" containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" Feb 19 03:25:23.201667 master-0 kubenswrapper[33867]: I0219 03:25:23.201595 33867 scope.go:117] "RemoveContainer" containerID="ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b" Feb 19 03:25:23.219692 master-0 kubenswrapper[33867]: I0219 03:25:23.218832 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" podStartSLOduration=27.218809087 podStartE2EDuration="27.218809087s" podCreationTimestamp="2026-02-19 03:24:56 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:25:23.215220376 +0000 UTC m=+128.511891017" watchObservedRunningTime="2026-02-19 03:25:23.218809087 +0000 UTC m=+128.515479708" Feb 19 03:25:23.229924 master-0 kubenswrapper[33867]: I0219 03:25:23.229421 33867 scope.go:117] "RemoveContainer" containerID="63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706" Feb 19 03:25:23.249607 master-0 kubenswrapper[33867]: I0219 03:25:23.249037 33867 scope.go:117] "RemoveContainer" containerID="d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b" Feb 19 03:25:23.249607 master-0 kubenswrapper[33867]: I0219 03:25:23.249414 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="50eac3d8c63234f2a49e98044c0d4f67" podUID="54d93c932fb6b580283b25f4adc52bd3" Feb 19 03:25:23.250014 master-0 kubenswrapper[33867]: E0219 03:25:23.249778 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b\": container with ID starting with d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b not found: ID does not exist" containerID="d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b" Feb 19 03:25:23.250014 master-0 kubenswrapper[33867]: I0219 03:25:23.249819 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b"} err="failed to get container status \"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b\": rpc error: code = NotFound desc = could not find container \"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b\": container with ID starting with d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b not found: ID does not exist" Feb 19 03:25:23.250014 master-0 kubenswrapper[33867]: I0219 03:25:23.249852 33867 scope.go:117] "RemoveContainer" containerID="b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa" Feb 19 03:25:23.250376 master-0 kubenswrapper[33867]: E0219 03:25:23.250147 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa\": container with ID starting with b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa not found: ID does not exist" containerID="b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa" Feb 19 03:25:23.250376 master-0 kubenswrapper[33867]: I0219 03:25:23.250185 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa"} err="failed to get container status \"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa\": rpc error: code = NotFound desc = could not find container \"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa\": container with ID starting with b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa not found: ID does not exist" Feb 19 03:25:23.250376 master-0 kubenswrapper[33867]: I0219 03:25:23.250198 33867 scope.go:117] "RemoveContainer" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" Feb 19 03:25:23.250862 master-0 
kubenswrapper[33867]: E0219 03:25:23.250492 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f\": container with ID starting with 34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f not found: ID does not exist" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" Feb 19 03:25:23.250862 master-0 kubenswrapper[33867]: I0219 03:25:23.250528 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f"} err="failed to get container status \"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f\": rpc error: code = NotFound desc = could not find container \"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f\": container with ID starting with 34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f not found: ID does not exist" Feb 19 03:25:23.250862 master-0 kubenswrapper[33867]: I0219 03:25:23.250551 33867 scope.go:117] "RemoveContainer" containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" Feb 19 03:25:23.251126 master-0 kubenswrapper[33867]: E0219 03:25:23.250923 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3\": container with ID starting with b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3 not found: ID does not exist" containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" Feb 19 03:25:23.251126 master-0 kubenswrapper[33867]: I0219 03:25:23.250940 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3"} err="failed to get container status \"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3\": rpc error: code = NotFound desc = could not find container \"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3\": container with ID starting with b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3 not found: ID does not exist" Feb 19 03:25:23.251126 master-0 kubenswrapper[33867]: I0219 03:25:23.250953 33867 scope.go:117] "RemoveContainer" containerID="ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b" Feb 19 03:25:23.252506 master-0 kubenswrapper[33867]: E0219 03:25:23.252397 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b\": container with ID starting with ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b not found: ID does not exist" containerID="ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b" Feb 19 03:25:23.252506 master-0 kubenswrapper[33867]: I0219 03:25:23.252427 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b"} err="failed to get container status \"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b\": rpc error: code = NotFound desc = could not find container \"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b\": container with ID starting with 
ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b not found: ID does not exist" Feb 19 03:25:23.252506 master-0 kubenswrapper[33867]: I0219 03:25:23.252448 33867 scope.go:117] "RemoveContainer" containerID="63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706" Feb 19 03:25:23.257587 master-0 kubenswrapper[33867]: E0219 03:25:23.257225 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706\": container with ID starting with 63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706 not found: ID does not exist" containerID="63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706" Feb 19 03:25:23.257587 master-0 kubenswrapper[33867]: I0219 03:25:23.257279 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706"} err="failed to get container status \"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706\": rpc error: code = NotFound desc = could not find container \"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706\": container with ID starting with 63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706 not found: ID does not exist" Feb 19 03:25:23.257587 master-0 kubenswrapper[33867]: I0219 03:25:23.257492 33867 scope.go:117] "RemoveContainer" containerID="d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b" Feb 19 03:25:23.257905 master-0 kubenswrapper[33867]: I0219 03:25:23.257870 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b"} err="failed to get container status \"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b\": rpc error: code = NotFound desc = could not find container \"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b\": container with ID starting with d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b not found: ID does not exist" Feb 19 03:25:23.257905 master-0 kubenswrapper[33867]: I0219 03:25:23.257894 33867 scope.go:117] "RemoveContainer" containerID="b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa" Feb 19 03:25:23.258682 master-0 kubenswrapper[33867]: I0219 03:25:23.258591 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa"} err="failed to get container status \"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa\": rpc error: code = NotFound desc = could not find container \"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa\": container with ID starting with b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa not found: ID does not exist" Feb 19 03:25:23.258682 master-0 kubenswrapper[33867]: I0219 03:25:23.258615 33867 scope.go:117] "RemoveContainer" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" Feb 19 03:25:23.258937 master-0 kubenswrapper[33867]: I0219 03:25:23.258871 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f"} err="failed to get container status \"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f\": rpc error: code = NotFound desc = 
could not find container \"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f\": container with ID starting with 34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f not found: ID does not exist" Feb 19 03:25:23.258937 master-0 kubenswrapper[33867]: I0219 03:25:23.258892 33867 scope.go:117] "RemoveContainer" containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" Feb 19 03:25:23.259192 master-0 kubenswrapper[33867]: I0219 03:25:23.259121 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3"} err="failed to get container status \"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3\": rpc error: code = NotFound desc = could not find container \"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3\": container with ID starting with b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3 not found: ID does not exist" Feb 19 03:25:23.259192 master-0 kubenswrapper[33867]: I0219 03:25:23.259142 33867 scope.go:117] "RemoveContainer" containerID="ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b" Feb 19 03:25:23.259488 master-0 kubenswrapper[33867]: I0219 03:25:23.259457 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b"} err="failed to get container status \"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b\": rpc error: code = NotFound desc = could not find container \"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b\": container with ID starting with ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b not found: ID does not exist" Feb 19 03:25:23.259554 master-0 kubenswrapper[33867]: I0219 03:25:23.259488 33867 scope.go:117] "RemoveContainer" containerID="63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706" Feb 19 03:25:23.264757 master-0 kubenswrapper[33867]: I0219 03:25:23.264462 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706"} err="failed to get container status \"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706\": rpc error: code = NotFound desc = could not find container \"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706\": container with ID starting with 63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706 not found: ID does not exist" Feb 19 03:25:23.264757 master-0 kubenswrapper[33867]: I0219 03:25:23.264750 33867 scope.go:117] "RemoveContainer" containerID="d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b" Feb 19 03:25:23.265163 master-0 kubenswrapper[33867]: I0219 03:25:23.265121 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b"} err="failed to get container status \"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b\": rpc error: code = NotFound desc = could not find container \"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b\": container with ID starting with d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b not found: ID does not exist" Feb 19 03:25:23.265163 master-0 kubenswrapper[33867]: I0219 03:25:23.265155 33867 scope.go:117] "RemoveContainer" 
containerID="b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa" Feb 19 03:25:23.266162 master-0 kubenswrapper[33867]: I0219 03:25:23.265867 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa"} err="failed to get container status \"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa\": rpc error: code = NotFound desc = could not find container \"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa\": container with ID starting with b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa not found: ID does not exist" Feb 19 03:25:23.266162 master-0 kubenswrapper[33867]: I0219 03:25:23.266155 33867 scope.go:117] "RemoveContainer" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" Feb 19 03:25:23.267433 master-0 kubenswrapper[33867]: I0219 03:25:23.267390 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f"} err="failed to get container status \"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f\": rpc error: code = NotFound desc = could not find container \"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f\": container with ID starting with 34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f not found: ID does not exist" Feb 19 03:25:23.267433 master-0 kubenswrapper[33867]: I0219 03:25:23.267424 33867 scope.go:117] "RemoveContainer" containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" Feb 19 03:25:23.267747 master-0 kubenswrapper[33867]: I0219 03:25:23.267714 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3"} err="failed to get container status \"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3\": rpc error: code = NotFound desc = could not find container \"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3\": container with ID starting with b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3 not found: ID does not exist" Feb 19 03:25:23.267747 master-0 kubenswrapper[33867]: I0219 03:25:23.267735 33867 scope.go:117] "RemoveContainer" containerID="ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b" Feb 19 03:25:23.267988 master-0 kubenswrapper[33867]: I0219 03:25:23.267954 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b"} err="failed to get container status \"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b\": rpc error: code = NotFound desc = could not find container \"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b\": container with ID starting with ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b not found: ID does not exist" Feb 19 03:25:23.267988 master-0 kubenswrapper[33867]: I0219 03:25:23.267977 33867 scope.go:117] "RemoveContainer" containerID="63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706" Feb 19 03:25:23.268273 master-0 kubenswrapper[33867]: I0219 03:25:23.268225 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706"} err="failed to get container 
status \"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706\": rpc error: code = NotFound desc = could not find container \"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706\": container with ID starting with 63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706 not found: ID does not exist" Feb 19 03:25:23.268273 master-0 kubenswrapper[33867]: I0219 03:25:23.268246 33867 scope.go:117] "RemoveContainer" containerID="d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b" Feb 19 03:25:23.268534 master-0 kubenswrapper[33867]: I0219 03:25:23.268503 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b"} err="failed to get container status \"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b\": rpc error: code = NotFound desc = could not find container \"d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b\": container with ID starting with d4894ec7d078e7bb573eec1c2197197342f8d1deb126c8fd6959c38b8af75b6b not found: ID does not exist" Feb 19 03:25:23.268534 master-0 kubenswrapper[33867]: I0219 03:25:23.268525 33867 scope.go:117] "RemoveContainer" containerID="b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa" Feb 19 03:25:23.268796 master-0 kubenswrapper[33867]: I0219 03:25:23.268768 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa"} err="failed to get container status \"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa\": rpc error: code = NotFound desc = could not find container \"b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa\": container with ID starting with b1d08c265ef3ea8c0da0ae79cb1f3e899fb5cdf4e75b3510611083d86cc91faa not found: ID does not exist" Feb 19 03:25:23.268856 master-0 kubenswrapper[33867]: I0219 03:25:23.268795 33867 scope.go:117] "RemoveContainer" containerID="34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f" Feb 19 03:25:23.272232 master-0 kubenswrapper[33867]: I0219 03:25:23.269033 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f"} err="failed to get container status \"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f\": rpc error: code = NotFound desc = could not find container \"34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f\": container with ID starting with 34f7f5e53058f7e732522052eae6c1903c824dad3125c8d1bfa72803d7c4a19f not found: ID does not exist" Feb 19 03:25:23.272232 master-0 kubenswrapper[33867]: I0219 03:25:23.269058 33867 scope.go:117] "RemoveContainer" containerID="b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3" Feb 19 03:25:23.272232 master-0 kubenswrapper[33867]: I0219 03:25:23.269289 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3"} err="failed to get container status \"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3\": rpc error: code = NotFound desc = could not find container \"b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3\": container with ID starting with b8a32d4903e67eb2d39d7b3a77ab2a4871b6fbcdf9aaa67b022c47e00dedf4f3 not found: ID does not exist" Feb 19 
03:25:23.272232 master-0 kubenswrapper[33867]: I0219 03:25:23.269319 33867 scope.go:117] "RemoveContainer" containerID="ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b" Feb 19 03:25:23.273765 master-0 kubenswrapper[33867]: I0219 03:25:23.273729 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b"} err="failed to get container status \"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b\": rpc error: code = NotFound desc = could not find container \"ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b\": container with ID starting with ebe87a8ea3a0094456a54c8f29c592890d867b5132c03acd48c70d050fcff28b not found: ID does not exist" Feb 19 03:25:23.273838 master-0 kubenswrapper[33867]: I0219 03:25:23.273769 33867 scope.go:117] "RemoveContainer" containerID="63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706" Feb 19 03:25:23.276244 master-0 kubenswrapper[33867]: I0219 03:25:23.274559 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706"} err="failed to get container status \"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706\": rpc error: code = NotFound desc = could not find container \"63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706\": container with ID starting with 63443aa0381ef90f859c47383b1aed8501527b957991cd35183eee7f06eca706 not found: ID does not exist" Feb 19 03:25:23.630036 master-0 kubenswrapper[33867]: I0219 03:25:23.629917 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:23.630482 master-0 kubenswrapper[33867]: E0219 03:25:23.630238 33867 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Feb 19 03:25:23.630482 master-0 kubenswrapper[33867]: E0219 03:25:23.630425 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls podName:943c09ec-a2d2-40df-bbdc-351a30b33d79 nodeName:}" failed. No retries permitted until 2026-02-19 03:25:25.630395296 +0000 UTC m=+130.927065907 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls") pod "telemeter-client-6df4d685bd-g7b8m" (UID: "943c09ec-a2d2-40df-bbdc-351a30b33d79") : secret "telemeter-client-tls" not found Feb 19 03:25:24.666192 master-0 kubenswrapper[33867]: I0219 03:25:24.666147 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:25:24.748702 master-0 kubenswrapper[33867]: I0219 03:25:24.748651 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-kubelet-dir\") pod \"c569efe9-6db4-4082-8be0-4391ab4a88a8\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " Feb 19 03:25:24.748960 master-0 kubenswrapper[33867]: I0219 03:25:24.748754 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-var-lock\") pod \"c569efe9-6db4-4082-8be0-4391ab4a88a8\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " Feb 19 03:25:24.748960 master-0 kubenswrapper[33867]: I0219 03:25:24.748784 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c569efe9-6db4-4082-8be0-4391ab4a88a8" (UID: "c569efe9-6db4-4082-8be0-4391ab4a88a8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:25:24.748960 master-0 kubenswrapper[33867]: I0219 03:25:24.748828 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-var-lock" (OuterVolumeSpecName: "var-lock") pod "c569efe9-6db4-4082-8be0-4391ab4a88a8" (UID: "c569efe9-6db4-4082-8be0-4391ab4a88a8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:25:24.748960 master-0 kubenswrapper[33867]: I0219 03:25:24.748889 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c569efe9-6db4-4082-8be0-4391ab4a88a8-kube-api-access\") pod \"c569efe9-6db4-4082-8be0-4391ab4a88a8\" (UID: \"c569efe9-6db4-4082-8be0-4391ab4a88a8\") " Feb 19 03:25:24.749200 master-0 kubenswrapper[33867]: I0219 03:25:24.749181 33867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:25:24.749200 master-0 kubenswrapper[33867]: I0219 03:25:24.749197 33867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c569efe9-6db4-4082-8be0-4391ab4a88a8-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:25:24.752300 master-0 kubenswrapper[33867]: I0219 03:25:24.752275 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c569efe9-6db4-4082-8be0-4391ab4a88a8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c569efe9-6db4-4082-8be0-4391ab4a88a8" (UID: "c569efe9-6db4-4082-8be0-4391ab4a88a8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:25:24.854319 master-0 kubenswrapper[33867]: I0219 03:25:24.854215 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c569efe9-6db4-4082-8be0-4391ab4a88a8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:25:25.116828 master-0 kubenswrapper[33867]: I0219 03:25:25.116754 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"c569efe9-6db4-4082-8be0-4391ab4a88a8","Type":"ContainerDied","Data":"dc0602e36f88751d57eb01d5f0acbd191ef2cd752fe75323b3efb8eb76fabffb"} Feb 19 03:25:25.116828 master-0 kubenswrapper[33867]: I0219 03:25:25.116808 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc0602e36f88751d57eb01d5f0acbd191ef2cd752fe75323b3efb8eb76fabffb" Feb 19 03:25:25.116828 master-0 kubenswrapper[33867]: I0219 03:25:25.116815 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 19 03:25:25.673914 master-0 kubenswrapper[33867]: I0219 03:25:25.673860 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:25.674490 master-0 kubenswrapper[33867]: E0219 03:25:25.674070 33867 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Feb 19 03:25:25.674490 master-0 kubenswrapper[33867]: E0219 03:25:25.674164 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls podName:943c09ec-a2d2-40df-bbdc-351a30b33d79 nodeName:}" failed. No retries permitted until 2026-02-19 03:25:29.674141739 +0000 UTC m=+134.970812350 (durationBeforeRetry 4s). 
Feb 19 03:25:26.127106 master-0 kubenswrapper[33867]: I0219 03:25:26.127034 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" event={"ID":"848b658f-4754-4f9e-b017-b8655e26679d","Type":"ContainerStarted","Data":"ac7248ca13af80ddbf446a62633b28c16cdbcc59b99fd5585dc4bd220e977073"}
Feb 19 03:25:27.149671 master-0 kubenswrapper[33867]: I0219 03:25:27.149587 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" event={"ID":"848b658f-4754-4f9e-b017-b8655e26679d","Type":"ContainerStarted","Data":"de6ea6c0d68473788542775bc48842d953488053c8ec41789914fdda36fc9a20"}
Feb 19 03:25:27.149671 master-0 kubenswrapper[33867]: I0219 03:25:27.149664 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" event={"ID":"848b658f-4754-4f9e-b017-b8655e26679d","Type":"ContainerStarted","Data":"a2dbeb143bbbdbc1922b8cd77bde97a4d2c97ec3f957c5f50bed2c9315cd55c4"}
Feb 19 03:25:27.153231 master-0 kubenswrapper[33867]: I0219 03:25:27.153184 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zkwlh" event={"ID":"0cd2ce90-1a60-499b-86d6-7662ce03af65","Type":"ContainerStarted","Data":"b909d319e0c4308244bbed5a206f221971f2c991c420413d90585ceb569de1da"}
Feb 19 03:25:27.156685 master-0 kubenswrapper[33867]: I0219 03:25:27.156498 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" event={"ID":"15a3667e-608f-493b-8315-b1358b65b462","Type":"ContainerStarted","Data":"f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117"}
Feb 19 03:25:27.159289 master-0 kubenswrapper[33867]: I0219 03:25:27.159212 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh"
Feb 19 03:25:27.163749 master-0 kubenswrapper[33867]: I0219 03:25:27.163668 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh"
Feb 19 03:25:27.175529 master-0 kubenswrapper[33867]: I0219 03:25:27.175411 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-zkwlh" podStartSLOduration=30.51807054 podStartE2EDuration="34.175382886s" podCreationTimestamp="2026-02-19 03:24:53 +0000 UTC" firstStartedPulling="2026-02-19 03:25:22.225250617 +0000 UTC m=+127.521921228" lastFinishedPulling="2026-02-19 03:25:25.882562963 +0000 UTC m=+131.179233574" observedRunningTime="2026-02-19 03:25:27.172545146 +0000 UTC m=+132.469215767" watchObservedRunningTime="2026-02-19 03:25:27.175382886 +0000 UTC m=+132.472053497"
Feb 19 03:25:27.195664 master-0 kubenswrapper[33867]: I0219 03:25:27.195598 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" podStartSLOduration=8.09005645 podStartE2EDuration="11.195574845s" podCreationTimestamp="2026-02-19 03:25:16 +0000 UTC" firstStartedPulling="2026-02-19 03:25:22.781570425 +0000 UTC m=+128.078241036" lastFinishedPulling="2026-02-19 03:25:25.88708882 +0000 UTC m=+131.183759431" observedRunningTime="2026-02-19 03:25:27.193735913 +0000 UTC m=+132.490406524" watchObservedRunningTime="2026-02-19 03:25:27.195574845 +0000 UTC m=+132.492245476"
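Reading the two pod_startup_latency_tracker entries above: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to additionally exclude the image-pull window (lastFinishedPulling minus firstStartedPulling). A short Go sketch that recomputes the node-ca-zkwlh figures from the logged timestamps (illustrative only, not the tracker's implementation):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout for Go's default time.String() format used in the log fields.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-02-19 03:24:53 +0000 UTC")              // podCreationTimestamp
        firstPull := parse("2026-02-19 03:25:22.225250617 +0000 UTC")  // firstStartedPulling
        lastPull := parse("2026-02-19 03:25:25.882562963 +0000 UTC")   // lastFinishedPulling
        running := parse("2026-02-19 03:25:27.175382886 +0000 UTC")    // observedRunningTime

        e2e := running.Sub(created)          // end-to-end startup duration
        slo := e2e - lastPull.Sub(firstPull) // startup duration with the pull window excluded
        fmt.Println(e2e, slo)
    }

Running this prints 34.175382886s and 30.51807054s, matching the logged podStartE2EDuration and podStartSLOduration for node-ca-zkwlh.
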
observedRunningTime="2026-02-19 03:25:27.193735913 +0000 UTC m=+132.490406524" watchObservedRunningTime="2026-02-19 03:25:27.195574845 +0000 UTC m=+132.492245476" Feb 19 03:25:27.850093 master-0 kubenswrapper[33867]: I0219 03:25:27.850031 33867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 19 03:25:27.850607 master-0 kubenswrapper[33867]: I0219 03:25:27.850534 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-cert-syncer" containerID="cri-o://d0fbcab1791c1fa93d0b8382e393526b12e53a1efcdb373eae2fce501c101408" gracePeriod=30 Feb 19 03:25:27.850712 master-0 kubenswrapper[33867]: I0219 03:25:27.850631 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-recovery-controller" containerID="cri-o://0cf7d392da6a301b93f30bcc03748c612e502b9e965838935f8e427396fbdf21" gracePeriod=30 Feb 19 03:25:27.850797 master-0 kubenswrapper[33867]: I0219 03:25:27.850693 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" containerID="cri-o://2d484b07e94495906a9ef1c8f980fb107c93c95a40a52c0019224db82b51fc4d" gracePeriod=30 Feb 19 03:25:27.851677 master-0 kubenswrapper[33867]: I0219 03:25:27.851506 33867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 19 03:25:27.851961 master-0 kubenswrapper[33867]: E0219 03:25:27.851918 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-cert-syncer" Feb 19 03:25:27.851961 master-0 kubenswrapper[33867]: I0219 03:25:27.851948 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-cert-syncer" Feb 19 03:25:27.852086 master-0 kubenswrapper[33867]: E0219 03:25:27.851972 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" Feb 19 03:25:27.852086 master-0 kubenswrapper[33867]: I0219 03:25:27.851982 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" Feb 19 03:25:27.852086 master-0 kubenswrapper[33867]: E0219 03:25:27.852006 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-recovery-controller" Feb 19 03:25:27.852086 master-0 kubenswrapper[33867]: I0219 03:25:27.852017 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-recovery-controller" Feb 19 03:25:27.852086 master-0 kubenswrapper[33867]: E0219 03:25:27.852033 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c569efe9-6db4-4082-8be0-4391ab4a88a8" containerName="installer" Feb 19 03:25:27.852086 master-0 kubenswrapper[33867]: I0219 03:25:27.852041 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c569efe9-6db4-4082-8be0-4391ab4a88a8" containerName="installer" Feb 19 03:25:27.852086 master-0 kubenswrapper[33867]: E0219 
03:25:27.852063 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" Feb 19 03:25:27.852086 master-0 kubenswrapper[33867]: I0219 03:25:27.852070 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" Feb 19 03:25:27.852439 master-0 kubenswrapper[33867]: E0219 03:25:27.852103 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="wait-for-host-port" Feb 19 03:25:27.852439 master-0 kubenswrapper[33867]: I0219 03:25:27.852114 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="wait-for-host-port" Feb 19 03:25:27.852439 master-0 kubenswrapper[33867]: I0219 03:25:27.852305 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" Feb 19 03:25:27.852439 master-0 kubenswrapper[33867]: I0219 03:25:27.852336 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="wait-for-host-port" Feb 19 03:25:27.852439 master-0 kubenswrapper[33867]: I0219 03:25:27.852355 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c569efe9-6db4-4082-8be0-4391ab4a88a8" containerName="installer" Feb 19 03:25:27.852439 master-0 kubenswrapper[33867]: I0219 03:25:27.852372 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-cert-syncer" Feb 19 03:25:27.852439 master-0 kubenswrapper[33867]: I0219 03:25:27.852390 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" Feb 19 03:25:27.852439 master-0 kubenswrapper[33867]: I0219 03:25:27.852409 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-recovery-controller" Feb 19 03:25:27.910855 master-0 kubenswrapper[33867]: I0219 03:25:27.910750 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:27.911062 master-0 kubenswrapper[33867]: I0219 03:25:27.910911 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:28.013067 master-0 kubenswrapper[33867]: I0219 03:25:28.012993 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:28.013067 master-0 kubenswrapper[33867]: I0219 03:25:28.013067 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:28.013547 master-0 kubenswrapper[33867]: I0219 03:25:28.013506 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:28.013788 master-0 kubenswrapper[33867]: I0219 03:25:28.013753 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:28.032775 master-0 kubenswrapper[33867]: I0219 03:25:28.032716 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler-cert-syncer/0.log" Feb 19 03:25:28.033343 master-0 kubenswrapper[33867]: I0219 03:25:28.033312 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler/0.log" Feb 19 03:25:28.034099 master-0 kubenswrapper[33867]: I0219 03:25:28.034059 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:28.037649 master-0 kubenswrapper[33867]: I0219 03:25:28.037603 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="56ff46cdb00d28519af7c0cdc9ea8d11" podUID="d03a1e6620a92c780b0a91c72a55bc8b" Feb 19 03:25:28.117966 master-0 kubenswrapper[33867]: I0219 03:25:28.113762 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"56ff46cdb00d28519af7c0cdc9ea8d11\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " Feb 19 03:25:28.117966 master-0 kubenswrapper[33867]: I0219 03:25:28.113869 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "56ff46cdb00d28519af7c0cdc9ea8d11" (UID: "56ff46cdb00d28519af7c0cdc9ea8d11"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:25:28.117966 master-0 kubenswrapper[33867]: I0219 03:25:28.113978 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"56ff46cdb00d28519af7c0cdc9ea8d11\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " Feb 19 03:25:28.117966 master-0 kubenswrapper[33867]: I0219 03:25:28.114054 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "56ff46cdb00d28519af7c0cdc9ea8d11" (UID: "56ff46cdb00d28519af7c0cdc9ea8d11"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:25:28.117966 master-0 kubenswrapper[33867]: I0219 03:25:28.114506 33867 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:25:28.117966 master-0 kubenswrapper[33867]: I0219 03:25:28.114524 33867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:25:28.181924 master-0 kubenswrapper[33867]: I0219 03:25:28.181751 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler-cert-syncer/0.log" Feb 19 03:25:28.182804 master-0 kubenswrapper[33867]: I0219 03:25:28.182439 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler/0.log" Feb 19 03:25:28.183023 master-0 kubenswrapper[33867]: I0219 03:25:28.182980 33867 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="2d484b07e94495906a9ef1c8f980fb107c93c95a40a52c0019224db82b51fc4d" exitCode=0 Feb 19 03:25:28.183023 master-0 kubenswrapper[33867]: I0219 03:25:28.183017 33867 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="0cf7d392da6a301b93f30bcc03748c612e502b9e965838935f8e427396fbdf21" exitCode=0 Feb 19 03:25:28.183143 master-0 kubenswrapper[33867]: I0219 03:25:28.183031 33867 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="d0fbcab1791c1fa93d0b8382e393526b12e53a1efcdb373eae2fce501c101408" exitCode=2 Feb 19 03:25:28.183143 master-0 kubenswrapper[33867]: I0219 03:25:28.183072 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ff0199536e5f54a5bdaa7868fb5ea7e61ffa31ff819b0546dd411cddd134f43" Feb 19 03:25:28.183143 master-0 kubenswrapper[33867]: I0219 03:25:28.183090 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:28.183143 master-0 kubenswrapper[33867]: I0219 03:25:28.183111 33867 scope.go:117] "RemoveContainer" containerID="ebeab0f2e4292264d96a63c87d2d2fdbec7d9f9a916fb23b3f013edea6328327" Feb 19 03:25:28.185174 master-0 kubenswrapper[33867]: I0219 03:25:28.185122 33867 generic.go:334] "Generic (PLEG): container finished" podID="1ba0c261-497c-4236-8f14-98ce5c16af59" containerID="26b06eab1f94dd6261f000583e030e306cfda4b8f6001932aa21638d9dddc9ae" exitCode=0 Feb 19 03:25:28.185266 master-0 kubenswrapper[33867]: I0219 03:25:28.185170 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"1ba0c261-497c-4236-8f14-98ce5c16af59","Type":"ContainerDied","Data":"26b06eab1f94dd6261f000583e030e306cfda4b8f6001932aa21638d9dddc9ae"} Feb 19 03:25:28.186002 master-0 kubenswrapper[33867]: I0219 03:25:28.185961 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="56ff46cdb00d28519af7c0cdc9ea8d11" podUID="d03a1e6620a92c780b0a91c72a55bc8b" Feb 19 03:25:28.200686 master-0 kubenswrapper[33867]: I0219 03:25:28.200581 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" event={"ID":"848b658f-4754-4f9e-b017-b8655e26679d","Type":"ContainerStarted","Data":"9830c5e3aea3bd58a6d7570e2bfdcd4c6fe1e68cde31389c016faeea4ec915aa"} Feb 19 03:25:28.200686 master-0 kubenswrapper[33867]: I0219 03:25:28.200681 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" event={"ID":"848b658f-4754-4f9e-b017-b8655e26679d","Type":"ContainerStarted","Data":"6f25a76f63412041cfee7f154a039b9b01020d44c73cfff1cedba72fbece407a"} Feb 19 03:25:28.200955 master-0 kubenswrapper[33867]: I0219 03:25:28.200743 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" event={"ID":"848b658f-4754-4f9e-b017-b8655e26679d","Type":"ContainerStarted","Data":"6825ac34ae7c81af17307f3c468457f9e6f3b437015069c48c517a4bf455a839"} Feb 19 03:25:28.200955 master-0 kubenswrapper[33867]: I0219 03:25:28.200772 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:28.240882 master-0 kubenswrapper[33867]: I0219 03:25:28.240812 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="56ff46cdb00d28519af7c0cdc9ea8d11" podUID="d03a1e6620a92c780b0a91c72a55bc8b" Feb 19 03:25:28.243639 master-0 kubenswrapper[33867]: I0219 03:25:28.243538 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" podStartSLOduration=29.673808819 podStartE2EDuration="34.243498535s" podCreationTimestamp="2026-02-19 03:24:54 +0000 UTC" firstStartedPulling="2026-02-19 03:25:22.804549003 +0000 UTC m=+128.101219614" lastFinishedPulling="2026-02-19 03:25:27.374238719 +0000 UTC m=+132.670909330" observedRunningTime="2026-02-19 03:25:28.238206206 +0000 UTC m=+133.534876857" watchObservedRunningTime="2026-02-19 03:25:28.243498535 +0000 UTC m=+133.540169146" Feb 19 03:25:28.967494 master-0 kubenswrapper[33867]: I0219 03:25:28.967439 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="56ff46cdb00d28519af7c0cdc9ea8d11" path="/var/lib/kubelet/pods/56ff46cdb00d28519af7c0cdc9ea8d11/volumes" Feb 19 03:25:29.203187 master-0 kubenswrapper[33867]: I0219 03:25:29.203085 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler-cert-syncer/0.log" Feb 19 03:25:29.517524 master-0 kubenswrapper[33867]: I0219 03:25:29.517462 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:25:29.647620 master-0 kubenswrapper[33867]: I0219 03:25:29.647492 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-kubelet-dir\") pod \"1ba0c261-497c-4236-8f14-98ce5c16af59\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " Feb 19 03:25:29.647856 master-0 kubenswrapper[33867]: I0219 03:25:29.647750 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1ba0c261-497c-4236-8f14-98ce5c16af59" (UID: "1ba0c261-497c-4236-8f14-98ce5c16af59"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:25:29.648013 master-0 kubenswrapper[33867]: I0219 03:25:29.647912 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ba0c261-497c-4236-8f14-98ce5c16af59-kube-api-access\") pod \"1ba0c261-497c-4236-8f14-98ce5c16af59\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " Feb 19 03:25:29.648143 master-0 kubenswrapper[33867]: I0219 03:25:29.648083 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-var-lock\") pod \"1ba0c261-497c-4236-8f14-98ce5c16af59\" (UID: \"1ba0c261-497c-4236-8f14-98ce5c16af59\") " Feb 19 03:25:29.648288 master-0 kubenswrapper[33867]: I0219 03:25:29.648201 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-var-lock" (OuterVolumeSpecName: "var-lock") pod "1ba0c261-497c-4236-8f14-98ce5c16af59" (UID: "1ba0c261-497c-4236-8f14-98ce5c16af59"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:25:29.648877 master-0 kubenswrapper[33867]: I0219 03:25:29.648816 33867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:25:29.648939 master-0 kubenswrapper[33867]: I0219 03:25:29.648874 33867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ba0c261-497c-4236-8f14-98ce5c16af59-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:25:29.652130 master-0 kubenswrapper[33867]: I0219 03:25:29.652042 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba0c261-497c-4236-8f14-98ce5c16af59-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1ba0c261-497c-4236-8f14-98ce5c16af59" (UID: "1ba0c261-497c-4236-8f14-98ce5c16af59"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:25:29.750491 master-0 kubenswrapper[33867]: I0219 03:25:29.750392 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:29.750824 master-0 kubenswrapper[33867]: E0219 03:25:29.750582 33867 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Feb 19 03:25:29.750824 master-0 kubenswrapper[33867]: E0219 03:25:29.750647 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls podName:943c09ec-a2d2-40df-bbdc-351a30b33d79 nodeName:}" failed. No retries permitted until 2026-02-19 03:25:37.750628607 +0000 UTC m=+143.047299218 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls") pod "telemeter-client-6df4d685bd-g7b8m" (UID: "943c09ec-a2d2-40df-bbdc-351a30b33d79") : secret "telemeter-client-tls" not found Feb 19 03:25:29.750824 master-0 kubenswrapper[33867]: I0219 03:25:29.750670 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ba0c261-497c-4236-8f14-98ce5c16af59-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:25:30.214286 master-0 kubenswrapper[33867]: I0219 03:25:30.214201 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Feb 19 03:25:30.214286 master-0 kubenswrapper[33867]: I0219 03:25:30.214192 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"1ba0c261-497c-4236-8f14-98ce5c16af59","Type":"ContainerDied","Data":"d2f6bebf53bdfc6ad3d2abeb94830556bc84518d0ea9724bdf6282a713b33052"} Feb 19 03:25:30.214286 master-0 kubenswrapper[33867]: I0219 03:25:30.214286 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2f6bebf53bdfc6ad3d2abeb94830556bc84518d0ea9724bdf6282a713b33052" Feb 19 03:25:32.281147 master-0 kubenswrapper[33867]: I0219 03:25:32.281062 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-c565b98d-x497s" Feb 19 03:25:34.954907 master-0 kubenswrapper[33867]: I0219 03:25:34.954799 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:34.982065 master-0 kubenswrapper[33867]: I0219 03:25:34.981981 33867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fadcc48d-958e-41b4-b73d-19321ecb8bb9" Feb 19 03:25:34.982065 master-0 kubenswrapper[33867]: I0219 03:25:34.982039 33867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fadcc48d-958e-41b4-b73d-19321ecb8bb9" Feb 19 03:25:35.023157 master-0 kubenswrapper[33867]: I0219 03:25:35.023044 33867 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:35.024440 master-0 kubenswrapper[33867]: I0219 03:25:35.024408 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:25:35.026774 master-0 kubenswrapper[33867]: I0219 03:25:35.026686 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:25:35.035830 master-0 kubenswrapper[33867]: I0219 03:25:35.035771 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:35.037939 master-0 kubenswrapper[33867]: I0219 03:25:35.037880 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:25:35.061596 master-0 kubenswrapper[33867]: W0219 03:25:35.061510 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54d93c932fb6b580283b25f4adc52bd3.slice/crio-3bff094ebb7f12391127b312dadf80a6b3c7978c494062056f5c36d42b113185 WatchSource:0}: Error finding container 3bff094ebb7f12391127b312dadf80a6b3c7978c494062056f5c36d42b113185: Status 404 returned error can't find the container with id 3bff094ebb7f12391127b312dadf80a6b3c7978c494062056f5c36d42b113185 Feb 19 03:25:35.427599 master-0 kubenswrapper[33867]: I0219 03:25:35.427539 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"54d93c932fb6b580283b25f4adc52bd3","Type":"ContainerStarted","Data":"3bff094ebb7f12391127b312dadf80a6b3c7978c494062056f5c36d42b113185"} Feb 19 03:25:36.440800 master-0 kubenswrapper[33867]: I0219 03:25:36.440700 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"54d93c932fb6b580283b25f4adc52bd3","Type":"ContainerStarted","Data":"2a6378171c9d7e861384ea33ac96d97796e6dcd51640a45f1e13d7b30275860c"} Feb 19 03:25:36.440800 master-0 kubenswrapper[33867]: I0219 03:25:36.440775 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"54d93c932fb6b580283b25f4adc52bd3","Type":"ContainerStarted","Data":"41e516f80fdbca0ad0fec8609d99373a6c87f6ed69e42cdc14dde997afd65da8"} Feb 19 03:25:36.440800 master-0 kubenswrapper[33867]: I0219 03:25:36.440786 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"54d93c932fb6b580283b25f4adc52bd3","Type":"ContainerStarted","Data":"15f86a7f9af00cfb660aed0d9de16b4b7b16e42980616991e94ef7198de70052"} Feb 19 03:25:36.440800 master-0 kubenswrapper[33867]: I0219 03:25:36.440797 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"54d93c932fb6b580283b25f4adc52bd3","Type":"ContainerStarted","Data":"b72fc1a1be5f58b5d59ac3d6f6c214e3a5a59e2746f4da0b54694b182f52c426"} Feb 19 03:25:36.464196 master-0 kubenswrapper[33867]: I0219 03:25:36.464042 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=1.464014395 podStartE2EDuration="1.464014395s" podCreationTimestamp="2026-02-19 03:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:25:36.461424062 +0000 UTC m=+141.758094693" watchObservedRunningTime="2026-02-19 03:25:36.464014395 +0000 UTC m=+141.760685006" Feb 19 03:25:37.768278 master-0 kubenswrapper[33867]: I0219 03:25:37.768189 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:37.769090 master-0 kubenswrapper[33867]: E0219 03:25:37.768435 33867 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Feb 19 03:25:37.769090 master-0 kubenswrapper[33867]: E0219 03:25:37.768496 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls podName:943c09ec-a2d2-40df-bbdc-351a30b33d79 nodeName:}" failed. No retries permitted until 2026-02-19 03:25:53.768477626 +0000 UTC m=+159.065148237 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls") pod "telemeter-client-6df4d685bd-g7b8m" (UID: "943c09ec-a2d2-40df-bbdc-351a30b33d79") : secret "telemeter-client-tls" not found Feb 19 03:25:40.955187 master-0 kubenswrapper[33867]: I0219 03:25:40.955106 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:40.973915 master-0 kubenswrapper[33867]: I0219 03:25:40.973841 33867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="7a664a01-1a23-499a-958b-597f8f6daf92" Feb 19 03:25:40.973915 master-0 kubenswrapper[33867]: I0219 03:25:40.973898 33867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="7a664a01-1a23-499a-958b-597f8f6daf92" Feb 19 03:25:40.988988 master-0 kubenswrapper[33867]: I0219 03:25:40.988891 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 19 03:25:40.992545 master-0 kubenswrapper[33867]: I0219 03:25:40.992480 33867 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:40.996087 master-0 kubenswrapper[33867]: I0219 03:25:40.996027 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 19 03:25:41.005235 master-0 kubenswrapper[33867]: I0219 03:25:41.005174 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:41.009373 master-0 kubenswrapper[33867]: I0219 03:25:41.008062 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 19 03:25:41.487076 master-0 kubenswrapper[33867]: I0219 03:25:41.486925 33867 generic.go:334] "Generic (PLEG): container finished" podID="d03a1e6620a92c780b0a91c72a55bc8b" containerID="5478ff6e91e4a9c23697b8480f59ff613677b3f8a98edfa6444d397304d19e71" exitCode=0 Feb 19 03:25:41.487076 master-0 kubenswrapper[33867]: I0219 03:25:41.487003 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerDied","Data":"5478ff6e91e4a9c23697b8480f59ff613677b3f8a98edfa6444d397304d19e71"} Feb 19 03:25:41.487076 master-0 kubenswrapper[33867]: I0219 03:25:41.487048 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"5209bdf4bd3ccb1c25ab5c25c6b8c8080a9c79db4b7629105afec7eb2b959335"} Feb 19 03:25:42.171761 master-0 kubenswrapper[33867]: I0219 03:25:42.171652 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:42.171761 master-0 kubenswrapper[33867]: I0219 03:25:42.171769 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:42.178289 master-0 kubenswrapper[33867]: I0219 03:25:42.177975 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:42.498891 master-0 kubenswrapper[33867]: I0219 03:25:42.498792 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"0dda14245136dde7cfbce705aca30481c965d82583496a27460fc7117cd62ee2"} Feb 19 03:25:42.498891 master-0 
kubenswrapper[33867]: I0219 03:25:42.498891 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"02802fe61ae80636203588a5d0be4cf1aa155e75112b4c0d4e9def517d76aca7"} Feb 19 03:25:42.498891 master-0 kubenswrapper[33867]: I0219 03:25:42.498914 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"289cb459b832b2f68e7e7ba0e59841976adc87cc7042001cac17692e52bc0f88"} Feb 19 03:25:42.504149 master-0 kubenswrapper[33867]: I0219 03:25:42.504049 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-66b5846d67-vlng5" Feb 19 03:25:42.523847 master-0 kubenswrapper[33867]: I0219 03:25:42.523602 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.523571235 podStartE2EDuration="2.523571235s" podCreationTimestamp="2026-02-19 03:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:25:42.520755796 +0000 UTC m=+147.817426407" watchObservedRunningTime="2026-02-19 03:25:42.523571235 +0000 UTC m=+147.820241856" Feb 19 03:25:43.508100 master-0 kubenswrapper[33867]: I0219 03:25:43.508016 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:25:44.553755 master-0 kubenswrapper[33867]: I0219 03:25:44.553627 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8" Feb 19 03:25:45.036491 master-0 kubenswrapper[33867]: I0219 03:25:45.036387 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:45.036491 master-0 kubenswrapper[33867]: I0219 03:25:45.036471 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:45.036491 master-0 kubenswrapper[33867]: I0219 03:25:45.036506 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:45.037005 master-0 kubenswrapper[33867]: I0219 03:25:45.036531 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:45.040742 master-0 kubenswrapper[33867]: I0219 03:25:45.040681 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:45.042982 master-0 kubenswrapper[33867]: I0219 03:25:45.042905 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:45.534504 master-0 kubenswrapper[33867]: I0219 03:25:45.534418 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:45.534854 master-0 kubenswrapper[33867]: I0219 03:25:45.534814 
33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:25:46.725451 master-0 kubenswrapper[33867]: I0219 03:25:46.725359 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 19 03:25:46.726925 master-0 kubenswrapper[33867]: E0219 03:25:46.726891 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba0c261-497c-4236-8f14-98ce5c16af59" containerName="installer" Feb 19 03:25:46.727083 master-0 kubenswrapper[33867]: I0219 03:25:46.727064 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba0c261-497c-4236-8f14-98ce5c16af59" containerName="installer" Feb 19 03:25:46.727682 master-0 kubenswrapper[33867]: I0219 03:25:46.727656 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba0c261-497c-4236-8f14-98ce5c16af59" containerName="installer" Feb 19 03:25:46.728813 master-0 kubenswrapper[33867]: I0219 03:25:46.728783 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:25:46.733923 master-0 kubenswrapper[33867]: I0219 03:25:46.733892 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-l5ps6" Feb 19 03:25:46.734041 master-0 kubenswrapper[33867]: I0219 03:25:46.733982 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 03:25:46.742770 master-0 kubenswrapper[33867]: I0219 03:25:46.742726 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 19 03:25:46.745513 master-0 kubenswrapper[33867]: I0219 03:25:46.745439 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ad84c80-367e-4ca3-a439-dfff469bc349-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:25:46.745513 master-0 kubenswrapper[33867]: I0219 03:25:46.745522 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:25:46.745995 master-0 kubenswrapper[33867]: I0219 03:25:46.745670 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-var-lock\") pod \"installer-4-master-0\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:25:46.847513 master-0 kubenswrapper[33867]: I0219 03:25:46.847416 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:25:46.847513 master-0 kubenswrapper[33867]: I0219 03:25:46.847504 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-var-lock\") pod \"installer-4-master-0\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:25:46.847937 master-0 kubenswrapper[33867]: I0219 03:25:46.847553 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ad84c80-367e-4ca3-a439-dfff469bc349-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:25:46.847937 master-0 kubenswrapper[33867]: I0219 03:25:46.847597 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:25:46.847937 master-0 kubenswrapper[33867]: I0219 03:25:46.847776 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-var-lock\") pod \"installer-4-master-0\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:25:46.866385 master-0 kubenswrapper[33867]: I0219 03:25:46.866325 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ad84c80-367e-4ca3-a439-dfff469bc349-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:25:47.050450 master-0 kubenswrapper[33867]: I0219 03:25:47.050234 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:25:47.305244 master-0 kubenswrapper[33867]: I0219 03:25:47.305047 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 19 03:25:47.544452 master-0 kubenswrapper[33867]: I0219 03:25:47.544364 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6ad84c80-367e-4ca3-a439-dfff469bc349","Type":"ContainerStarted","Data":"59a8c5b35a2b9e301f72a375c4be72ed3623a6ff868a877409bc90712c534f7e"} Feb 19 03:25:48.553190 master-0 kubenswrapper[33867]: I0219 03:25:48.553115 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6ad84c80-367e-4ca3-a439-dfff469bc349","Type":"ContainerStarted","Data":"51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674"} Feb 19 03:25:48.571305 master-0 kubenswrapper[33867]: I0219 03:25:48.571195 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.571164072 podStartE2EDuration="2.571164072s" podCreationTimestamp="2026-02-19 03:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:25:48.56964929 +0000 UTC m=+153.866319901" watchObservedRunningTime="2026-02-19 03:25:48.571164072 +0000 UTC m=+153.867834713" Feb 19 03:25:53.766186 master-0 kubenswrapper[33867]: I0219 03:25:53.766073 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm"] Feb 19 03:25:53.774224 master-0 kubenswrapper[33867]: I0219 03:25:53.774157 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm" Feb 19 03:25:53.783467 master-0 kubenswrapper[33867]: I0219 03:25:53.783427 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 19 03:25:53.783916 master-0 kubenswrapper[33867]: I0219 03:25:53.783506 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-rvbfx" Feb 19 03:25:53.789274 master-0 kubenswrapper[33867]: I0219 03:25:53.789183 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm"] Feb 19 03:25:53.832548 master-0 kubenswrapper[33867]: I0219 03:25:53.832499 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-rb2hx"] Feb 19 03:25:53.833658 master-0 kubenswrapper[33867]: I0219 03:25:53.833640 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.835988 master-0 kubenswrapper[33867]: I0219 03:25:53.835973 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 03:25:53.836297 master-0 kubenswrapper[33867]: I0219 03:25:53.836284 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 03:25:53.837549 master-0 kubenswrapper[33867]: I0219 03:25:53.837533 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 03:25:53.841470 master-0 kubenswrapper[33867]: I0219 03:25:53.841348 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 03:25:53.846404 master-0 kubenswrapper[33867]: I0219 03:25:53.846371 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 03:25:53.860024 master-0 kubenswrapper[33867]: I0219 03:25:53.859957 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-rb2hx"] Feb 19 03:25:53.860608 master-0 kubenswrapper[33867]: I0219 03:25:53.860574 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f7e6789-3b0b-4117-9d25-55a671e42f93-config\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.860695 master-0 kubenswrapper[33867]: I0219 03:25:53.860631 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f7e6789-3b0b-4117-9d25-55a671e42f93-trusted-ca\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.860695 master-0 kubenswrapper[33867]: I0219 03:25:53.860683 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/1c2c9876-4b0b-429d-a3bb-339b1c0bfc75-monitoring-plugin-cert\") pod \"monitoring-plugin-84ff5d7bd8-cdwlm\" (UID: \"1c2c9876-4b0b-429d-a3bb-339b1c0bfc75\") " pod="openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm" Feb 19 03:25:53.860790 master-0 kubenswrapper[33867]: I0219 03:25:53.860728 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v4vv\" (UniqueName: \"kubernetes.io/projected/5f7e6789-3b0b-4117-9d25-55a671e42f93-kube-api-access-2v4vv\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.860790 master-0 kubenswrapper[33867]: I0219 03:25:53.860759 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f7e6789-3b0b-4117-9d25-55a671e42f93-serving-cert\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.860876 master-0 
kubenswrapper[33867]: I0219 03:25:53.860804 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:53.870325 master-0 kubenswrapper[33867]: I0219 03:25:53.870281 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/943c09ec-a2d2-40df-bbdc-351a30b33d79-telemeter-client-tls\") pod \"telemeter-client-6df4d685bd-g7b8m\" (UID: \"943c09ec-a2d2-40df-bbdc-351a30b33d79\") " pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:53.962906 master-0 kubenswrapper[33867]: I0219 03:25:53.962841 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f7e6789-3b0b-4117-9d25-55a671e42f93-trusted-ca\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.963167 master-0 kubenswrapper[33867]: I0219 03:25:53.963011 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/1c2c9876-4b0b-429d-a3bb-339b1c0bfc75-monitoring-plugin-cert\") pod \"monitoring-plugin-84ff5d7bd8-cdwlm\" (UID: \"1c2c9876-4b0b-429d-a3bb-339b1c0bfc75\") " pod="openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm" Feb 19 03:25:53.963235 master-0 kubenswrapper[33867]: I0219 03:25:53.963209 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v4vv\" (UniqueName: \"kubernetes.io/projected/5f7e6789-3b0b-4117-9d25-55a671e42f93-kube-api-access-2v4vv\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.963317 master-0 kubenswrapper[33867]: I0219 03:25:53.963297 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f7e6789-3b0b-4117-9d25-55a671e42f93-serving-cert\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.963429 master-0 kubenswrapper[33867]: I0219 03:25:53.963407 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f7e6789-3b0b-4117-9d25-55a671e42f93-config\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.964181 master-0 kubenswrapper[33867]: I0219 03:25:53.964154 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f7e6789-3b0b-4117-9d25-55a671e42f93-config\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.964387 master-0 kubenswrapper[33867]: I0219 03:25:53.964348 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5f7e6789-3b0b-4117-9d25-55a671e42f93-trusted-ca\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.968670 master-0 kubenswrapper[33867]: I0219 03:25:53.968339 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f7e6789-3b0b-4117-9d25-55a671e42f93-serving-cert\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:53.968878 master-0 kubenswrapper[33867]: I0219 03:25:53.968832 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/1c2c9876-4b0b-429d-a3bb-339b1c0bfc75-monitoring-plugin-cert\") pod \"monitoring-plugin-84ff5d7bd8-cdwlm\" (UID: \"1c2c9876-4b0b-429d-a3bb-339b1c0bfc75\") " pod="openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm" Feb 19 03:25:53.989825 master-0 kubenswrapper[33867]: I0219 03:25:53.989788 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v4vv\" (UniqueName: \"kubernetes.io/projected/5f7e6789-3b0b-4117-9d25-55a671e42f93-kube-api-access-2v4vv\") pod \"console-operator-5df5ffc47c-rb2hx\" (UID: \"5f7e6789-3b0b-4117-9d25-55a671e42f93\") " pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:54.031033 master-0 kubenswrapper[33867]: I0219 03:25:54.030911 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" Feb 19 03:25:54.122969 master-0 kubenswrapper[33867]: I0219 03:25:54.122472 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm" Feb 19 03:25:54.236884 master-0 kubenswrapper[33867]: I0219 03:25:54.236819 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:54.493534 master-0 kubenswrapper[33867]: I0219 03:25:54.493469 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6df4d685bd-g7b8m"] Feb 19 03:25:54.519723 master-0 kubenswrapper[33867]: W0219 03:25:54.519615 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod943c09ec_a2d2_40df_bbdc_351a30b33d79.slice/crio-104ed8f3ca39d6c802af9e83ace1cf85c51e1ec8c322e22829ec36821261f272 WatchSource:0}: Error finding container 104ed8f3ca39d6c802af9e83ace1cf85c51e1ec8c322e22829ec36821261f272: Status 404 returned error can't find the container with id 104ed8f3ca39d6c802af9e83ace1cf85c51e1ec8c322e22829ec36821261f272 Feb 19 03:25:54.611638 master-0 kubenswrapper[33867]: I0219 03:25:54.609917 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" event={"ID":"943c09ec-a2d2-40df-bbdc-351a30b33d79","Type":"ContainerStarted","Data":"104ed8f3ca39d6c802af9e83ace1cf85c51e1ec8c322e22829ec36821261f272"} Feb 19 03:25:54.612560 master-0 kubenswrapper[33867]: I0219 03:25:54.611908 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm"] Feb 19 03:25:54.711889 master-0 kubenswrapper[33867]: I0219 03:25:54.711831 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-rb2hx"] Feb 19 03:25:54.720216 master-0 kubenswrapper[33867]: W0219 03:25:54.720146 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f7e6789_3b0b_4117_9d25_55a671e42f93.slice/crio-5ff7297fa6866eb8a0c287bb7065888e5bc2608db05ea861e779c1cedf9f3a26 WatchSource:0}: Error finding container 5ff7297fa6866eb8a0c287bb7065888e5bc2608db05ea861e779c1cedf9f3a26: Status 404 returned error can't find the container with id 5ff7297fa6866eb8a0c287bb7065888e5bc2608db05ea861e779c1cedf9f3a26 Feb 19 03:25:55.624043 master-0 kubenswrapper[33867]: I0219 03:25:55.622578 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm" event={"ID":"1c2c9876-4b0b-429d-a3bb-339b1c0bfc75","Type":"ContainerStarted","Data":"7b2989891967f1f1cb96d101d9f9c898a1368656df6c1136c60d0a53e40ba232"} Feb 19 03:25:55.625740 master-0 kubenswrapper[33867]: I0219 03:25:55.625712 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" event={"ID":"5f7e6789-3b0b-4117-9d25-55a671e42f93","Type":"ContainerStarted","Data":"5ff7297fa6866eb8a0c287bb7065888e5bc2608db05ea861e779c1cedf9f3a26"} Feb 19 03:25:59.660084 master-0 kubenswrapper[33867]: I0219 03:25:59.660027 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" event={"ID":"5f7e6789-3b0b-4117-9d25-55a671e42f93","Type":"ContainerStarted","Data":"1a47c3f35c541428f2733ab8597e54c9425fac48f3680b545bc45f917ef8e8d3"} Feb 19 03:25:59.661780 master-0 kubenswrapper[33867]: I0219 03:25:59.661341 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:59.668996 master-0 kubenswrapper[33867]: I0219 03:25:59.665730 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_telemeter-client-6df4d685bd-g7b8m_943c09ec-a2d2-40df-bbdc-351a30b33d79/telemeter-client/0.log" Feb 19 03:25:59.668996 master-0 kubenswrapper[33867]: I0219 03:25:59.665779 33867 generic.go:334] "Generic (PLEG): container finished" podID="943c09ec-a2d2-40df-bbdc-351a30b33d79" containerID="3d429b05f00a3e3e62f224bd7253a2182c3f9d80d58ad76ae8a5046a7b31e619" exitCode=1 Feb 19 03:25:59.668996 master-0 kubenswrapper[33867]: I0219 03:25:59.665846 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" event={"ID":"943c09ec-a2d2-40df-bbdc-351a30b33d79","Type":"ContainerDied","Data":"3d429b05f00a3e3e62f224bd7253a2182c3f9d80d58ad76ae8a5046a7b31e619"} Feb 19 03:25:59.670373 master-0 kubenswrapper[33867]: I0219 03:25:59.670341 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" Feb 19 03:25:59.671722 master-0 kubenswrapper[33867]: I0219 03:25:59.671677 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm" event={"ID":"1c2c9876-4b0b-429d-a3bb-339b1c0bfc75","Type":"ContainerStarted","Data":"b388f88e8a7a35d8237060e433d1d474f5d4e39142217a3f4c09d5b4baca1ee2"} Feb 19 03:25:59.672229 master-0 kubenswrapper[33867]: I0219 03:25:59.672189 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm" Feb 19 03:25:59.679040 master-0 kubenswrapper[33867]: I0219 03:25:59.678878 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm" Feb 19 03:25:59.705887 master-0 kubenswrapper[33867]: I0219 03:25:59.705769 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm" podStartSLOduration=2.5977628040000003 podStartE2EDuration="6.705750808s" podCreationTimestamp="2026-02-19 03:25:53 +0000 UTC" firstStartedPulling="2026-02-19 03:25:54.625360594 +0000 UTC m=+159.922031195" lastFinishedPulling="2026-02-19 03:25:58.733348578 +0000 UTC m=+164.030019199" observedRunningTime="2026-02-19 03:25:59.705012837 +0000 UTC m=+165.001683458" watchObservedRunningTime="2026-02-19 03:25:59.705750808 +0000 UTC m=+165.002421419" Feb 19 03:25:59.711366 master-0 kubenswrapper[33867]: I0219 03:25:59.708838 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-5df5ffc47c-rb2hx" podStartSLOduration=2.6679009110000003 podStartE2EDuration="6.708747542s" podCreationTimestamp="2026-02-19 03:25:53 +0000 UTC" firstStartedPulling="2026-02-19 03:25:54.722741025 +0000 UTC m=+160.019411636" lastFinishedPulling="2026-02-19 03:25:58.763587656 +0000 UTC m=+164.060258267" observedRunningTime="2026-02-19 03:25:59.68085845 +0000 UTC m=+164.977529081" watchObservedRunningTime="2026-02-19 03:25:59.708747542 +0000 UTC m=+165.005418173" Feb 19 03:25:59.798439 master-0 kubenswrapper[33867]: I0219 03:25:59.797563 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-955b69498-bdf7d"] Feb 19 03:25:59.800293 master-0 kubenswrapper[33867]: I0219 03:25:59.800272 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-955b69498-bdf7d" Feb 19 03:25:59.805354 master-0 kubenswrapper[33867]: I0219 03:25:59.805092 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brrld\" (UniqueName: \"kubernetes.io/projected/6505205d-23d4-4c99-83ac-e82d298a2805-kube-api-access-brrld\") pod \"downloads-955b69498-bdf7d\" (UID: \"6505205d-23d4-4c99-83ac-e82d298a2805\") " pod="openshift-console/downloads-955b69498-bdf7d" Feb 19 03:25:59.806188 master-0 kubenswrapper[33867]: I0219 03:25:59.805552 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 03:25:59.806188 master-0 kubenswrapper[33867]: I0219 03:25:59.805828 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 03:25:59.815314 master-0 kubenswrapper[33867]: I0219 03:25:59.815245 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-bdf7d"] Feb 19 03:25:59.906944 master-0 kubenswrapper[33867]: I0219 03:25:59.906875 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brrld\" (UniqueName: \"kubernetes.io/projected/6505205d-23d4-4c99-83ac-e82d298a2805-kube-api-access-brrld\") pod \"downloads-955b69498-bdf7d\" (UID: \"6505205d-23d4-4c99-83ac-e82d298a2805\") " pod="openshift-console/downloads-955b69498-bdf7d" Feb 19 03:25:59.922706 master-0 kubenswrapper[33867]: I0219 03:25:59.922571 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brrld\" (UniqueName: \"kubernetes.io/projected/6505205d-23d4-4c99-83ac-e82d298a2805-kube-api-access-brrld\") pod \"downloads-955b69498-bdf7d\" (UID: \"6505205d-23d4-4c99-83ac-e82d298a2805\") " pod="openshift-console/downloads-955b69498-bdf7d" Feb 19 03:26:00.129291 master-0 kubenswrapper[33867]: I0219 03:26:00.126438 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-955b69498-bdf7d" Feb 19 03:26:00.414080 master-0 kubenswrapper[33867]: I0219 03:26:00.410575 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6f58cc6f64-dchzh"] Feb 19 03:26:00.943474 master-0 kubenswrapper[33867]: I0219 03:26:00.943410 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-bdf7d"] Feb 19 03:26:01.698627 master-0 kubenswrapper[33867]: I0219 03:26:01.698585 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6df4d685bd-g7b8m_943c09ec-a2d2-40df-bbdc-351a30b33d79/telemeter-client/0.log" Feb 19 03:26:01.698860 master-0 kubenswrapper[33867]: I0219 03:26:01.698706 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" event={"ID":"943c09ec-a2d2-40df-bbdc-351a30b33d79","Type":"ContainerStarted","Data":"535dfdb7510904514902d726f6ba06771e714a34650706cfe3094b2c9a38a3c0"} Feb 19 03:26:01.698860 master-0 kubenswrapper[33867]: I0219 03:26:01.698769 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" event={"ID":"943c09ec-a2d2-40df-bbdc-351a30b33d79","Type":"ContainerStarted","Data":"692f6d7e8af3a1d4b339f72028befa3708ef2dcef8e73082e90c1d2e5c04321a"} Feb 19 03:26:01.699406 master-0 kubenswrapper[33867]: I0219 03:26:01.699366 33867 scope.go:117] "RemoveContainer" containerID="3d429b05f00a3e3e62f224bd7253a2182c3f9d80d58ad76ae8a5046a7b31e619" Feb 19 03:26:01.699850 master-0 kubenswrapper[33867]: I0219 03:26:01.699798 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-bdf7d" event={"ID":"6505205d-23d4-4c99-83ac-e82d298a2805","Type":"ContainerStarted","Data":"52001f0e0db68a788a638a8567d7ab8fddb1a7886470d790a678bfbcae963268"} Feb 19 03:26:02.714234 master-0 kubenswrapper[33867]: I0219 03:26:02.714181 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6df4d685bd-g7b8m_943c09ec-a2d2-40df-bbdc-351a30b33d79/telemeter-client/1.log" Feb 19 03:26:02.715239 master-0 kubenswrapper[33867]: I0219 03:26:02.715209 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6df4d685bd-g7b8m_943c09ec-a2d2-40df-bbdc-351a30b33d79/telemeter-client/0.log" Feb 19 03:26:02.715314 master-0 kubenswrapper[33867]: I0219 03:26:02.715272 33867 generic.go:334] "Generic (PLEG): container finished" podID="943c09ec-a2d2-40df-bbdc-351a30b33d79" containerID="6062c6166d3c0eb26f286482680ccd069d6469711e49f82dbe188387fa9e0e67" exitCode=1 Feb 19 03:26:02.715314 master-0 kubenswrapper[33867]: I0219 03:26:02.715305 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" event={"ID":"943c09ec-a2d2-40df-bbdc-351a30b33d79","Type":"ContainerDied","Data":"6062c6166d3c0eb26f286482680ccd069d6469711e49f82dbe188387fa9e0e67"} Feb 19 03:26:02.715376 master-0 kubenswrapper[33867]: I0219 03:26:02.715345 33867 scope.go:117] "RemoveContainer" containerID="3d429b05f00a3e3e62f224bd7253a2182c3f9d80d58ad76ae8a5046a7b31e619" Feb 19 03:26:02.716103 master-0 kubenswrapper[33867]: I0219 03:26:02.716062 33867 scope.go:117] "RemoveContainer" containerID="6062c6166d3c0eb26f286482680ccd069d6469711e49f82dbe188387fa9e0e67" Feb 19 03:26:02.716648 master-0 kubenswrapper[33867]: E0219 03:26:02.716411 33867 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"telemeter-client\" with CrashLoopBackOff: \"back-off 10s restarting failed container=telemeter-client pod=telemeter-client-6df4d685bd-g7b8m_openshift-monitoring(943c09ec-a2d2-40df-bbdc-351a30b33d79)\"" pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" podUID="943c09ec-a2d2-40df-bbdc-351a30b33d79" Feb 19 03:26:02.739116 master-0 kubenswrapper[33867]: I0219 03:26:02.739034 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 19 03:26:02.741102 master-0 kubenswrapper[33867]: I0219 03:26:02.739980 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="6ad84c80-367e-4ca3-a439-dfff469bc349" containerName="installer" containerID="cri-o://51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674" gracePeriod=30 Feb 19 03:26:03.733147 master-0 kubenswrapper[33867]: I0219 03:26:03.733091 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6df4d685bd-g7b8m_943c09ec-a2d2-40df-bbdc-351a30b33d79/telemeter-client/1.log" Feb 19 03:26:03.735063 master-0 kubenswrapper[33867]: I0219 03:26:03.735018 33867 scope.go:117] "RemoveContainer" containerID="6062c6166d3c0eb26f286482680ccd069d6469711e49f82dbe188387fa9e0e67" Feb 19 03:26:03.735990 master-0 kubenswrapper[33867]: E0219 03:26:03.735359 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"telemeter-client\" with CrashLoopBackOff: \"back-off 10s restarting failed container=telemeter-client pod=telemeter-client-6df4d685bd-g7b8m_openshift-monitoring(943c09ec-a2d2-40df-bbdc-351a30b33d79)\"" pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" podUID="943c09ec-a2d2-40df-bbdc-351a30b33d79" Feb 19 03:26:06.120383 master-0 kubenswrapper[33867]: I0219 03:26:06.120010 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 19 03:26:06.121077 master-0 kubenswrapper[33867]: I0219 03:26:06.121060 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:06.129674 master-0 kubenswrapper[33867]: I0219 03:26:06.129601 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-var-lock\") pod \"installer-5-master-0\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:06.130358 master-0 kubenswrapper[33867]: I0219 03:26:06.129710 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:06.131882 master-0 kubenswrapper[33867]: I0219 03:26:06.131813 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kube-api-access\") pod \"installer-5-master-0\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:06.133663 master-0 kubenswrapper[33867]: I0219 03:26:06.133606 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 19 03:26:06.233866 master-0 kubenswrapper[33867]: I0219 03:26:06.233777 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:06.233866 master-0 kubenswrapper[33867]: I0219 03:26:06.233856 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kube-api-access\") pod \"installer-5-master-0\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:06.234111 master-0 kubenswrapper[33867]: I0219 03:26:06.233917 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-var-lock\") pod \"installer-5-master-0\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:06.234111 master-0 kubenswrapper[33867]: I0219 03:26:06.234014 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-var-lock\") pod \"installer-5-master-0\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:06.234111 master-0 kubenswrapper[33867]: I0219 03:26:06.234059 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:06.258153 master-0 kubenswrapper[33867]: I0219 03:26:06.258088 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kube-api-access\") pod \"installer-5-master-0\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:06.509295 master-0 kubenswrapper[33867]: I0219 03:26:06.509212 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:06.978280 master-0 kubenswrapper[33867]: I0219 03:26:06.975765 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 19 03:26:06.985977 master-0 kubenswrapper[33867]: W0219 03:26:06.985917 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod17fbcb8d_b3b4_4d0b_bf13_1c2fdd78e212.slice/crio-9d547a193ae77e4df446f997fc168a64a7acd13e67758d515525ea4214178214 WatchSource:0}: Error finding container 9d547a193ae77e4df446f997fc168a64a7acd13e67758d515525ea4214178214: Status 404 returned error can't find the container with id 9d547a193ae77e4df446f997fc168a64a7acd13e67758d515525ea4214178214 Feb 19 03:26:07.017677 master-0 kubenswrapper[33867]: I0219 03:26:07.017623 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx"] Feb 19 03:26:07.018309 master-0 kubenswrapper[33867]: I0219 03:26:07.017861 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" podUID="06898300-c6e2-4d64-9ebf-d20f4338cccc" containerName="controller-manager" containerID="cri-o://1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75" gracePeriod=30 Feb 19 03:26:07.137918 master-0 kubenswrapper[33867]: I0219 03:26:07.137855 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk"] Feb 19 03:26:07.139982 master-0 kubenswrapper[33867]: I0219 03:26:07.138116 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" podUID="6acd115e-71e1-4a50-8892-fc6ea2927fec" containerName="route-controller-manager" containerID="cri-o://a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76" gracePeriod=30 Feb 19 03:26:07.388350 master-0 kubenswrapper[33867]: I0219 03:26:07.385231 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-74cd99cf84-cpf69"] Feb 19 03:26:07.389189 master-0 kubenswrapper[33867]: I0219 03:26:07.389013 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.395819 master-0 kubenswrapper[33867]: I0219 03:26:07.395771 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 03:26:07.396019 master-0 kubenswrapper[33867]: I0219 03:26:07.395821 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 03:26:07.396097 master-0 kubenswrapper[33867]: I0219 03:26:07.396079 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 03:26:07.396147 master-0 kubenswrapper[33867]: I0219 03:26:07.396107 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-cddzx" Feb 19 03:26:07.396337 master-0 kubenswrapper[33867]: I0219 03:26:07.396314 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 03:26:07.396507 master-0 kubenswrapper[33867]: I0219 03:26:07.396488 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 03:26:07.409954 master-0 kubenswrapper[33867]: I0219 03:26:07.409594 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74cd99cf84-cpf69"] Feb 19 03:26:07.457328 master-0 kubenswrapper[33867]: I0219 03:26:07.456186 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-oauth-config\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.457328 master-0 kubenswrapper[33867]: I0219 03:26:07.456237 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hr9v\" (UniqueName: \"kubernetes.io/projected/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-kube-api-access-7hr9v\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.457328 master-0 kubenswrapper[33867]: I0219 03:26:07.456330 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-serving-cert\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.457328 master-0 kubenswrapper[33867]: I0219 03:26:07.456612 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-oauth-serving-cert\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.457328 master-0 kubenswrapper[33867]: I0219 03:26:07.456839 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-service-ca\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.457328 master-0 
kubenswrapper[33867]: I0219 03:26:07.456901 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-config\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.572348 master-0 kubenswrapper[33867]: I0219 03:26:07.572285 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-oauth-config\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.572478 master-0 kubenswrapper[33867]: I0219 03:26:07.572371 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hr9v\" (UniqueName: \"kubernetes.io/projected/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-kube-api-access-7hr9v\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.572478 master-0 kubenswrapper[33867]: I0219 03:26:07.572439 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-serving-cert\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.573396 master-0 kubenswrapper[33867]: I0219 03:26:07.572485 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-oauth-serving-cert\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.573669 master-0 kubenswrapper[33867]: I0219 03:26:07.573636 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-service-ca\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.573752 master-0 kubenswrapper[33867]: I0219 03:26:07.573726 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-config\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.573864 master-0 kubenswrapper[33867]: I0219 03:26:07.573837 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-oauth-serving-cert\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.575209 master-0 kubenswrapper[33867]: I0219 03:26:07.574826 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-config\") pod \"console-74cd99cf84-cpf69\" (UID: 
\"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.575540 master-0 kubenswrapper[33867]: I0219 03:26:07.575502 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-service-ca\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.578012 master-0 kubenswrapper[33867]: I0219 03:26:07.577432 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-serving-cert\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.590045 master-0 kubenswrapper[33867]: I0219 03:26:07.589999 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-oauth-config\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.598150 master-0 kubenswrapper[33867]: I0219 03:26:07.598085 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hr9v\" (UniqueName: \"kubernetes.io/projected/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-kube-api-access-7hr9v\") pod \"console-74cd99cf84-cpf69\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.661065 master-0 kubenswrapper[33867]: I0219 03:26:07.659683 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:26:07.674516 master-0 kubenswrapper[33867]: I0219 03:26:07.674461 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert\") pod \"06898300-c6e2-4d64-9ebf-d20f4338cccc\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " Feb 19 03:26:07.674749 master-0 kubenswrapper[33867]: I0219 03:26:07.674528 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles\") pod \"06898300-c6e2-4d64-9ebf-d20f4338cccc\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " Feb 19 03:26:07.674796 master-0 kubenswrapper[33867]: I0219 03:26:07.674776 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca\") pod \"06898300-c6e2-4d64-9ebf-d20f4338cccc\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " Feb 19 03:26:07.676373 master-0 kubenswrapper[33867]: I0219 03:26:07.674826 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnq2j\" (UniqueName: \"kubernetes.io/projected/06898300-c6e2-4d64-9ebf-d20f4338cccc-kube-api-access-rnq2j\") pod \"06898300-c6e2-4d64-9ebf-d20f4338cccc\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " Feb 19 03:26:07.676373 master-0 kubenswrapper[33867]: I0219 03:26:07.674875 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config\") pod \"06898300-c6e2-4d64-9ebf-d20f4338cccc\" (UID: \"06898300-c6e2-4d64-9ebf-d20f4338cccc\") " Feb 19 03:26:07.676373 master-0 kubenswrapper[33867]: I0219 03:26:07.675329 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca" (OuterVolumeSpecName: "client-ca") pod "06898300-c6e2-4d64-9ebf-d20f4338cccc" (UID: "06898300-c6e2-4d64-9ebf-d20f4338cccc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:07.676373 master-0 kubenswrapper[33867]: I0219 03:26:07.675342 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "06898300-c6e2-4d64-9ebf-d20f4338cccc" (UID: "06898300-c6e2-4d64-9ebf-d20f4338cccc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:07.676373 master-0 kubenswrapper[33867]: I0219 03:26:07.675796 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config" (OuterVolumeSpecName: "config") pod "06898300-c6e2-4d64-9ebf-d20f4338cccc" (UID: "06898300-c6e2-4d64-9ebf-d20f4338cccc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:07.678922 master-0 kubenswrapper[33867]: I0219 03:26:07.678854 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06898300-c6e2-4d64-9ebf-d20f4338cccc-kube-api-access-rnq2j" (OuterVolumeSpecName: "kube-api-access-rnq2j") pod "06898300-c6e2-4d64-9ebf-d20f4338cccc" (UID: "06898300-c6e2-4d64-9ebf-d20f4338cccc"). InnerVolumeSpecName "kube-api-access-rnq2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:26:07.679065 master-0 kubenswrapper[33867]: I0219 03:26:07.678967 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "06898300-c6e2-4d64-9ebf-d20f4338cccc" (UID: "06898300-c6e2-4d64-9ebf-d20f4338cccc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:07.683401 master-0 kubenswrapper[33867]: I0219 03:26:07.683352 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:26:07.746926 master-0 kubenswrapper[33867]: I0219 03:26:07.746844 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:07.776140 master-0 kubenswrapper[33867]: I0219 03:26:07.776099 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert\") pod \"6acd115e-71e1-4a50-8892-fc6ea2927fec\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " Feb 19 03:26:07.776307 master-0 kubenswrapper[33867]: I0219 03:26:07.776287 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca\") pod \"6acd115e-71e1-4a50-8892-fc6ea2927fec\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " Feb 19 03:26:07.776423 master-0 kubenswrapper[33867]: I0219 03:26:07.776405 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlhnq\" (UniqueName: \"kubernetes.io/projected/6acd115e-71e1-4a50-8892-fc6ea2927fec-kube-api-access-dlhnq\") pod \"6acd115e-71e1-4a50-8892-fc6ea2927fec\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " Feb 19 03:26:07.776548 master-0 kubenswrapper[33867]: I0219 03:26:07.776532 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config\") pod \"6acd115e-71e1-4a50-8892-fc6ea2927fec\" (UID: \"6acd115e-71e1-4a50-8892-fc6ea2927fec\") " Feb 19 03:26:07.777198 master-0 kubenswrapper[33867]: I0219 03:26:07.777175 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnq2j\" (UniqueName: \"kubernetes.io/projected/06898300-c6e2-4d64-9ebf-d20f4338cccc-kube-api-access-rnq2j\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:07.777323 master-0 kubenswrapper[33867]: I0219 03:26:07.777308 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:07.777410 master-0 kubenswrapper[33867]: I0219 03:26:07.777398 33867 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06898300-c6e2-4d64-9ebf-d20f4338cccc-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:07.777489 master-0 kubenswrapper[33867]: I0219 03:26:07.777476 33867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:07.777564 master-0 kubenswrapper[33867]: I0219 03:26:07.777552 33867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06898300-c6e2-4d64-9ebf-d20f4338cccc-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:07.778344 master-0 kubenswrapper[33867]: I0219 03:26:07.778326 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config" (OuterVolumeSpecName: "config") pod "6acd115e-71e1-4a50-8892-fc6ea2927fec" (UID: "6acd115e-71e1-4a50-8892-fc6ea2927fec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:07.778823 master-0 kubenswrapper[33867]: I0219 03:26:07.778806 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca" (OuterVolumeSpecName: "client-ca") pod "6acd115e-71e1-4a50-8892-fc6ea2927fec" (UID: "6acd115e-71e1-4a50-8892-fc6ea2927fec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:07.780883 master-0 kubenswrapper[33867]: I0219 03:26:07.780463 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6acd115e-71e1-4a50-8892-fc6ea2927fec" (UID: "6acd115e-71e1-4a50-8892-fc6ea2927fec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:07.782138 master-0 kubenswrapper[33867]: I0219 03:26:07.782029 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acd115e-71e1-4a50-8892-fc6ea2927fec-kube-api-access-dlhnq" (OuterVolumeSpecName: "kube-api-access-dlhnq") pod "6acd115e-71e1-4a50-8892-fc6ea2927fec" (UID: "6acd115e-71e1-4a50-8892-fc6ea2927fec"). InnerVolumeSpecName "kube-api-access-dlhnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:26:07.782795 master-0 kubenswrapper[33867]: I0219 03:26:07.782734 33867 generic.go:334] "Generic (PLEG): container finished" podID="06898300-c6e2-4d64-9ebf-d20f4338cccc" containerID="1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75" exitCode=0 Feb 19 03:26:07.783042 master-0 kubenswrapper[33867]: I0219 03:26:07.782999 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" event={"ID":"06898300-c6e2-4d64-9ebf-d20f4338cccc","Type":"ContainerDied","Data":"1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75"} Feb 19 03:26:07.783087 master-0 kubenswrapper[33867]: I0219 03:26:07.783057 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" event={"ID":"06898300-c6e2-4d64-9ebf-d20f4338cccc","Type":"ContainerDied","Data":"9f34b77802d18424b8b09571a545a52e9fcc1be93f02c10a74325b38bef31cc8"} Feb 19 03:26:07.783122 master-0 kubenswrapper[33867]: I0219 03:26:07.783085 33867 scope.go:117] "RemoveContainer" containerID="1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75" Feb 19 03:26:07.783575 master-0 kubenswrapper[33867]: I0219 03:26:07.783549 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx" Feb 19 03:26:07.788883 master-0 kubenswrapper[33867]: I0219 03:26:07.788791 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212","Type":"ContainerStarted","Data":"01e081145a81d0517b2b4107d7aa20c20e0006874c27f3f32d55fdb78573efca"} Feb 19 03:26:07.788951 master-0 kubenswrapper[33867]: I0219 03:26:07.788897 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212","Type":"ContainerStarted","Data":"9d547a193ae77e4df446f997fc168a64a7acd13e67758d515525ea4214178214"} Feb 19 03:26:07.791849 master-0 kubenswrapper[33867]: I0219 03:26:07.791694 33867 generic.go:334] "Generic (PLEG): container finished" podID="6acd115e-71e1-4a50-8892-fc6ea2927fec" containerID="a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76" exitCode=0 Feb 19 03:26:07.791985 master-0 kubenswrapper[33867]: I0219 03:26:07.791749 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" Feb 19 03:26:07.792114 master-0 kubenswrapper[33867]: I0219 03:26:07.791752 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" event={"ID":"6acd115e-71e1-4a50-8892-fc6ea2927fec","Type":"ContainerDied","Data":"a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76"} Feb 19 03:26:07.792114 master-0 kubenswrapper[33867]: I0219 03:26:07.792095 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk" event={"ID":"6acd115e-71e1-4a50-8892-fc6ea2927fec","Type":"ContainerDied","Data":"75ebc0148d076f2cc0fe06e466687642989770890443a44d9864ba7cf21ec2cd"} Feb 19 03:26:07.814730 master-0 kubenswrapper[33867]: I0219 03:26:07.814576 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=1.8145533980000002 podStartE2EDuration="1.814553398s" podCreationTimestamp="2026-02-19 03:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:26:07.813856439 +0000 UTC m=+173.110527050" watchObservedRunningTime="2026-02-19 03:26:07.814553398 +0000 UTC m=+173.111224009" Feb 19 03:26:07.818157 master-0 kubenswrapper[33867]: I0219 03:26:07.818043 33867 scope.go:117] "RemoveContainer" containerID="8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668" Feb 19 03:26:07.833976 master-0 kubenswrapper[33867]: I0219 03:26:07.833902 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx"] Feb 19 03:26:07.841141 master-0 kubenswrapper[33867]: I0219 03:26:07.841056 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx"] Feb 19 03:26:07.846307 master-0 kubenswrapper[33867]: I0219 03:26:07.846242 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk"] Feb 19 03:26:07.848926 master-0 kubenswrapper[33867]: I0219 03:26:07.848866 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk"] Feb 19 03:26:07.851133 master-0 kubenswrapper[33867]: I0219 03:26:07.851034 33867 scope.go:117] "RemoveContainer" containerID="1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75" Feb 19 03:26:07.854053 master-0 kubenswrapper[33867]: E0219 03:26:07.854025 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75\": container with ID starting with 1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75 not found: ID does not exist" containerID="1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75" Feb 19 03:26:07.854174 master-0 kubenswrapper[33867]: I0219 03:26:07.854070 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75"} err="failed to get container status \"1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75\": rpc error: code = NotFound desc = could not find container 
\"1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75\": container with ID starting with 1da1c08178057114ecb3f754ba448bf6649a324642bb3846d25496518bb20f75 not found: ID does not exist" Feb 19 03:26:07.854174 master-0 kubenswrapper[33867]: I0219 03:26:07.854099 33867 scope.go:117] "RemoveContainer" containerID="8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668" Feb 19 03:26:07.858811 master-0 kubenswrapper[33867]: E0219 03:26:07.858708 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668\": container with ID starting with 8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668 not found: ID does not exist" containerID="8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668" Feb 19 03:26:07.858811 master-0 kubenswrapper[33867]: I0219 03:26:07.858741 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668"} err="failed to get container status \"8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668\": rpc error: code = NotFound desc = could not find container \"8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668\": container with ID starting with 8d3347fca4c620117164474c29989987c95e6927258918a03ae4d23dda348668 not found: ID does not exist" Feb 19 03:26:07.858811 master-0 kubenswrapper[33867]: I0219 03:26:07.858757 33867 scope.go:117] "RemoveContainer" containerID="a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76" Feb 19 03:26:07.884947 master-0 kubenswrapper[33867]: I0219 03:26:07.881537 33867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6acd115e-71e1-4a50-8892-fc6ea2927fec-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:07.884947 master-0 kubenswrapper[33867]: I0219 03:26:07.881580 33867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:07.884947 master-0 kubenswrapper[33867]: I0219 03:26:07.881591 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlhnq\" (UniqueName: \"kubernetes.io/projected/6acd115e-71e1-4a50-8892-fc6ea2927fec-kube-api-access-dlhnq\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:07.884947 master-0 kubenswrapper[33867]: I0219 03:26:07.881604 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acd115e-71e1-4a50-8892-fc6ea2927fec-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:07.893660 master-0 kubenswrapper[33867]: I0219 03:26:07.893606 33867 scope.go:117] "RemoveContainer" containerID="a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76" Feb 19 03:26:07.894185 master-0 kubenswrapper[33867]: E0219 03:26:07.894147 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76\": container with ID starting with a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76 not found: ID does not exist" containerID="a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76" Feb 19 03:26:07.894271 master-0 kubenswrapper[33867]: I0219 03:26:07.894186 33867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76"} err="failed to get container status \"a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76\": rpc error: code = NotFound desc = could not find container \"a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76\": container with ID starting with a3841c599ee06d3fb84ed707f5141094a89f6270e8e7ca27d10148057a5b0f76 not found: ID does not exist" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: I0219 03:26:08.251334 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f5db64649-7zbbm"] Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: E0219 03:26:08.251751 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06898300-c6e2-4d64-9ebf-d20f4338cccc" containerName="controller-manager" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: I0219 03:26:08.251767 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="06898300-c6e2-4d64-9ebf-d20f4338cccc" containerName="controller-manager" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: E0219 03:26:08.251784 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06898300-c6e2-4d64-9ebf-d20f4338cccc" containerName="controller-manager" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: I0219 03:26:08.251792 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="06898300-c6e2-4d64-9ebf-d20f4338cccc" containerName="controller-manager" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: E0219 03:26:08.251821 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acd115e-71e1-4a50-8892-fc6ea2927fec" containerName="route-controller-manager" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: I0219 03:26:08.251829 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acd115e-71e1-4a50-8892-fc6ea2927fec" containerName="route-controller-manager" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: I0219 03:26:08.251970 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="06898300-c6e2-4d64-9ebf-d20f4338cccc" containerName="controller-manager" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: I0219 03:26:08.251993 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="06898300-c6e2-4d64-9ebf-d20f4338cccc" containerName="controller-manager" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: I0219 03:26:08.252018 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6acd115e-71e1-4a50-8892-fc6ea2927fec" containerName="route-controller-manager" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: I0219 03:26:08.252635 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: I0219 03:26:08.256987 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-26rv4" Feb 19 03:26:08.257368 master-0 kubenswrapper[33867]: I0219 03:26:08.257358 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 03:26:08.264305 master-0 kubenswrapper[33867]: I0219 03:26:08.258562 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8"] Feb 19 03:26:08.264305 master-0 kubenswrapper[33867]: I0219 03:26:08.259870 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.264305 master-0 kubenswrapper[33867]: I0219 03:26:08.262161 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74cd99cf84-cpf69"] Feb 19 03:26:08.265715 master-0 kubenswrapper[33867]: I0219 03:26:08.265674 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 03:26:08.266515 master-0 kubenswrapper[33867]: I0219 03:26:08.266330 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 03:26:08.266566 master-0 kubenswrapper[33867]: I0219 03:26:08.266541 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 03:26:08.266645 master-0 kubenswrapper[33867]: I0219 03:26:08.266624 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 03:26:08.268176 master-0 kubenswrapper[33867]: I0219 03:26:08.268143 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 03:26:08.272408 master-0 kubenswrapper[33867]: I0219 03:26:08.270641 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f5db64649-7zbbm"] Feb 19 03:26:08.272682 master-0 kubenswrapper[33867]: I0219 03:26:08.272565 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 03:26:08.273615 master-0 kubenswrapper[33867]: I0219 03:26:08.272746 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 03:26:08.273615 master-0 kubenswrapper[33867]: I0219 03:26:08.273191 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-mfb9m" Feb 19 03:26:08.273615 master-0 kubenswrapper[33867]: I0219 03:26:08.273361 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 03:26:08.273615 master-0 kubenswrapper[33867]: I0219 03:26:08.273517 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 03:26:08.281137 master-0 kubenswrapper[33867]: I0219 03:26:08.274923 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8"] Feb 19 03:26:08.287894 master-0 kubenswrapper[33867]: I0219 03:26:08.284285 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 03:26:08.290129 master-0 kubenswrapper[33867]: I0219 03:26:08.289512 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e15e655c-7da5-4e98-bf24-a749d3585b75-serving-cert\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.290129 master-0 kubenswrapper[33867]: I0219 03:26:08.289606 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e15e655c-7da5-4e98-bf24-a749d3585b75-config\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.290129 master-0 kubenswrapper[33867]: I0219 03:26:08.289658 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e15e655c-7da5-4e98-bf24-a749d3585b75-client-ca\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.290129 master-0 kubenswrapper[33867]: I0219 03:26:08.289700 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e15e655c-7da5-4e98-bf24-a749d3585b75-proxy-ca-bundles\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.290129 master-0 kubenswrapper[33867]: I0219 03:26:08.289732 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c33a962b-7056-4067-8f19-2ba847541a6f-serving-cert\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.290129 master-0 kubenswrapper[33867]: I0219 03:26:08.290057 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c33a962b-7056-4067-8f19-2ba847541a6f-client-ca\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.290374 master-0 kubenswrapper[33867]: I0219 03:26:08.290209 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqrkc\" (UniqueName: \"kubernetes.io/projected/c33a962b-7056-4067-8f19-2ba847541a6f-kube-api-access-bqrkc\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.290414 master-0 kubenswrapper[33867]: I0219 
03:26:08.290365 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lbwj\" (UniqueName: \"kubernetes.io/projected/e15e655c-7da5-4e98-bf24-a749d3585b75-kube-api-access-9lbwj\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.290414 master-0 kubenswrapper[33867]: I0219 03:26:08.290405 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33a962b-7056-4067-8f19-2ba847541a6f-config\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.396109 master-0 kubenswrapper[33867]: I0219 03:26:08.396035 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e15e655c-7da5-4e98-bf24-a749d3585b75-serving-cert\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.396109 master-0 kubenswrapper[33867]: I0219 03:26:08.396096 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e15e655c-7da5-4e98-bf24-a749d3585b75-config\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.396403 master-0 kubenswrapper[33867]: I0219 03:26:08.396127 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e15e655c-7da5-4e98-bf24-a749d3585b75-client-ca\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.396403 master-0 kubenswrapper[33867]: I0219 03:26:08.396156 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e15e655c-7da5-4e98-bf24-a749d3585b75-proxy-ca-bundles\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.396403 master-0 kubenswrapper[33867]: I0219 03:26:08.396179 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c33a962b-7056-4067-8f19-2ba847541a6f-serving-cert\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.396403 master-0 kubenswrapper[33867]: I0219 03:26:08.396225 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c33a962b-7056-4067-8f19-2ba847541a6f-client-ca\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.401411 master-0 kubenswrapper[33867]: I0219 03:26:08.396266 33867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqrkc\" (UniqueName: \"kubernetes.io/projected/c33a962b-7056-4067-8f19-2ba847541a6f-kube-api-access-bqrkc\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.401705 master-0 kubenswrapper[33867]: I0219 03:26:08.401655 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lbwj\" (UniqueName: \"kubernetes.io/projected/e15e655c-7da5-4e98-bf24-a749d3585b75-kube-api-access-9lbwj\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.401794 master-0 kubenswrapper[33867]: I0219 03:26:08.401775 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33a962b-7056-4067-8f19-2ba847541a6f-config\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.411188 master-0 kubenswrapper[33867]: I0219 03:26:08.411132 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33a962b-7056-4067-8f19-2ba847541a6f-config\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.419513 master-0 kubenswrapper[33867]: I0219 03:26:08.418543 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e15e655c-7da5-4e98-bf24-a749d3585b75-config\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.419513 master-0 kubenswrapper[33867]: I0219 03:26:08.419370 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e15e655c-7da5-4e98-bf24-a749d3585b75-proxy-ca-bundles\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.419513 master-0 kubenswrapper[33867]: I0219 03:26:08.419465 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c33a962b-7056-4067-8f19-2ba847541a6f-client-ca\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.420589 master-0 kubenswrapper[33867]: I0219 03:26:08.419851 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e15e655c-7da5-4e98-bf24-a749d3585b75-client-ca\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.423364 master-0 kubenswrapper[33867]: I0219 03:26:08.421583 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c33a962b-7056-4067-8f19-2ba847541a6f-serving-cert\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.424830 master-0 kubenswrapper[33867]: I0219 03:26:08.424680 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e15e655c-7da5-4e98-bf24-a749d3585b75-serving-cert\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.444646 master-0 kubenswrapper[33867]: I0219 03:26:08.444572 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lbwj\" (UniqueName: \"kubernetes.io/projected/e15e655c-7da5-4e98-bf24-a749d3585b75-kube-api-access-9lbwj\") pod \"controller-manager-6f5db64649-7zbbm\" (UID: \"e15e655c-7da5-4e98-bf24-a749d3585b75\") " pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.462149 master-0 kubenswrapper[33867]: I0219 03:26:08.461884 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqrkc\" (UniqueName: \"kubernetes.io/projected/c33a962b-7056-4067-8f19-2ba847541a6f-kube-api-access-bqrkc\") pod \"route-controller-manager-d5789dcc6-s8xw8\" (UID: \"c33a962b-7056-4067-8f19-2ba847541a6f\") " pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.528390 master-0 kubenswrapper[33867]: I0219 03:26:08.527929 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:08.547830 master-0 kubenswrapper[33867]: I0219 03:26:08.547769 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:08.811019 master-0 kubenswrapper[33867]: I0219 03:26:08.810234 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74cd99cf84-cpf69" event={"ID":"89199d30-e6ec-4748-80d2-9edaf1b3dfc9","Type":"ContainerStarted","Data":"dbcbfe4c8cf4477f3e3755e5c50e43f5c7c4102882f492a6d47199930140b3e6"} Feb 19 03:26:08.976522 master-0 kubenswrapper[33867]: I0219 03:26:08.976454 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06898300-c6e2-4d64-9ebf-d20f4338cccc" path="/var/lib/kubelet/pods/06898300-c6e2-4d64-9ebf-d20f4338cccc/volumes" Feb 19 03:26:08.977727 master-0 kubenswrapper[33867]: I0219 03:26:08.977703 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6acd115e-71e1-4a50-8892-fc6ea2927fec" path="/var/lib/kubelet/pods/6acd115e-71e1-4a50-8892-fc6ea2927fec/volumes" Feb 19 03:26:09.022371 master-0 kubenswrapper[33867]: I0219 03:26:09.017711 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8"] Feb 19 03:26:09.036746 master-0 kubenswrapper[33867]: W0219 03:26:09.036655 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc33a962b_7056_4067_8f19_2ba847541a6f.slice/crio-266818a2ccf9f577120b874f2fab466b0013fff87c6fcc04a5e02baffe9d2812 WatchSource:0}: Error finding container 266818a2ccf9f577120b874f2fab466b0013fff87c6fcc04a5e02baffe9d2812: Status 404 returned error can't find the container with id 266818a2ccf9f577120b874f2fab466b0013fff87c6fcc04a5e02baffe9d2812 Feb 19 03:26:09.612315 master-0 kubenswrapper[33867]: I0219 03:26:09.612217 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f5db64649-7zbbm"] Feb 19 03:26:09.622659 master-0 kubenswrapper[33867]: W0219 03:26:09.622590 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode15e655c_7da5_4e98_bf24_a749d3585b75.slice/crio-ceefba98de6d802e91e3416c4358ecbff91e01631ea9cb8d2c212f413abc519e WatchSource:0}: Error finding container ceefba98de6d802e91e3416c4358ecbff91e01631ea9cb8d2c212f413abc519e: Status 404 returned error can't find the container with id ceefba98de6d802e91e3416c4358ecbff91e01631ea9cb8d2c212f413abc519e Feb 19 03:26:09.847055 master-0 kubenswrapper[33867]: I0219 03:26:09.846774 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" event={"ID":"c33a962b-7056-4067-8f19-2ba847541a6f","Type":"ContainerStarted","Data":"0ccfd467f5c707c9fbfc675001ba950e5bdcce1a0d7820691e87ed8248aa5691"} Feb 19 03:26:09.847055 master-0 kubenswrapper[33867]: I0219 03:26:09.846851 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" event={"ID":"c33a962b-7056-4067-8f19-2ba847541a6f","Type":"ContainerStarted","Data":"266818a2ccf9f577120b874f2fab466b0013fff87c6fcc04a5e02baffe9d2812"} Feb 19 03:26:09.847414 master-0 kubenswrapper[33867]: I0219 03:26:09.847376 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:09.851107 master-0 kubenswrapper[33867]: I0219 03:26:09.850835 33867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" event={"ID":"e15e655c-7da5-4e98-bf24-a749d3585b75","Type":"ContainerStarted","Data":"ceefba98de6d802e91e3416c4358ecbff91e01631ea9cb8d2c212f413abc519e"} Feb 19 03:26:10.596298 master-0 kubenswrapper[33867]: I0219 03:26:10.595470 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" Feb 19 03:26:10.647318 master-0 kubenswrapper[33867]: I0219 03:26:10.646832 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8" podStartSLOduration=3.646807285 podStartE2EDuration="3.646807285s" podCreationTimestamp="2026-02-19 03:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:26:10.568750907 +0000 UTC m=+175.865421518" watchObservedRunningTime="2026-02-19 03:26:10.646807285 +0000 UTC m=+175.943477896" Feb 19 03:26:10.755328 master-0 kubenswrapper[33867]: I0219 03:26:10.751194 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-677f65b5df-p8qrj"] Feb 19 03:26:10.755328 master-0 kubenswrapper[33867]: I0219 03:26:10.753017 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.781374 master-0 kubenswrapper[33867]: I0219 03:26:10.781290 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-677f65b5df-p8qrj"] Feb 19 03:26:10.798226 master-0 kubenswrapper[33867]: I0219 03:26:10.795965 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 03:26:10.854692 master-0 kubenswrapper[33867]: I0219 03:26:10.853493 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 03:26:10.864742 master-0 kubenswrapper[33867]: I0219 03:26:10.862284 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.867515 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-oauth-config\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.867584 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-trusted-ca-bundle\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.867635 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-service-ca\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.870641 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-oauth-serving-cert\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.870852 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-console-config\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.870939 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.870999 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lnmx\" (UniqueName: \"kubernetes.io/projected/e376877b-f5c6-4a73-a959-cde9c466252a-kube-api-access-9lnmx\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.871161 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-serving-cert\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.871222 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.871239 33867 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.871426 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-qlddr" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.871455 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.871654 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.871714 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.871816 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 19 03:26:10.876320 master-0 kubenswrapper[33867]: I0219 03:26:10.874335 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 19 03:26:10.896344 master-0 kubenswrapper[33867]: I0219 03:26:10.891415 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 19 03:26:10.940284 master-0 kubenswrapper[33867]: I0219 03:26:10.936108 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" event={"ID":"e15e655c-7da5-4e98-bf24-a749d3585b75","Type":"ContainerStarted","Data":"eebc616b95b7c7212670ebeb7269278f732e606ce18ab6c00cdb57d2f9119c0c"} Feb 19 03:26:10.940284 master-0 kubenswrapper[33867]: I0219 03:26:10.938084 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973299 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973392 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-service-ca\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973412 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973439 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-oauth-serving-cert\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973467 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-console-config\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973537 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973568 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-web-config\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973592 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973609 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973644 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lnmx\" (UniqueName: \"kubernetes.io/projected/e376877b-f5c6-4a73-a959-cde9c466252a-kube-api-access-9lnmx\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973665 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973684 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: 
\"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973701 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r5sh\" (UniqueName: \"kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-kube-api-access-9r5sh\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973722 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-config-volume\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973738 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973824 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-serving-cert\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973854 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-config-out\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973876 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-oauth-config\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.975320 master-0 kubenswrapper[33867]: I0219 03:26:10.973896 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-trusted-ca-bundle\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.976503 master-0 kubenswrapper[33867]: I0219 03:26:10.976472 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-service-ca\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.978386 master-0 kubenswrapper[33867]: I0219 03:26:10.977940 33867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-oauth-serving-cert\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.982014 master-0 kubenswrapper[33867]: I0219 03:26:10.979307 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-trusted-ca-bundle\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.982295 master-0 kubenswrapper[33867]: I0219 03:26:10.982192 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-oauth-config\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.984736 master-0 kubenswrapper[33867]: I0219 03:26:10.984621 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-console-config\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:10.998275 master-0 kubenswrapper[33867]: I0219 03:26:10.993187 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" podStartSLOduration=3.993160098 podStartE2EDuration="3.993160098s" podCreationTimestamp="2026-02-19 03:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:26:10.966101709 +0000 UTC m=+176.262772330" watchObservedRunningTime="2026-02-19 03:26:10.993160098 +0000 UTC m=+176.289830709" Feb 19 03:26:11.010225 master-0 kubenswrapper[33867]: I0219 03:26:11.008402 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-serving-cert\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:11.026373 master-0 kubenswrapper[33867]: I0219 03:26:11.024338 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f5db64649-7zbbm" Feb 19 03:26:11.028101 master-0 kubenswrapper[33867]: I0219 03:26:11.028063 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lnmx\" (UniqueName: \"kubernetes.io/projected/e376877b-f5c6-4a73-a959-cde9c466252a-kube-api-access-9lnmx\") pod \"console-677f65b5df-p8qrj\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:11.080150 master-0 kubenswrapper[33867]: I0219 03:26:11.080052 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 
03:26:11.085743 master-0 kubenswrapper[33867]: I0219 03:26:11.085564 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.085743 master-0 kubenswrapper[33867]: I0219 03:26:11.085705 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.085925 master-0 kubenswrapper[33867]: I0219 03:26:11.085758 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-web-config\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.085925 master-0 kubenswrapper[33867]: I0219 03:26:11.085801 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.085925 master-0 kubenswrapper[33867]: I0219 03:26:11.085824 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.085925 master-0 kubenswrapper[33867]: I0219 03:26:11.085880 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.085925 master-0 kubenswrapper[33867]: I0219 03:26:11.085905 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.085925 master-0 kubenswrapper[33867]: I0219 03:26:11.085927 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r5sh\" (UniqueName: \"kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-kube-api-access-9r5sh\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.086173 master-0 kubenswrapper[33867]: I0219 03:26:11.085960 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-config-volume\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.086173 master-0 kubenswrapper[33867]: I0219 03:26:11.085979 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.086173 master-0 kubenswrapper[33867]: I0219 03:26:11.086104 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-config-out\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.087190 master-0 kubenswrapper[33867]: I0219 03:26:11.087146 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.090352 master-0 kubenswrapper[33867]: I0219 03:26:11.090315 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-config-out\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.096808 master-0 kubenswrapper[33867]: I0219 03:26:11.096154 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.097502 master-0 kubenswrapper[33867]: I0219 03:26:11.097090 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.097502 master-0 kubenswrapper[33867]: I0219 03:26:11.097439 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.102421 master-0 kubenswrapper[33867]: I0219 03:26:11.101816 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:11.103528 master-0 kubenswrapper[33867]: I0219 03:26:11.103418 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.109040 master-0 kubenswrapper[33867]: I0219 03:26:11.108956 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.109920 master-0 kubenswrapper[33867]: I0219 03:26:11.109870 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-web-config\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.112820 master-0 kubenswrapper[33867]: I0219 03:26:11.111983 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-config-volume\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.112820 master-0 kubenswrapper[33867]: I0219 03:26:11.112292 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.112820 master-0 kubenswrapper[33867]: I0219 03:26:11.112795 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.144580 master-0 kubenswrapper[33867]: I0219 03:26:11.144166 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r5sh\" (UniqueName: \"kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-kube-api-access-9r5sh\") pod \"alertmanager-main-0\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:11.231037 master-0 kubenswrapper[33867]: I0219 03:26:11.230936 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:26:13.719198 master-0 kubenswrapper[33867]: I0219 03:26:13.719108 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 03:26:13.723503 master-0 kubenswrapper[33867]: I0219 03:26:13.723451 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.727648 master-0 kubenswrapper[33867]: I0219 03:26:13.727598 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 19 03:26:13.728637 master-0 kubenswrapper[33867]: I0219 03:26:13.728609 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 19 03:26:13.728776 master-0 kubenswrapper[33867]: I0219 03:26:13.728723 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 19 03:26:13.728947 master-0 kubenswrapper[33867]: I0219 03:26:13.728753 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-25h6f" Feb 19 03:26:13.729093 master-0 kubenswrapper[33867]: I0219 03:26:13.729059 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 19 03:26:13.729159 master-0 kubenswrapper[33867]: I0219 03:26:13.729136 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 19 03:26:13.729237 master-0 kubenswrapper[33867]: I0219 03:26:13.729199 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-1e3s0akbul7uf" Feb 19 03:26:13.729318 master-0 kubenswrapper[33867]: I0219 03:26:13.728914 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 19 03:26:13.729643 master-0 kubenswrapper[33867]: I0219 03:26:13.729616 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 19 03:26:13.729847 master-0 kubenswrapper[33867]: I0219 03:26:13.729824 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 19 03:26:13.730549 master-0 kubenswrapper[33867]: I0219 03:26:13.730519 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 19 03:26:13.738155 master-0 kubenswrapper[33867]: I0219 03:26:13.738043 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 19 03:26:13.768177 master-0 kubenswrapper[33867]: I0219 03:26:13.764352 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.784679 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.792528 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.792686 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.792738 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.792764 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.792828 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.792902 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.793009 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.793104 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-config-out\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.793169 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.793674 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-thanos-sidecar-tls\") 
pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.793752 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.793799 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.793855 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.793893 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-config\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.793972 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-web-config\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.794018 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gc2q\" (UniqueName: \"kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-kube-api-access-8gc2q\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.794135 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.795321 master-0 kubenswrapper[33867]: I0219 03:26:13.794171 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.895828 
33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-config-out\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.895968 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896006 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896079 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896103 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896127 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896148 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-config\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896233 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-web-config\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896271 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gc2q\" (UniqueName: \"kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-kube-api-access-8gc2q\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 
03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896311 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896332 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896358 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896379 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896401 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896416 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896437 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896460 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.897294 master-0 kubenswrapper[33867]: I0219 03:26:13.896491 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.898343 master-0 kubenswrapper[33867]: I0219 03:26:13.898133 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.900137 master-0 kubenswrapper[33867]: I0219 03:26:13.900104 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.900277 master-0 kubenswrapper[33867]: I0219 03:26:13.900220 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.900907 master-0 kubenswrapper[33867]: I0219 03:26:13.900489 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.902801 master-0 kubenswrapper[33867]: I0219 03:26:13.901055 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.914196 master-0 kubenswrapper[33867]: I0219 03:26:13.914124 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.914651 master-0 kubenswrapper[33867]: I0219 03:26:13.914616 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-config\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.917266 master-0 kubenswrapper[33867]: I0219 03:26:13.915375 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.917266 master-0 kubenswrapper[33867]: I0219 03:26:13.915461 33867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-config-out\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.917266 master-0 kubenswrapper[33867]: I0219 03:26:13.915491 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.917266 master-0 kubenswrapper[33867]: I0219 03:26:13.916362 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.917266 master-0 kubenswrapper[33867]: I0219 03:26:13.917035 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-web-config\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.918333 master-0 kubenswrapper[33867]: I0219 03:26:13.917872 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gc2q\" (UniqueName: \"kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-kube-api-access-8gc2q\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.918333 master-0 kubenswrapper[33867]: I0219 03:26:13.918286 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.918429 master-0 kubenswrapper[33867]: I0219 03:26:13.918330 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.918860 master-0 kubenswrapper[33867]: I0219 03:26:13.918802 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.920339 master-0 kubenswrapper[33867]: I0219 03:26:13.920243 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:13.924763 master-0 kubenswrapper[33867]: I0219 03:26:13.924726 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:14.115090 master-0 kubenswrapper[33867]: I0219 03:26:14.114883 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:14.960916 master-0 kubenswrapper[33867]: I0219 03:26:14.959943 33867 scope.go:117] "RemoveContainer" containerID="d0fbcab1791c1fa93d0b8382e393526b12e53a1efcdb373eae2fce501c101408" Feb 19 03:26:16.068044 master-0 kubenswrapper[33867]: I0219 03:26:16.067895 33867 scope.go:117] "RemoveContainer" containerID="0cf7d392da6a301b93f30bcc03748c612e502b9e965838935f8e427396fbdf21" Feb 19 03:26:16.206636 master-0 kubenswrapper[33867]: I0219 03:26:16.206562 33867 scope.go:117] "RemoveContainer" containerID="d4ec4e49d4dd98a02afe5ae82b828a0c598d3a1b8c49a3c9012f434a6bee2385" Feb 19 03:26:16.797436 master-0 kubenswrapper[33867]: I0219 03:26:16.797191 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-677f65b5df-p8qrj"] Feb 19 03:26:16.815093 master-0 kubenswrapper[33867]: W0219 03:26:16.815010 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode376877b_f5c6_4a73_a959_cde9c466252a.slice/crio-a0089720bb00eccd65042b4f592ae5d2fdd2d08c6dfab13c05bbca8f8764d382 WatchSource:0}: Error finding container a0089720bb00eccd65042b4f592ae5d2fdd2d08c6dfab13c05bbca8f8764d382: Status 404 returned error can't find the container with id a0089720bb00eccd65042b4f592ae5d2fdd2d08c6dfab13c05bbca8f8764d382 Feb 19 03:26:16.819321 master-0 kubenswrapper[33867]: I0219 03:26:16.819240 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 03:26:16.828826 master-0 kubenswrapper[33867]: W0219 03:26:16.828659 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67a1a372_6b54_4903_a7de_cce85bd4c904.slice/crio-266e24246c059d07473e58e23e2e87821a0feae386cac298b824a0fa5596f7d8 WatchSource:0}: Error finding container 266e24246c059d07473e58e23e2e87821a0feae386cac298b824a0fa5596f7d8: Status 404 returned error can't find the container with id 266e24246c059d07473e58e23e2e87821a0feae386cac298b824a0fa5596f7d8 Feb 19 03:26:16.955579 master-0 kubenswrapper[33867]: I0219 03:26:16.955536 33867 scope.go:117] "RemoveContainer" containerID="6062c6166d3c0eb26f286482680ccd069d6469711e49f82dbe188387fa9e0e67" Feb 19 03:26:17.019861 master-0 kubenswrapper[33867]: I0219 03:26:17.019745 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 03:26:17.045372 master-0 kubenswrapper[33867]: I0219 03:26:17.044590 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-677f65b5df-p8qrj" event={"ID":"e376877b-f5c6-4a73-a959-cde9c466252a","Type":"ContainerStarted","Data":"a0089720bb00eccd65042b4f592ae5d2fdd2d08c6dfab13c05bbca8f8764d382"} Feb 19 03:26:17.045372 master-0 kubenswrapper[33867]: W0219 03:26:17.044721 33867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb558ca3e_01df_4a0a_8f76_e81247053c03.slice/crio-b66494c48119740bc6edfb285e35655e715735720f72b6bb4c3bc84ad9b7f5c0 WatchSource:0}: Error finding container b66494c48119740bc6edfb285e35655e715735720f72b6bb4c3bc84ad9b7f5c0: Status 404 returned error can't find the container with id b66494c48119740bc6edfb285e35655e715735720f72b6bb4c3bc84ad9b7f5c0 Feb 19 03:26:17.051996 master-0 kubenswrapper[33867]: I0219 03:26:17.051919 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74cd99cf84-cpf69" event={"ID":"89199d30-e6ec-4748-80d2-9edaf1b3dfc9","Type":"ContainerStarted","Data":"6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e"} Feb 19 03:26:17.070608 master-0 kubenswrapper[33867]: I0219 03:26:17.067820 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerStarted","Data":"266e24246c059d07473e58e23e2e87821a0feae386cac298b824a0fa5596f7d8"} Feb 19 03:26:17.087690 master-0 kubenswrapper[33867]: I0219 03:26:17.087402 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-74cd99cf84-cpf69" podStartSLOduration=2.144563807 podStartE2EDuration="10.087360443s" podCreationTimestamp="2026-02-19 03:26:07 +0000 UTC" firstStartedPulling="2026-02-19 03:26:08.274631281 +0000 UTC m=+173.571301892" lastFinishedPulling="2026-02-19 03:26:16.217427917 +0000 UTC m=+181.514098528" observedRunningTime="2026-02-19 03:26:17.081138738 +0000 UTC m=+182.377809369" watchObservedRunningTime="2026-02-19 03:26:17.087360443 +0000 UTC m=+182.384031054" Feb 19 03:26:17.747749 master-0 kubenswrapper[33867]: I0219 03:26:17.747660 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:17.747749 master-0 kubenswrapper[33867]: I0219 03:26:17.747735 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:17.750421 master-0 kubenswrapper[33867]: I0219 03:26:17.750345 33867 patch_prober.go:28] interesting pod/console-74cd99cf84-cpf69 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Feb 19 03:26:17.750518 master-0 kubenswrapper[33867]: I0219 03:26:17.750458 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-74cd99cf84-cpf69" podUID="89199d30-e6ec-4748-80d2-9edaf1b3dfc9" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Feb 19 03:26:18.092424 master-0 kubenswrapper[33867]: I0219 03:26:18.092214 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-677f65b5df-p8qrj" event={"ID":"e376877b-f5c6-4a73-a959-cde9c466252a","Type":"ContainerStarted","Data":"fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c"} Feb 19 03:26:18.097547 master-0 kubenswrapper[33867]: I0219 03:26:18.097499 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6df4d685bd-g7b8m_943c09ec-a2d2-40df-bbdc-351a30b33d79/telemeter-client/1.log" Feb 19 03:26:18.098993 master-0 kubenswrapper[33867]: I0219 03:26:18.098923 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" event={"ID":"943c09ec-a2d2-40df-bbdc-351a30b33d79","Type":"ContainerStarted","Data":"19ebca3d141cfa2aece84fb4f3189d06f8ab6ccdd98e81902a03a5b31f210703"} Feb 19 03:26:18.102404 master-0 kubenswrapper[33867]: I0219 03:26:18.102337 33867 generic.go:334] "Generic (PLEG): container finished" podID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerID="42fa54296f3057643f9869589a455931250b9b867a2f11939f14f7b69040d6fe" exitCode=0 Feb 19 03:26:18.102547 master-0 kubenswrapper[33867]: I0219 03:26:18.102436 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerDied","Data":"42fa54296f3057643f9869589a455931250b9b867a2f11939f14f7b69040d6fe"} Feb 19 03:26:18.102547 master-0 kubenswrapper[33867]: I0219 03:26:18.102520 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerStarted","Data":"b66494c48119740bc6edfb285e35655e715735720f72b6bb4c3bc84ad9b7f5c0"} Feb 19 03:26:18.104189 master-0 kubenswrapper[33867]: I0219 03:26:18.104135 33867 generic.go:334] "Generic (PLEG): container finished" podID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerID="00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83" exitCode=0 Feb 19 03:26:18.105528 master-0 kubenswrapper[33867]: I0219 03:26:18.105495 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerDied","Data":"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83"} Feb 19 03:26:18.119955 master-0 kubenswrapper[33867]: I0219 03:26:18.118618 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-677f65b5df-p8qrj" podStartSLOduration=8.118589402 podStartE2EDuration="8.118589402s" podCreationTimestamp="2026-02-19 03:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:26:18.116583516 +0000 UTC m=+183.413254147" watchObservedRunningTime="2026-02-19 03:26:18.118589402 +0000 UTC m=+183.415260023" Feb 19 03:26:18.157663 master-0 kubenswrapper[33867]: I0219 03:26:18.156307 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-6df4d685bd-g7b8m" podStartSLOduration=76.111070018 podStartE2EDuration="1m22.156283319s" podCreationTimestamp="2026-02-19 03:24:56 +0000 UTC" firstStartedPulling="2026-02-19 03:25:54.523341453 +0000 UTC m=+159.820012064" lastFinishedPulling="2026-02-19 03:26:00.568554754 +0000 UTC m=+165.865225365" observedRunningTime="2026-02-19 03:26:18.146231947 +0000 UTC m=+183.442902578" watchObservedRunningTime="2026-02-19 03:26:18.156283319 +0000 UTC m=+183.452953920" Feb 19 03:26:18.874181 master-0 kubenswrapper[33867]: I0219 03:26:18.874128 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_6ad84c80-367e-4ca3-a439-dfff469bc349/installer/0.log" Feb 19 03:26:18.874527 master-0 kubenswrapper[33867]: I0219 03:26:18.874213 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:26:18.936964 master-0 kubenswrapper[33867]: I0219 03:26:18.936822 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-var-lock\") pod \"6ad84c80-367e-4ca3-a439-dfff469bc349\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " Feb 19 03:26:18.938653 master-0 kubenswrapper[33867]: I0219 03:26:18.937220 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-var-lock" (OuterVolumeSpecName: "var-lock") pod "6ad84c80-367e-4ca3-a439-dfff469bc349" (UID: "6ad84c80-367e-4ca3-a439-dfff469bc349"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:26:18.938653 master-0 kubenswrapper[33867]: I0219 03:26:18.937505 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-kubelet-dir\") pod \"6ad84c80-367e-4ca3-a439-dfff469bc349\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " Feb 19 03:26:18.938653 master-0 kubenswrapper[33867]: I0219 03:26:18.937575 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6ad84c80-367e-4ca3-a439-dfff469bc349" (UID: "6ad84c80-367e-4ca3-a439-dfff469bc349"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:26:18.938653 master-0 kubenswrapper[33867]: I0219 03:26:18.937594 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ad84c80-367e-4ca3-a439-dfff469bc349-kube-api-access\") pod \"6ad84c80-367e-4ca3-a439-dfff469bc349\" (UID: \"6ad84c80-367e-4ca3-a439-dfff469bc349\") " Feb 19 03:26:18.938854 master-0 kubenswrapper[33867]: I0219 03:26:18.938814 33867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:18.938854 master-0 kubenswrapper[33867]: I0219 03:26:18.938834 33867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6ad84c80-367e-4ca3-a439-dfff469bc349-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:18.945348 master-0 kubenswrapper[33867]: I0219 03:26:18.942402 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad84c80-367e-4ca3-a439-dfff469bc349-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6ad84c80-367e-4ca3-a439-dfff469bc349" (UID: "6ad84c80-367e-4ca3-a439-dfff469bc349"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:26:19.041905 master-0 kubenswrapper[33867]: I0219 03:26:19.040774 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ad84c80-367e-4ca3-a439-dfff469bc349-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:19.121410 master-0 kubenswrapper[33867]: I0219 03:26:19.119829 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_6ad84c80-367e-4ca3-a439-dfff469bc349/installer/0.log" Feb 19 03:26:19.121410 master-0 kubenswrapper[33867]: I0219 03:26:19.119922 33867 generic.go:334] "Generic (PLEG): container finished" podID="6ad84c80-367e-4ca3-a439-dfff469bc349" containerID="51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674" exitCode=1 Feb 19 03:26:19.122404 master-0 kubenswrapper[33867]: I0219 03:26:19.121846 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6ad84c80-367e-4ca3-a439-dfff469bc349","Type":"ContainerDied","Data":"51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674"} Feb 19 03:26:19.122404 master-0 kubenswrapper[33867]: I0219 03:26:19.121944 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6ad84c80-367e-4ca3-a439-dfff469bc349","Type":"ContainerDied","Data":"59a8c5b35a2b9e301f72a375c4be72ed3623a6ff868a877409bc90712c534f7e"} Feb 19 03:26:19.122404 master-0 kubenswrapper[33867]: I0219 03:26:19.121967 33867 scope.go:117] "RemoveContainer" containerID="51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674" Feb 19 03:26:19.122635 master-0 kubenswrapper[33867]: I0219 03:26:19.122500 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 19 03:26:19.158385 master-0 kubenswrapper[33867]: I0219 03:26:19.157941 33867 scope.go:117] "RemoveContainer" containerID="51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674" Feb 19 03:26:19.159037 master-0 kubenswrapper[33867]: E0219 03:26:19.158969 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674\": container with ID starting with 51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674 not found: ID does not exist" containerID="51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674" Feb 19 03:26:19.159113 master-0 kubenswrapper[33867]: I0219 03:26:19.159046 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674"} err="failed to get container status \"51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674\": rpc error: code = NotFound desc = could not find container \"51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674\": container with ID starting with 51fe24401ae8bfbaaa513fc03528f166e9d5c090d1eaac133ab06374e9cdb674 not found: ID does not exist" Feb 19 03:26:19.177831 master-0 kubenswrapper[33867]: I0219 03:26:19.177518 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 19 03:26:19.181947 master-0 kubenswrapper[33867]: I0219 03:26:19.180625 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 19 03:26:20.966812 master-0 kubenswrapper[33867]: I0219 03:26:20.966412 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ad84c80-367e-4ca3-a439-dfff469bc349" path="/var/lib/kubelet/pods/6ad84c80-367e-4ca3-a439-dfff469bc349/volumes" Feb 19 03:26:21.103478 master-0 kubenswrapper[33867]: I0219 03:26:21.103282 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:21.103478 master-0 kubenswrapper[33867]: I0219 03:26:21.103357 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:26:21.105198 master-0 kubenswrapper[33867]: I0219 03:26:21.105088 33867 patch_prober.go:28] interesting pod/console-677f65b5df-p8qrj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Feb 19 03:26:21.105344 master-0 kubenswrapper[33867]: I0219 03:26:21.105292 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-677f65b5df-p8qrj" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Feb 19 03:26:21.150483 master-0 kubenswrapper[33867]: I0219 03:26:21.150344 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerStarted","Data":"e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956"} Feb 19 03:26:21.301608 master-0 kubenswrapper[33867]: I0219 03:26:21.301526 33867 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-console/console-74cd99cf84-cpf69"] Feb 19 03:26:21.341664 master-0 kubenswrapper[33867]: I0219 03:26:21.341581 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b9ffbb744-xzn8r"] Feb 19 03:26:21.342140 master-0 kubenswrapper[33867]: E0219 03:26:21.342121 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad84c80-367e-4ca3-a439-dfff469bc349" containerName="installer" Feb 19 03:26:21.342287 master-0 kubenswrapper[33867]: I0219 03:26:21.342147 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad84c80-367e-4ca3-a439-dfff469bc349" containerName="installer" Feb 19 03:26:21.342540 master-0 kubenswrapper[33867]: I0219 03:26:21.342505 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad84c80-367e-4ca3-a439-dfff469bc349" containerName="installer" Feb 19 03:26:21.343414 master-0 kubenswrapper[33867]: I0219 03:26:21.343385 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.363165 master-0 kubenswrapper[33867]: I0219 03:26:21.361835 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b9ffbb744-xzn8r"] Feb 19 03:26:21.399161 master-0 kubenswrapper[33867]: I0219 03:26:21.399079 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-trusted-ca-bundle\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.399837 master-0 kubenswrapper[33867]: I0219 03:26:21.399811 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-oauth-serving-cert\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.400295 master-0 kubenswrapper[33867]: I0219 03:26:21.400182 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-oauth-config\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.400466 master-0 kubenswrapper[33867]: I0219 03:26:21.400381 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-console-config\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.400544 master-0 kubenswrapper[33867]: I0219 03:26:21.400467 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-serving-cert\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.400544 master-0 kubenswrapper[33867]: I0219 03:26:21.400539 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp8bb\" (UniqueName: \"kubernetes.io/projected/a34af636-294e-431e-b676-6d059a537a5b-kube-api-access-kp8bb\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.400693 master-0 kubenswrapper[33867]: I0219 03:26:21.400635 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-service-ca\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.505324 master-0 kubenswrapper[33867]: I0219 03:26:21.503069 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-trusted-ca-bundle\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.505324 master-0 kubenswrapper[33867]: I0219 03:26:21.503154 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-oauth-serving-cert\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.505324 master-0 kubenswrapper[33867]: I0219 03:26:21.503196 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-oauth-config\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.505324 master-0 kubenswrapper[33867]: I0219 03:26:21.503217 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-console-config\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.505324 master-0 kubenswrapper[33867]: I0219 03:26:21.503242 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-serving-cert\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.505324 master-0 kubenswrapper[33867]: I0219 03:26:21.503282 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp8bb\" (UniqueName: \"kubernetes.io/projected/a34af636-294e-431e-b676-6d059a537a5b-kube-api-access-kp8bb\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.505324 master-0 kubenswrapper[33867]: I0219 03:26:21.503322 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-service-ca\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " 
pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.505324 master-0 kubenswrapper[33867]: I0219 03:26:21.504310 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-service-ca\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.505324 master-0 kubenswrapper[33867]: I0219 03:26:21.505216 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-console-config\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.506359 master-0 kubenswrapper[33867]: I0219 03:26:21.506103 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-oauth-serving-cert\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.507885 master-0 kubenswrapper[33867]: I0219 03:26:21.507799 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-trusted-ca-bundle\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.509381 master-0 kubenswrapper[33867]: I0219 03:26:21.509347 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-oauth-config\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.509528 master-0 kubenswrapper[33867]: I0219 03:26:21.509477 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-serving-cert\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.531904 master-0 kubenswrapper[33867]: I0219 03:26:21.531851 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp8bb\" (UniqueName: \"kubernetes.io/projected/a34af636-294e-431e-b676-6d059a537a5b-kube-api-access-kp8bb\") pod \"console-6b9ffbb744-xzn8r\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:21.670521 master-0 kubenswrapper[33867]: I0219 03:26:21.670458 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:23.205281 master-0 kubenswrapper[33867]: I0219 03:26:23.201609 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerStarted","Data":"5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38"} Feb 19 03:26:23.205281 master-0 kubenswrapper[33867]: I0219 03:26:23.201691 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerStarted","Data":"bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678"} Feb 19 03:26:23.210023 master-0 kubenswrapper[33867]: I0219 03:26:23.209970 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerStarted","Data":"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe"} Feb 19 03:26:23.210143 master-0 kubenswrapper[33867]: I0219 03:26:23.210062 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerStarted","Data":"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb"} Feb 19 03:26:23.227223 master-0 kubenswrapper[33867]: I0219 03:26:23.226855 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b9ffbb744-xzn8r"] Feb 19 03:26:24.224250 master-0 kubenswrapper[33867]: I0219 03:26:24.223312 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b9ffbb744-xzn8r" event={"ID":"a34af636-294e-431e-b676-6d059a537a5b","Type":"ContainerStarted","Data":"1507f3301a489c41c7f28d7a3a64ce252dad3d07f1f5f8d438e4f999db94eda9"} Feb 19 03:26:24.224250 master-0 kubenswrapper[33867]: I0219 03:26:24.223390 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b9ffbb744-xzn8r" event={"ID":"a34af636-294e-431e-b676-6d059a537a5b","Type":"ContainerStarted","Data":"99cbd10267dd864d9c718d3c2ab7213cd2b03aa0ddcfe5b7a47cc10995b035b7"} Feb 19 03:26:24.228308 master-0 kubenswrapper[33867]: I0219 03:26:24.227731 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerStarted","Data":"179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df"} Feb 19 03:26:24.228308 master-0 kubenswrapper[33867]: I0219 03:26:24.227805 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerStarted","Data":"7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14"} Feb 19 03:26:24.228308 master-0 kubenswrapper[33867]: I0219 03:26:24.227820 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerStarted","Data":"92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb"} Feb 19 03:26:24.233388 master-0 kubenswrapper[33867]: I0219 03:26:24.233362 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerStarted","Data":"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee"} Feb 19 03:26:24.233486 master-0 kubenswrapper[33867]: I0219 03:26:24.233473 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerStarted","Data":"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe"} Feb 19 03:26:24.233566 master-0 kubenswrapper[33867]: I0219 03:26:24.233550 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerStarted","Data":"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0"} Feb 19 03:26:24.233671 master-0 kubenswrapper[33867]: I0219 03:26:24.233646 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerStarted","Data":"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c"} Feb 19 03:26:24.254046 master-0 kubenswrapper[33867]: I0219 03:26:24.253425 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b9ffbb744-xzn8r" podStartSLOduration=3.253368214 podStartE2EDuration="3.253368214s" podCreationTimestamp="2026-02-19 03:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:26:24.25106805 +0000 UTC m=+189.547738711" watchObservedRunningTime="2026-02-19 03:26:24.253368214 +0000 UTC m=+189.550038825" Feb 19 03:26:24.303043 master-0 kubenswrapper[33867]: I0219 03:26:24.302914 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=6.6865491729999995 podStartE2EDuration="11.302886983s" podCreationTimestamp="2026-02-19 03:26:13 +0000 UTC" firstStartedPulling="2026-02-19 03:26:18.107729907 +0000 UTC m=+183.404400518" lastFinishedPulling="2026-02-19 03:26:22.724067717 +0000 UTC m=+188.020738328" observedRunningTime="2026-02-19 03:26:24.302561134 +0000 UTC m=+189.599231765" watchObservedRunningTime="2026-02-19 03:26:24.302886983 +0000 UTC m=+189.599557594" Feb 19 03:26:24.344229 master-0 kubenswrapper[33867]: I0219 03:26:24.344119 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=11.995452583 podStartE2EDuration="14.344081068s" podCreationTimestamp="2026-02-19 03:26:10 +0000 UTC" firstStartedPulling="2026-02-19 03:26:18.104108416 +0000 UTC m=+183.400779027" lastFinishedPulling="2026-02-19 03:26:20.452736901 +0000 UTC m=+185.749407512" observedRunningTime="2026-02-19 03:26:24.341170417 +0000 UTC m=+189.637841038" watchObservedRunningTime="2026-02-19 03:26:24.344081068 +0000 UTC m=+189.640751689" Feb 19 03:26:25.440791 master-0 kubenswrapper[33867]: I0219 03:26:25.440691 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" podUID="15a3667e-608f-493b-8315-b1358b65b462" containerName="oauth-openshift" containerID="cri-o://f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117" gracePeriod=15 Feb 19 03:26:25.961150 master-0 kubenswrapper[33867]: I0219 03:26:25.961073 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:26:26.008976 master-0 kubenswrapper[33867]: I0219 03:26:26.008864 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-55d5bff6-v7lq6"] Feb 19 03:26:26.009357 master-0 kubenswrapper[33867]: E0219 03:26:26.009332 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a3667e-608f-493b-8315-b1358b65b462" containerName="oauth-openshift" Feb 19 03:26:26.009357 master-0 kubenswrapper[33867]: I0219 03:26:26.009354 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a3667e-608f-493b-8315-b1358b65b462" containerName="oauth-openshift" Feb 19 03:26:26.009678 master-0 kubenswrapper[33867]: I0219 03:26:26.009648 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a3667e-608f-493b-8315-b1358b65b462" containerName="oauth-openshift" Feb 19 03:26:26.010424 master-0 kubenswrapper[33867]: I0219 03:26:26.010396 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.026472 master-0 kubenswrapper[33867]: I0219 03:26:26.018284 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55d5bff6-v7lq6"] Feb 19 03:26:26.104312 master-0 kubenswrapper[33867]: I0219 03:26:26.104238 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6b6t\" (UniqueName: \"kubernetes.io/projected/15a3667e-608f-493b-8315-b1358b65b462-kube-api-access-b6b6t\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.104670 master-0 kubenswrapper[33867]: I0219 03:26:26.104325 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-serving-cert\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.104670 master-0 kubenswrapper[33867]: I0219 03:26:26.104371 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-cliconfig\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.104670 master-0 kubenswrapper[33867]: I0219 03:26:26.104405 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-router-certs\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.104670 master-0 kubenswrapper[33867]: I0219 03:26:26.104432 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15a3667e-608f-493b-8315-b1358b65b462-audit-dir\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.104670 master-0 kubenswrapper[33867]: I0219 03:26:26.104596 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-service-ca\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.104670 master-0 kubenswrapper[33867]: I0219 03:26:26.104659 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-login\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.104930 master-0 kubenswrapper[33867]: I0219 03:26:26.104713 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-ocp-branding-template\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.104930 master-0 kubenswrapper[33867]: I0219 03:26:26.104748 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-trusted-ca-bundle\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.104930 master-0 kubenswrapper[33867]: I0219 03:26:26.104863 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-provider-selection\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.104930 master-0 kubenswrapper[33867]: I0219 03:26:26.104906 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-error\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.105194 master-0 kubenswrapper[33867]: I0219 03:26:26.104937 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-session\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.105194 master-0 kubenswrapper[33867]: I0219 03:26:26.104964 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-audit-policies\") pod \"15a3667e-608f-493b-8315-b1358b65b462\" (UID: \"15a3667e-608f-493b-8315-b1358b65b462\") " Feb 19 03:26:26.105772 master-0 kubenswrapper[33867]: I0219 03:26:26.105746 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:26.106797 master-0 kubenswrapper[33867]: I0219 03:26:26.106700 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:26.107239 master-0 kubenswrapper[33867]: I0219 03:26:26.106826 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.107350 master-0 kubenswrapper[33867]: I0219 03:26:26.107286 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.107472 master-0 kubenswrapper[33867]: I0219 03:26:26.107164 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:26.107557 master-0 kubenswrapper[33867]: I0219 03:26:26.107397 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-router-certs\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.107700 master-0 kubenswrapper[33867]: I0219 03:26:26.107579 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15a3667e-608f-493b-8315-b1358b65b462-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:26:26.107700 master-0 kubenswrapper[33867]: I0219 03:26:26.107671 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-dir\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.108102 master-0 kubenswrapper[33867]: I0219 03:26:26.107858 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-login\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.108102 master-0 kubenswrapper[33867]: I0219 03:26:26.107983 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.108102 master-0 kubenswrapper[33867]: I0219 03:26:26.108068 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjxqd\" (UniqueName: \"kubernetes.io/projected/f100341b-d0b3-4c39-825a-f0809140ea2f-kube-api-access-kjxqd\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.108499 master-0 kubenswrapper[33867]: I0219 03:26:26.108150 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-service-ca\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.108499 master-0 kubenswrapper[33867]: I0219 03:26:26.108342 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.108499 master-0 kubenswrapper[33867]: I0219 03:26:26.108386 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-session\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.108499 master-0 kubenswrapper[33867]: I0219 03:26:26.108478 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.108637 master-0 kubenswrapper[33867]: I0219 03:26:26.108573 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-policies\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.108637 master-0 kubenswrapper[33867]: I0219 03:26:26.108605 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-error\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.109716 master-0 kubenswrapper[33867]: I0219 03:26:26.108809 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.109716 master-0 kubenswrapper[33867]: I0219 03:26:26.108845 33867 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15a3667e-608f-493b-8315-b1358b65b462-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.109716 master-0 kubenswrapper[33867]: I0219 03:26:26.108872 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.109716 master-0 kubenswrapper[33867]: I0219 03:26:26.108895 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.109716 master-0 kubenswrapper[33867]: I0219 03:26:26.109585 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:26.110405 master-0 kubenswrapper[33867]: I0219 03:26:26.110363 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:26.111126 master-0 kubenswrapper[33867]: I0219 03:26:26.111055 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:26.111195 master-0 kubenswrapper[33867]: I0219 03:26:26.111114 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a3667e-608f-493b-8315-b1358b65b462-kube-api-access-b6b6t" (OuterVolumeSpecName: "kube-api-access-b6b6t") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "kube-api-access-b6b6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:26:26.111195 master-0 kubenswrapper[33867]: I0219 03:26:26.111142 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:26.111284 master-0 kubenswrapper[33867]: I0219 03:26:26.111180 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:26.111440 master-0 kubenswrapper[33867]: I0219 03:26:26.111398 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:26.114201 master-0 kubenswrapper[33867]: I0219 03:26:26.114166 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:26.118448 master-0 kubenswrapper[33867]: I0219 03:26:26.118380 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "15a3667e-608f-493b-8315-b1358b65b462" (UID: "15a3667e-608f-493b-8315-b1358b65b462"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:26.211085 master-0 kubenswrapper[33867]: I0219 03:26:26.210908 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.211085 master-0 kubenswrapper[33867]: I0219 03:26:26.210978 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-session\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.211498 master-0 kubenswrapper[33867]: I0219 03:26:26.211446 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.212689 master-0 kubenswrapper[33867]: I0219 03:26:26.212320 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-policies\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.212689 master-0 kubenswrapper[33867]: I0219 03:26:26.212377 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-error\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.212689 master-0 kubenswrapper[33867]: I0219 03:26:26.212464 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.212927 master-0 kubenswrapper[33867]: I0219 03:26:26.212447 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.212974 master-0 kubenswrapper[33867]: I0219 03:26:26.212938 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.213010 master-0 kubenswrapper[33867]: I0219 03:26:26.212977 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-router-certs\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.213010 master-0 kubenswrapper[33867]: I0219 03:26:26.212990 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-policies\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.213083 master-0 kubenswrapper[33867]: I0219 03:26:26.213009 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-dir\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.213219 master-0 kubenswrapper[33867]: I0219 03:26:26.213191 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-dir\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.213390 master-0 kubenswrapper[33867]: I0219 03:26:26.213349 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-login\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.213448 master-0 kubenswrapper[33867]: I0219 03:26:26.213393 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.213448 master-0 kubenswrapper[33867]: I0219 03:26:26.213420 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjxqd\" (UniqueName: \"kubernetes.io/projected/f100341b-d0b3-4c39-825a-f0809140ea2f-kube-api-access-kjxqd\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.213517 master-0 kubenswrapper[33867]: I0219 03:26:26.213489 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-service-ca\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: 
\"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.213661 master-0 kubenswrapper[33867]: I0219 03:26:26.213627 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6b6t\" (UniqueName: \"kubernetes.io/projected/15a3667e-608f-493b-8315-b1358b65b462-kube-api-access-b6b6t\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.213703 master-0 kubenswrapper[33867]: I0219 03:26:26.213660 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.213703 master-0 kubenswrapper[33867]: I0219 03:26:26.213679 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.213703 master-0 kubenswrapper[33867]: I0219 03:26:26.213697 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.214632 master-0 kubenswrapper[33867]: I0219 03:26:26.214584 33867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15a3667e-608f-493b-8315-b1358b65b462-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.214632 master-0 kubenswrapper[33867]: I0219 03:26:26.214624 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.214793 master-0 kubenswrapper[33867]: I0219 03:26:26.214643 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.214793 master-0 kubenswrapper[33867]: I0219 03:26:26.214659 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.214793 master-0 kubenswrapper[33867]: I0219 03:26:26.214674 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/15a3667e-608f-493b-8315-b1358b65b462-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:26.214915 master-0 kubenswrapper[33867]: I0219 03:26:26.214864 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-service-ca\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.215292 master-0 kubenswrapper[33867]: I0219 03:26:26.215224 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.216116 master-0 kubenswrapper[33867]: I0219 03:26:26.216082 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-session\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.217328 master-0 kubenswrapper[33867]: I0219 03:26:26.217275 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.217402 master-0 kubenswrapper[33867]: I0219 03:26:26.217310 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-error\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.218521 master-0 kubenswrapper[33867]: I0219 03:26:26.218472 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-login\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.218745 master-0 kubenswrapper[33867]: I0219 03:26:26.218712 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-router-certs\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.219746 master-0 kubenswrapper[33867]: I0219 03:26:26.219599 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.219746 master-0 kubenswrapper[33867]: I0219 03:26:26.219624 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.237338 master-0 kubenswrapper[33867]: I0219 03:26:26.236430 33867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kjxqd\" (UniqueName: \"kubernetes.io/projected/f100341b-d0b3-4c39-825a-f0809140ea2f-kube-api-access-kjxqd\") pod \"oauth-openshift-55d5bff6-v7lq6\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.259201 master-0 kubenswrapper[33867]: I0219 03:26:26.259136 33867 generic.go:334] "Generic (PLEG): container finished" podID="15a3667e-608f-493b-8315-b1358b65b462" containerID="f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117" exitCode=0 Feb 19 03:26:26.259736 master-0 kubenswrapper[33867]: I0219 03:26:26.259247 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" Feb 19 03:26:26.259962 master-0 kubenswrapper[33867]: I0219 03:26:26.259284 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" event={"ID":"15a3667e-608f-493b-8315-b1358b65b462","Type":"ContainerDied","Data":"f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117"} Feb 19 03:26:26.260022 master-0 kubenswrapper[33867]: I0219 03:26:26.259990 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f58cc6f64-dchzh" event={"ID":"15a3667e-608f-493b-8315-b1358b65b462","Type":"ContainerDied","Data":"5acf693df00afe95996b30a5b0da4d673657acd415a117cc3d939228c657ac05"} Feb 19 03:26:26.260083 master-0 kubenswrapper[33867]: I0219 03:26:26.260025 33867 scope.go:117] "RemoveContainer" containerID="f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117" Feb 19 03:26:26.286808 master-0 kubenswrapper[33867]: I0219 03:26:26.286696 33867 scope.go:117] "RemoveContainer" containerID="f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117" Feb 19 03:26:26.287507 master-0 kubenswrapper[33867]: E0219 03:26:26.287458 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117\": container with ID starting with f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117 not found: ID does not exist" containerID="f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117" Feb 19 03:26:26.287585 master-0 kubenswrapper[33867]: I0219 03:26:26.287509 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117"} err="failed to get container status \"f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117\": rpc error: code = NotFound desc = could not find container \"f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117\": container with ID starting with f3f8da5eeb92f438dac8f62747feb2463c632a73e09a393d26de7a877c2db117 not found: ID does not exist" Feb 19 03:26:26.304045 master-0 kubenswrapper[33867]: I0219 03:26:26.303770 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6f58cc6f64-dchzh"] Feb 19 03:26:26.312006 master-0 kubenswrapper[33867]: I0219 03:26:26.311909 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-6f58cc6f64-dchzh"] Feb 19 03:26:26.359784 master-0 kubenswrapper[33867]: I0219 03:26:26.359700 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:26.822968 master-0 kubenswrapper[33867]: I0219 03:26:26.822897 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55d5bff6-v7lq6"] Feb 19 03:26:26.840972 master-0 kubenswrapper[33867]: W0219 03:26:26.840901 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf100341b_d0b3_4c39_825a_f0809140ea2f.slice/crio-ae9e789495cc0b710e28a9bbdf163ccc676b5e8d90c3cb81d19fe079892bd6a6 WatchSource:0}: Error finding container ae9e789495cc0b710e28a9bbdf163ccc676b5e8d90c3cb81d19fe079892bd6a6: Status 404 returned error can't find the container with id ae9e789495cc0b710e28a9bbdf163ccc676b5e8d90c3cb81d19fe079892bd6a6 Feb 19 03:26:26.971594 master-0 kubenswrapper[33867]: I0219 03:26:26.971218 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a3667e-608f-493b-8315-b1358b65b462" path="/var/lib/kubelet/pods/15a3667e-608f-493b-8315-b1358b65b462/volumes" Feb 19 03:26:27.284511 master-0 kubenswrapper[33867]: I0219 03:26:27.281671 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" event={"ID":"f100341b-d0b3-4c39-825a-f0809140ea2f","Type":"ContainerStarted","Data":"eb0c42ad39911a0ebf2220c2357c042709a6072020941a6451955d9968717981"} Feb 19 03:26:27.284511 master-0 kubenswrapper[33867]: I0219 03:26:27.281758 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" event={"ID":"f100341b-d0b3-4c39-825a-f0809140ea2f","Type":"ContainerStarted","Data":"ae9e789495cc0b710e28a9bbdf163ccc676b5e8d90c3cb81d19fe079892bd6a6"} Feb 19 03:26:27.322842 master-0 kubenswrapper[33867]: I0219 03:26:27.319107 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" podStartSLOduration=10.319079308 podStartE2EDuration="10.319079308s" podCreationTimestamp="2026-02-19 03:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:26:27.317515684 +0000 UTC m=+192.614186305" watchObservedRunningTime="2026-02-19 03:26:27.319079308 +0000 UTC m=+192.615749919" Feb 19 03:26:28.290528 master-0 kubenswrapper[33867]: I0219 03:26:28.290462 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:28.298016 master-0 kubenswrapper[33867]: I0219 03:26:28.297948 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:29.121388 master-0 kubenswrapper[33867]: I0219 03:26:29.119518 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:26:30.929288 master-0 kubenswrapper[33867]: I0219 03:26:30.929171 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-55d5bff6-v7lq6"] Feb 19 03:26:31.018015 master-0 kubenswrapper[33867]: I0219 03:26:31.016474 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 19 03:26:31.103890 master-0 kubenswrapper[33867]: I0219 03:26:31.103831 33867 patch_prober.go:28] interesting 
pod/console-677f65b5df-p8qrj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Feb 19 03:26:31.104211 master-0 kubenswrapper[33867]: I0219 03:26:31.103911 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-677f65b5df-p8qrj" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Feb 19 03:26:31.671228 master-0 kubenswrapper[33867]: I0219 03:26:31.671155 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:31.671228 master-0 kubenswrapper[33867]: I0219 03:26:31.671229 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:26:31.673216 master-0 kubenswrapper[33867]: I0219 03:26:31.672930 33867 patch_prober.go:28] interesting pod/console-6b9ffbb744-xzn8r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" start-of-body= Feb 19 03:26:31.673335 master-0 kubenswrapper[33867]: I0219 03:26:31.673292 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b9ffbb744-xzn8r" podUID="a34af636-294e-431e-b676-6d059a537a5b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" Feb 19 03:26:36.730733 master-0 kubenswrapper[33867]: I0219 03:26:36.730637 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 19 03:26:36.731695 master-0 kubenswrapper[33867]: I0219 03:26:36.731658 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-5-master-0" podUID="17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212" containerName="installer" containerID="cri-o://01e081145a81d0517b2b4107d7aa20c20e0006874c27f3f32d55fdb78573efca" gracePeriod=30 Feb 19 03:26:41.104649 master-0 kubenswrapper[33867]: I0219 03:26:41.104576 33867 patch_prober.go:28] interesting pod/console-677f65b5df-p8qrj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Feb 19 03:26:41.105385 master-0 kubenswrapper[33867]: I0219 03:26:41.104650 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-677f65b5df-p8qrj" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Feb 19 03:26:41.322834 master-0 kubenswrapper[33867]: I0219 03:26:41.322775 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 19 03:26:41.323664 master-0 kubenswrapper[33867]: I0219 03:26:41.323645 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:26:41.398702 master-0 kubenswrapper[33867]: I0219 03:26:41.398642 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 19 03:26:41.425720 master-0 kubenswrapper[33867]: I0219 03:26:41.425614 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e065576-b204-4579-8c25-1bb46cc88738-kube-api-access\") pod \"installer-6-master-0\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:26:41.425720 master-0 kubenswrapper[33867]: I0219 03:26:41.425707 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-var-lock\") pod \"installer-6-master-0\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:26:41.426294 master-0 kubenswrapper[33867]: I0219 03:26:41.425855 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:26:41.528586 master-0 kubenswrapper[33867]: I0219 03:26:41.528485 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e065576-b204-4579-8c25-1bb46cc88738-kube-api-access\") pod \"installer-6-master-0\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:26:41.529793 master-0 kubenswrapper[33867]: I0219 03:26:41.528630 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-var-lock\") pod \"installer-6-master-0\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:26:41.529793 master-0 kubenswrapper[33867]: I0219 03:26:41.528694 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:26:41.529793 master-0 kubenswrapper[33867]: I0219 03:26:41.528840 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:26:41.529793 master-0 kubenswrapper[33867]: I0219 03:26:41.528832 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-var-lock\") pod \"installer-6-master-0\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:26:41.547207 master-0 kubenswrapper[33867]: I0219 03:26:41.547160 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e065576-b204-4579-8c25-1bb46cc88738-kube-api-access\") pod \"installer-6-master-0\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:26:41.649342 master-0 kubenswrapper[33867]: I0219 03:26:41.649151 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:26:41.672164 master-0 kubenswrapper[33867]: I0219 03:26:41.672094 33867 patch_prober.go:28] interesting pod/console-6b9ffbb744-xzn8r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" start-of-body= Feb 19 03:26:41.672350 master-0 kubenswrapper[33867]: I0219 03:26:41.672191 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b9ffbb744-xzn8r" podUID="a34af636-294e-431e-b676-6d059a537a5b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" Feb 19 03:26:45.340021 master-0 kubenswrapper[33867]: I0219 03:26:45.339929 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 19 03:26:45.456538 master-0 kubenswrapper[33867]: I0219 03:26:45.456460 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"8e065576-b204-4579-8c25-1bb46cc88738","Type":"ContainerStarted","Data":"690f64c14bf6a1ebdb73f0846930429e010aef3aae20dd58c6c75d7ef87420df"} Feb 19 03:26:46.365889 master-0 kubenswrapper[33867]: I0219 03:26:46.365681 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-74cd99cf84-cpf69" podUID="89199d30-e6ec-4748-80d2-9edaf1b3dfc9" containerName="console" containerID="cri-o://6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e" gracePeriod=15 Feb 19 03:26:46.469450 master-0 kubenswrapper[33867]: I0219 03:26:46.469379 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"8e065576-b204-4579-8c25-1bb46cc88738","Type":"ContainerStarted","Data":"b0dcb1ce3f83cdc7e987fd3620c293c92a96041a404cf4029410b007b2c1b26d"} Feb 19 03:26:46.565628 master-0 kubenswrapper[33867]: I0219 03:26:46.564480 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=5.564432642 podStartE2EDuration="5.564432642s" podCreationTimestamp="2026-02-19 03:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:26:46.562832107 +0000 UTC m=+211.859502718" watchObservedRunningTime="2026-02-19 03:26:46.564432642 +0000 UTC m=+211.861103253" Feb 19 03:26:46.936697 master-0 kubenswrapper[33867]: I0219 03:26:46.936608 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-74cd99cf84-cpf69_89199d30-e6ec-4748-80d2-9edaf1b3dfc9/console/0.log" Feb 19 03:26:46.937146 master-0 kubenswrapper[33867]: I0219 03:26:46.936732 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:46.977473 master-0 kubenswrapper[33867]: I0219 03:26:46.977404 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-oauth-config\") pod \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " Feb 19 03:26:46.977473 master-0 kubenswrapper[33867]: I0219 03:26:46.977485 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-serving-cert\") pod \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " Feb 19 03:26:46.977923 master-0 kubenswrapper[33867]: I0219 03:26:46.977525 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-service-ca\") pod \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " Feb 19 03:26:46.977923 master-0 kubenswrapper[33867]: I0219 03:26:46.977595 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-oauth-serving-cert\") pod \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " Feb 19 03:26:46.977923 master-0 kubenswrapper[33867]: I0219 03:26:46.977669 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-config\") pod \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " Feb 19 03:26:46.977923 master-0 kubenswrapper[33867]: I0219 03:26:46.977815 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hr9v\" (UniqueName: \"kubernetes.io/projected/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-kube-api-access-7hr9v\") pod \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\" (UID: \"89199d30-e6ec-4748-80d2-9edaf1b3dfc9\") " Feb 19 03:26:46.984658 master-0 kubenswrapper[33867]: I0219 03:26:46.984547 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "89199d30-e6ec-4748-80d2-9edaf1b3dfc9" (UID: "89199d30-e6ec-4748-80d2-9edaf1b3dfc9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:46.985836 master-0 kubenswrapper[33867]: I0219 03:26:46.985782 33867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:46.986868 master-0 kubenswrapper[33867]: I0219 03:26:46.986828 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-config" (OuterVolumeSpecName: "console-config") pod "89199d30-e6ec-4748-80d2-9edaf1b3dfc9" (UID: "89199d30-e6ec-4748-80d2-9edaf1b3dfc9"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:46.995002 master-0 kubenswrapper[33867]: I0219 03:26:46.993997 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-service-ca" (OuterVolumeSpecName: "service-ca") pod "89199d30-e6ec-4748-80d2-9edaf1b3dfc9" (UID: "89199d30-e6ec-4748-80d2-9edaf1b3dfc9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:46.997848 master-0 kubenswrapper[33867]: I0219 03:26:46.997782 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "89199d30-e6ec-4748-80d2-9edaf1b3dfc9" (UID: "89199d30-e6ec-4748-80d2-9edaf1b3dfc9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:46.998758 master-0 kubenswrapper[33867]: I0219 03:26:46.998656 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-kube-api-access-7hr9v" (OuterVolumeSpecName: "kube-api-access-7hr9v") pod "89199d30-e6ec-4748-80d2-9edaf1b3dfc9" (UID: "89199d30-e6ec-4748-80d2-9edaf1b3dfc9"). InnerVolumeSpecName "kube-api-access-7hr9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:26:46.999752 master-0 kubenswrapper[33867]: I0219 03:26:46.999706 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "89199d30-e6ec-4748-80d2-9edaf1b3dfc9" (UID: "89199d30-e6ec-4748-80d2-9edaf1b3dfc9"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:47.087493 master-0 kubenswrapper[33867]: I0219 03:26:47.087404 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hr9v\" (UniqueName: \"kubernetes.io/projected/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-kube-api-access-7hr9v\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:47.087493 master-0 kubenswrapper[33867]: I0219 03:26:47.087457 33867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:47.087493 master-0 kubenswrapper[33867]: I0219 03:26:47.087470 33867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:47.087493 master-0 kubenswrapper[33867]: I0219 03:26:47.087482 33867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:47.087493 master-0 kubenswrapper[33867]: I0219 03:26:47.087493 33867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/89199d30-e6ec-4748-80d2-9edaf1b3dfc9-console-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:47.480432 master-0 kubenswrapper[33867]: I0219 03:26:47.480339 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-74cd99cf84-cpf69_89199d30-e6ec-4748-80d2-9edaf1b3dfc9/console/0.log" Feb 19 03:26:47.480432 master-0 kubenswrapper[33867]: I0219 03:26:47.480421 33867 generic.go:334] "Generic (PLEG): container finished" podID="89199d30-e6ec-4748-80d2-9edaf1b3dfc9" containerID="6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e" exitCode=2 Feb 19 03:26:47.481366 master-0 kubenswrapper[33867]: I0219 03:26:47.480519 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74cd99cf84-cpf69" event={"ID":"89199d30-e6ec-4748-80d2-9edaf1b3dfc9","Type":"ContainerDied","Data":"6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e"} Feb 19 03:26:47.481366 master-0 kubenswrapper[33867]: I0219 03:26:47.480550 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74cd99cf84-cpf69" Feb 19 03:26:47.481366 master-0 kubenswrapper[33867]: I0219 03:26:47.480610 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74cd99cf84-cpf69" event={"ID":"89199d30-e6ec-4748-80d2-9edaf1b3dfc9","Type":"ContainerDied","Data":"dbcbfe4c8cf4477f3e3755e5c50e43f5c7c4102882f492a6d47199930140b3e6"} Feb 19 03:26:47.481366 master-0 kubenswrapper[33867]: I0219 03:26:47.480661 33867 scope.go:117] "RemoveContainer" containerID="6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e" Feb 19 03:26:47.483870 master-0 kubenswrapper[33867]: I0219 03:26:47.483787 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-bdf7d" event={"ID":"6505205d-23d4-4c99-83ac-e82d298a2805","Type":"ContainerStarted","Data":"ae369a54c6a3bfd0411f1ad9f715a7ba403041d3b51fcd2fc24616fd98c5d71b"} Feb 19 03:26:47.484367 master-0 kubenswrapper[33867]: I0219 03:26:47.484335 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-955b69498-bdf7d" Feb 19 03:26:47.486662 master-0 kubenswrapper[33867]: I0219 03:26:47.486617 33867 patch_prober.go:28] interesting pod/downloads-955b69498-bdf7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" start-of-body= Feb 19 03:26:47.486737 master-0 kubenswrapper[33867]: I0219 03:26:47.486681 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-bdf7d" podUID="6505205d-23d4-4c99-83ac-e82d298a2805" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" Feb 19 03:26:47.504335 master-0 kubenswrapper[33867]: I0219 03:26:47.504280 33867 scope.go:117] "RemoveContainer" containerID="6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e" Feb 19 03:26:47.505570 master-0 kubenswrapper[33867]: E0219 03:26:47.505524 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e\": container with ID starting with 6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e not found: ID does not exist" containerID="6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e" Feb 19 03:26:47.509093 master-0 kubenswrapper[33867]: I0219 03:26:47.505594 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e"} err="failed to get container status \"6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e\": rpc error: code = NotFound desc = could not find container \"6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e\": container with ID starting with 6518440e3f9b00d83427eba68b353ed8e0d657c3e4c1cdc9db96853b12e7da2e not found: ID does not exist" Feb 19 03:26:47.510600 master-0 kubenswrapper[33867]: I0219 03:26:47.510527 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-955b69498-bdf7d" podStartSLOduration=2.740006626 podStartE2EDuration="48.510500074s" podCreationTimestamp="2026-02-19 03:25:59 +0000 UTC" firstStartedPulling="2026-02-19 03:26:00.954992121 +0000 UTC m=+166.251662742" 
lastFinishedPulling="2026-02-19 03:26:46.725485579 +0000 UTC m=+212.022156190" observedRunningTime="2026-02-19 03:26:47.50821466 +0000 UTC m=+212.804885281" watchObservedRunningTime="2026-02-19 03:26:47.510500074 +0000 UTC m=+212.807170685" Feb 19 03:26:47.532206 master-0 kubenswrapper[33867]: I0219 03:26:47.532114 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-74cd99cf84-cpf69"] Feb 19 03:26:47.539952 master-0 kubenswrapper[33867]: I0219 03:26:47.539889 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-74cd99cf84-cpf69"] Feb 19 03:26:48.494590 master-0 kubenswrapper[33867]: I0219 03:26:48.494491 33867 patch_prober.go:28] interesting pod/downloads-955b69498-bdf7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" start-of-body= Feb 19 03:26:48.495527 master-0 kubenswrapper[33867]: I0219 03:26:48.494628 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-bdf7d" podUID="6505205d-23d4-4c99-83ac-e82d298a2805" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" Feb 19 03:26:48.968834 master-0 kubenswrapper[33867]: I0219 03:26:48.968735 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89199d30-e6ec-4748-80d2-9edaf1b3dfc9" path="/var/lib/kubelet/pods/89199d30-e6ec-4748-80d2-9edaf1b3dfc9/volumes" Feb 19 03:26:50.128723 master-0 kubenswrapper[33867]: I0219 03:26:50.128641 33867 patch_prober.go:28] interesting pod/downloads-955b69498-bdf7d container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" start-of-body= Feb 19 03:26:50.129469 master-0 kubenswrapper[33867]: I0219 03:26:50.128752 33867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-955b69498-bdf7d" podUID="6505205d-23d4-4c99-83ac-e82d298a2805" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" Feb 19 03:26:50.129469 master-0 kubenswrapper[33867]: I0219 03:26:50.128839 33867 patch_prober.go:28] interesting pod/downloads-955b69498-bdf7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" start-of-body= Feb 19 03:26:50.129469 master-0 kubenswrapper[33867]: I0219 03:26:50.128940 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-bdf7d" podUID="6505205d-23d4-4c99-83ac-e82d298a2805" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" Feb 19 03:26:51.104092 master-0 kubenswrapper[33867]: I0219 03:26:51.103991 33867 patch_prober.go:28] interesting pod/console-677f65b5df-p8qrj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Feb 19 03:26:51.104092 master-0 kubenswrapper[33867]: I0219 03:26:51.104088 33867 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-677f65b5df-p8qrj" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Feb 19 03:26:51.672172 master-0 kubenswrapper[33867]: I0219 03:26:51.672090 33867 patch_prober.go:28] interesting pod/console-6b9ffbb744-xzn8r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" start-of-body= Feb 19 03:26:51.673345 master-0 kubenswrapper[33867]: I0219 03:26:51.672203 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b9ffbb744-xzn8r" podUID="a34af636-294e-431e-b676-6d059a537a5b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" Feb 19 03:26:56.350641 master-0 kubenswrapper[33867]: I0219 03:26:56.350510 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" podUID="f100341b-d0b3-4c39-825a-f0809140ea2f" containerName="oauth-openshift" containerID="cri-o://eb0c42ad39911a0ebf2220c2357c042709a6072020941a6451955d9968717981" gracePeriod=15 Feb 19 03:26:56.360913 master-0 kubenswrapper[33867]: I0219 03:26:56.360848 33867 patch_prober.go:28] interesting pod/oauth-openshift-55d5bff6-v7lq6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.106:6443/healthz\": dial tcp 10.128.0.106:6443: connect: connection refused" start-of-body= Feb 19 03:26:56.361046 master-0 kubenswrapper[33867]: I0219 03:26:56.360936 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" podUID="f100341b-d0b3-4c39-825a-f0809140ea2f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.106:6443/healthz\": dial tcp 10.128.0.106:6443: connect: connection refused" Feb 19 03:26:56.567897 master-0 kubenswrapper[33867]: I0219 03:26:56.567819 33867 generic.go:334] "Generic (PLEG): container finished" podID="f100341b-d0b3-4c39-825a-f0809140ea2f" containerID="eb0c42ad39911a0ebf2220c2357c042709a6072020941a6451955d9968717981" exitCode=0 Feb 19 03:26:56.567897 master-0 kubenswrapper[33867]: I0219 03:26:56.567879 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" event={"ID":"f100341b-d0b3-4c39-825a-f0809140ea2f","Type":"ContainerDied","Data":"eb0c42ad39911a0ebf2220c2357c042709a6072020941a6451955d9968717981"} Feb 19 03:26:56.837937 master-0 kubenswrapper[33867]: I0219 03:26:56.837854 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:56.879627 master-0 kubenswrapper[33867]: I0219 03:26:56.879500 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-router-certs\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.879627 master-0 kubenswrapper[33867]: I0219 03:26:56.879565 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-session\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.879627 master-0 kubenswrapper[33867]: I0219 03:26:56.879595 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-error\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.879627 master-0 kubenswrapper[33867]: I0219 03:26:56.879629 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-ocp-branding-template\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.880193 master-0 kubenswrapper[33867]: I0219 03:26:56.879690 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-login\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.880193 master-0 kubenswrapper[33867]: I0219 03:26:56.879714 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-trusted-ca-bundle\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.880193 master-0 kubenswrapper[33867]: I0219 03:26:56.879752 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-provider-selection\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.880193 master-0 kubenswrapper[33867]: I0219 03:26:56.879784 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-policies\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.880193 master-0 kubenswrapper[33867]: I0219 03:26:56.879835 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-service-ca\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.880193 master-0 kubenswrapper[33867]: I0219 03:26:56.879889 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-dir\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.880193 master-0 kubenswrapper[33867]: I0219 03:26:56.879930 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjxqd\" (UniqueName: \"kubernetes.io/projected/f100341b-d0b3-4c39-825a-f0809140ea2f-kube-api-access-kjxqd\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.880468 master-0 kubenswrapper[33867]: I0219 03:26:56.880342 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-cliconfig\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.880468 master-0 kubenswrapper[33867]: I0219 03:26:56.880373 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-serving-cert\") pod \"f100341b-d0b3-4c39-825a-f0809140ea2f\" (UID: \"f100341b-d0b3-4c39-825a-f0809140ea2f\") " Feb 19 03:26:56.880938 master-0 kubenswrapper[33867]: I0219 03:26:56.880753 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:56.881062 master-0 kubenswrapper[33867]: I0219 03:26:56.881023 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.881133 master-0 kubenswrapper[33867]: I0219 03:26:56.881100 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:26:56.881688 master-0 kubenswrapper[33867]: I0219 03:26:56.881647 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:56.882587 master-0 kubenswrapper[33867]: I0219 03:26:56.882129 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:56.882587 master-0 kubenswrapper[33867]: I0219 03:26:56.882538 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:26:56.888957 master-0 kubenswrapper[33867]: I0219 03:26:56.888864 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f100341b-d0b3-4c39-825a-f0809140ea2f-kube-api-access-kjxqd" (OuterVolumeSpecName: "kube-api-access-kjxqd") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "kube-api-access-kjxqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:26:56.890697 master-0 kubenswrapper[33867]: I0219 03:26:56.890575 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:56.890786 master-0 kubenswrapper[33867]: I0219 03:26:56.890679 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:56.890861 master-0 kubenswrapper[33867]: I0219 03:26:56.890816 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:56.890981 master-0 kubenswrapper[33867]: I0219 03:26:56.890957 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:56.891039 master-0 kubenswrapper[33867]: I0219 03:26:56.890989 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:56.891272 master-0 kubenswrapper[33867]: I0219 03:26:56.891207 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:56.893290 master-0 kubenswrapper[33867]: I0219 03:26:56.893220 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f100341b-d0b3-4c39-825a-f0809140ea2f" (UID: "f100341b-d0b3-4c39-825a-f0809140ea2f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:26:56.896387 master-0 kubenswrapper[33867]: I0219 03:26:56.896326 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-cc89c88f8-mm225"] Feb 19 03:26:56.896892 master-0 kubenswrapper[33867]: E0219 03:26:56.896828 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89199d30-e6ec-4748-80d2-9edaf1b3dfc9" containerName="console" Feb 19 03:26:56.896892 master-0 kubenswrapper[33867]: I0219 03:26:56.896852 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="89199d30-e6ec-4748-80d2-9edaf1b3dfc9" containerName="console" Feb 19 03:26:56.896892 master-0 kubenswrapper[33867]: E0219 03:26:56.896870 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f100341b-d0b3-4c39-825a-f0809140ea2f" containerName="oauth-openshift" Feb 19 03:26:56.896892 master-0 kubenswrapper[33867]: I0219 03:26:56.896880 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f100341b-d0b3-4c39-825a-f0809140ea2f" containerName="oauth-openshift" Feb 19 03:26:56.897386 master-0 kubenswrapper[33867]: I0219 03:26:56.897357 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="89199d30-e6ec-4748-80d2-9edaf1b3dfc9" containerName="console" Feb 19 03:26:56.897478 master-0 kubenswrapper[33867]: I0219 03:26:56.897399 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f100341b-d0b3-4c39-825a-f0809140ea2f" containerName="oauth-openshift" Feb 19 03:26:56.898468 master-0 kubenswrapper[33867]: I0219 03:26:56.898379 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.915364 master-0 kubenswrapper[33867]: I0219 03:26:56.915303 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cc89c88f8-mm225"] Feb 19 03:26:56.982653 master-0 kubenswrapper[33867]: I0219 03:26:56.982620 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsb9\" (UniqueName: \"kubernetes.io/projected/ba929f18-b86c-4404-9448-cabb59ddc4cc-kube-api-access-vbsb9\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.982822 master-0 kubenswrapper[33867]: I0219 03:26:56.982792 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.982908 master-0 kubenswrapper[33867]: I0219 03:26:56.982896 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba929f18-b86c-4404-9448-cabb59ddc4cc-audit-dir\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.983003 master-0 kubenswrapper[33867]: I0219 03:26:56.982991 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-user-template-error\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.983093 master-0 kubenswrapper[33867]: I0219 03:26:56.983080 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.983170 master-0 kubenswrapper[33867]: I0219 03:26:56.983158 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.983277 master-0 kubenswrapper[33867]: I0219 03:26:56.983249 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " 
pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.983369 master-0 kubenswrapper[33867]: I0219 03:26:56.983358 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-audit-policies\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.983454 master-0 kubenswrapper[33867]: I0219 03:26:56.983441 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.983539 master-0 kubenswrapper[33867]: I0219 03:26:56.983528 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-user-template-login\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.983623 master-0 kubenswrapper[33867]: I0219 03:26:56.983612 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-session\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.983697 master-0 kubenswrapper[33867]: I0219 03:26:56.983685 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.983810 master-0 kubenswrapper[33867]: I0219 03:26:56.983796 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:56.983989 master-0 kubenswrapper[33867]: I0219 03:26:56.983975 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.984069 master-0 kubenswrapper[33867]: I0219 03:26:56.984059 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.984134 
master-0 kubenswrapper[33867]: I0219 03:26:56.984124 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.984209 master-0 kubenswrapper[33867]: I0219 03:26:56.984199 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.984302 master-0 kubenswrapper[33867]: I0219 03:26:56.984291 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.984375 master-0 kubenswrapper[33867]: I0219 03:26:56.984363 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.984454 master-0 kubenswrapper[33867]: I0219 03:26:56.984443 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.984510 master-0 kubenswrapper[33867]: I0219 03:26:56.984500 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.984585 master-0 kubenswrapper[33867]: I0219 03:26:56.984575 33867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.984643 master-0 kubenswrapper[33867]: I0219 03:26:56.984633 33867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f100341b-d0b3-4c39-825a-f0809140ea2f-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.984714 master-0 kubenswrapper[33867]: I0219 03:26:56.984704 33867 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f100341b-d0b3-4c39-825a-f0809140ea2f-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:56.984770 master-0 kubenswrapper[33867]: I0219 03:26:56.984761 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjxqd\" (UniqueName: \"kubernetes.io/projected/f100341b-d0b3-4c39-825a-f0809140ea2f-kube-api-access-kjxqd\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:57.086689 master-0 kubenswrapper[33867]: I0219 03:26:57.086518 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-audit-policies\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.086689 
master-0 kubenswrapper[33867]: I0219 03:26:57.086597 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.086689 master-0 kubenswrapper[33867]: I0219 03:26:57.086639 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-user-template-login\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.086689 master-0 kubenswrapper[33867]: I0219 03:26:57.086663 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-session\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.086689 master-0 kubenswrapper[33867]: I0219 03:26:57.086679 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.088075 master-0 kubenswrapper[33867]: I0219 03:26:57.087986 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.088235 master-0 kubenswrapper[33867]: I0219 03:26:57.088201 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbsb9\" (UniqueName: \"kubernetes.io/projected/ba929f18-b86c-4404-9448-cabb59ddc4cc-kube-api-access-vbsb9\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.088354 master-0 kubenswrapper[33867]: I0219 03:26:57.088321 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.088405 master-0 kubenswrapper[33867]: I0219 03:26:57.088376 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba929f18-b86c-4404-9448-cabb59ddc4cc-audit-dir\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " 
pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.088475 master-0 kubenswrapper[33867]: I0219 03:26:57.088452 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-user-template-error\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.088679 master-0 kubenswrapper[33867]: I0219 03:26:57.088636 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba929f18-b86c-4404-9448-cabb59ddc4cc-audit-dir\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.088750 master-0 kubenswrapper[33867]: I0219 03:26:57.088722 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.089068 master-0 kubenswrapper[33867]: I0219 03:26:57.089008 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.089144 master-0 kubenswrapper[33867]: I0219 03:26:57.089104 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.089474 master-0 kubenswrapper[33867]: I0219 03:26:57.089446 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-audit-policies\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.089715 master-0 kubenswrapper[33867]: I0219 03:26:57.089688 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.090012 master-0 kubenswrapper[33867]: I0219 03:26:57.089963 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: 
\"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.090100 master-0 kubenswrapper[33867]: I0219 03:26:57.090069 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.090891 master-0 kubenswrapper[33867]: I0219 03:26:57.090849 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-user-template-login\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.091590 master-0 kubenswrapper[33867]: I0219 03:26:57.091548 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-session\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.091757 master-0 kubenswrapper[33867]: I0219 03:26:57.091731 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.091828 master-0 kubenswrapper[33867]: I0219 03:26:57.091770 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.091991 master-0 kubenswrapper[33867]: I0219 03:26:57.091943 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.093219 master-0 kubenswrapper[33867]: I0219 03:26:57.093159 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-user-template-error\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.093927 master-0 kubenswrapper[33867]: I0219 03:26:57.093892 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/ba929f18-b86c-4404-9448-cabb59ddc4cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.107111 master-0 kubenswrapper[33867]: I0219 03:26:57.107071 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbsb9\" (UniqueName: \"kubernetes.io/projected/ba929f18-b86c-4404-9448-cabb59ddc4cc-kube-api-access-vbsb9\") pod \"oauth-openshift-cc89c88f8-mm225\" (UID: \"ba929f18-b86c-4404-9448-cabb59ddc4cc\") " pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.269281 master-0 kubenswrapper[33867]: I0219 03:26:57.269187 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:57.578978 master-0 kubenswrapper[33867]: I0219 03:26:57.578905 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" event={"ID":"f100341b-d0b3-4c39-825a-f0809140ea2f","Type":"ContainerDied","Data":"ae9e789495cc0b710e28a9bbdf163ccc676b5e8d90c3cb81d19fe079892bd6a6"} Feb 19 03:26:57.578978 master-0 kubenswrapper[33867]: I0219 03:26:57.578990 33867 scope.go:117] "RemoveContainer" containerID="eb0c42ad39911a0ebf2220c2357c042709a6072020941a6451955d9968717981" Feb 19 03:26:57.580124 master-0 kubenswrapper[33867]: I0219 03:26:57.579303 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55d5bff6-v7lq6" Feb 19 03:26:57.604335 master-0 kubenswrapper[33867]: I0219 03:26:57.604006 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-55d5bff6-v7lq6"] Feb 19 03:26:57.614874 master-0 kubenswrapper[33867]: I0219 03:26:57.614809 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-55d5bff6-v7lq6"] Feb 19 03:26:57.671706 master-0 kubenswrapper[33867]: I0219 03:26:57.671612 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cc89c88f8-mm225"] Feb 19 03:26:58.415146 master-0 kubenswrapper[33867]: E0219 03:26:58.415052 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf100341b_d0b3_4c39_825a_f0809140ea2f.slice/crio-conmon-eb0c42ad39911a0ebf2220c2357c042709a6072020941a6451955d9968717981.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf100341b_d0b3_4c39_825a_f0809140ea2f.slice/crio-ae9e789495cc0b710e28a9bbdf163ccc676b5e8d90c3cb81d19fe079892bd6a6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod17fbcb8d_b3b4_4d0b_bf13_1c2fdd78e212.slice/crio-01e081145a81d0517b2b4107d7aa20c20e0006874c27f3f32d55fdb78573efca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf100341b_d0b3_4c39_825a_f0809140ea2f.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-pod17fbcb8d_b3b4_4d0b_bf13_1c2fdd78e212.slice/crio-conmon-01e081145a81d0517b2b4107d7aa20c20e0006874c27f3f32d55fdb78573efca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf100341b_d0b3_4c39_825a_f0809140ea2f.slice/crio-eb0c42ad39911a0ebf2220c2357c042709a6072020941a6451955d9968717981.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:26:58.415580 master-0 kubenswrapper[33867]: E0219 03:26:58.415174 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod17fbcb8d_b3b4_4d0b_bf13_1c2fdd78e212.slice/crio-01e081145a81d0517b2b4107d7aa20c20e0006874c27f3f32d55fdb78573efca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf100341b_d0b3_4c39_825a_f0809140ea2f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod17fbcb8d_b3b4_4d0b_bf13_1c2fdd78e212.slice/crio-conmon-01e081145a81d0517b2b4107d7aa20c20e0006874c27f3f32d55fdb78573efca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf100341b_d0b3_4c39_825a_f0809140ea2f.slice/crio-ae9e789495cc0b710e28a9bbdf163ccc676b5e8d90c3cb81d19fe079892bd6a6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf100341b_d0b3_4c39_825a_f0809140ea2f.slice/crio-eb0c42ad39911a0ebf2220c2357c042709a6072020941a6451955d9968717981.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf100341b_d0b3_4c39_825a_f0809140ea2f.slice/crio-conmon-eb0c42ad39911a0ebf2220c2357c042709a6072020941a6451955d9968717981.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:26:58.416033 master-0 kubenswrapper[33867]: E0219 03:26:58.415977 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:26:58.588457 master-0 kubenswrapper[33867]: I0219 03:26:58.588218 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" event={"ID":"ba929f18-b86c-4404-9448-cabb59ddc4cc","Type":"ContainerStarted","Data":"a3d160893f6246aab55708c59919cc540a1da4ee1570eb47fc298daab6f629d7"} Feb 19 03:26:58.588457 master-0 kubenswrapper[33867]: I0219 03:26:58.588289 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" event={"ID":"ba929f18-b86c-4404-9448-cabb59ddc4cc","Type":"ContainerStarted","Data":"ff8c9acd8ff0c2a3a5cca7972fb1b2dec4d0be85562ef270a8b73489658096a6"} Feb 19 03:26:58.589492 master-0 kubenswrapper[33867]: I0219 03:26:58.589426 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:58.596008 master-0 kubenswrapper[33867]: I0219 03:26:58.593178 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212/installer/0.log" Feb 19 03:26:58.596008 master-0 kubenswrapper[33867]: I0219 03:26:58.593225 33867 generic.go:334] "Generic (PLEG): container finished" podID="17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212" containerID="01e081145a81d0517b2b4107d7aa20c20e0006874c27f3f32d55fdb78573efca" exitCode=1 Feb 19 03:26:58.596008 master-0 kubenswrapper[33867]: I0219 03:26:58.593295 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212","Type":"ContainerDied","Data":"01e081145a81d0517b2b4107d7aa20c20e0006874c27f3f32d55fdb78573efca"} Feb 19 03:26:58.596008 master-0 kubenswrapper[33867]: I0219 03:26:58.595049 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" Feb 19 03:26:58.621039 master-0 kubenswrapper[33867]: I0219 03:26:58.620960 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-cc89c88f8-mm225" podStartSLOduration=28.620936732 podStartE2EDuration="28.620936732s" podCreationTimestamp="2026-02-19 03:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:26:58.614381888 +0000 UTC m=+223.911052499" watchObservedRunningTime="2026-02-19 03:26:58.620936732 +0000 UTC m=+223.917607343" Feb 19 03:26:58.783326 master-0 kubenswrapper[33867]: I0219 03:26:58.783276 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212/installer/0.log" Feb 19 03:26:58.783550 master-0 kubenswrapper[33867]: I0219 03:26:58.783374 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:26:58.932954 master-0 kubenswrapper[33867]: I0219 03:26:58.932895 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kube-api-access\") pod \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " Feb 19 03:26:58.933484 master-0 kubenswrapper[33867]: I0219 03:26:58.933454 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-var-lock\") pod \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " Feb 19 03:26:58.933682 master-0 kubenswrapper[33867]: I0219 03:26:58.933626 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-var-lock" (OuterVolumeSpecName: "var-lock") pod "17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212" (UID: "17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:26:58.933865 master-0 kubenswrapper[33867]: I0219 03:26:58.933839 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kubelet-dir\") pod \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\" (UID: \"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212\") " Feb 19 03:26:58.934006 master-0 kubenswrapper[33867]: I0219 03:26:58.933975 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212" (UID: "17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:26:58.934405 master-0 kubenswrapper[33867]: I0219 03:26:58.934386 33867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:58.934506 master-0 kubenswrapper[33867]: I0219 03:26:58.934491 33867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:58.936015 master-0 kubenswrapper[33867]: I0219 03:26:58.935935 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212" (UID: "17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:26:58.968336 master-0 kubenswrapper[33867]: I0219 03:26:58.968144 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f100341b-d0b3-4c39-825a-f0809140ea2f" path="/var/lib/kubelet/pods/f100341b-d0b3-4c39-825a-f0809140ea2f/volumes" Feb 19 03:26:59.036101 master-0 kubenswrapper[33867]: I0219 03:26:59.036008 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:26:59.604093 master-0 kubenswrapper[33867]: I0219 03:26:59.604007 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212/installer/0.log" Feb 19 03:26:59.605018 master-0 kubenswrapper[33867]: I0219 03:26:59.604175 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212","Type":"ContainerDied","Data":"9d547a193ae77e4df446f997fc168a64a7acd13e67758d515525ea4214178214"} Feb 19 03:26:59.605018 master-0 kubenswrapper[33867]: I0219 03:26:59.604312 33867 scope.go:117] "RemoveContainer" containerID="01e081145a81d0517b2b4107d7aa20c20e0006874c27f3f32d55fdb78573efca" Feb 19 03:26:59.605018 master-0 kubenswrapper[33867]: I0219 03:26:59.604212 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 19 03:27:00.141275 master-0 kubenswrapper[33867]: I0219 03:27:00.141186 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-955b69498-bdf7d" Feb 19 03:27:00.472392 master-0 kubenswrapper[33867]: I0219 03:27:00.471786 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 19 03:27:00.851112 master-0 kubenswrapper[33867]: I0219 03:27:00.850884 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 19 03:27:00.966877 master-0 kubenswrapper[33867]: I0219 03:27:00.966792 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212" path="/var/lib/kubelet/pods/17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212/volumes" Feb 19 03:27:01.103473 master-0 kubenswrapper[33867]: I0219 03:27:01.103324 33867 patch_prober.go:28] interesting pod/console-677f65b5df-p8qrj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Feb 19 03:27:01.103873 master-0 kubenswrapper[33867]: I0219 03:27:01.103823 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-677f65b5df-p8qrj" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Feb 19 03:27:01.672289 master-0 kubenswrapper[33867]: I0219 03:27:01.672150 33867 patch_prober.go:28] interesting pod/console-6b9ffbb744-xzn8r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" start-of-body= Feb 19 03:27:01.672289 master-0 kubenswrapper[33867]: I0219 03:27:01.672248 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b9ffbb744-xzn8r" podUID="a34af636-294e-431e-b676-6d059a537a5b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" Feb 19 03:27:01.820799 master-0 kubenswrapper[33867]: I0219 03:27:01.820725 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 19 03:27:01.821172 master-0 kubenswrapper[33867]: I0219 03:27:01.820968 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-6-master-0" podUID="8e065576-b204-4579-8c25-1bb46cc88738" containerName="installer" containerID="cri-o://b0dcb1ce3f83cdc7e987fd3620c293c92a96041a404cf4029410b007b2c1b26d" gracePeriod=30 Feb 19 03:27:02.634580 master-0 kubenswrapper[33867]: I0219 03:27:02.634516 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-6-master-0_8e065576-b204-4579-8c25-1bb46cc88738/installer/0.log" Feb 19 03:27:02.634580 master-0 kubenswrapper[33867]: I0219 03:27:02.634585 33867 generic.go:334] "Generic (PLEG): container finished" podID="8e065576-b204-4579-8c25-1bb46cc88738" containerID="b0dcb1ce3f83cdc7e987fd3620c293c92a96041a404cf4029410b007b2c1b26d" exitCode=1 Feb 19 03:27:02.635461 master-0 kubenswrapper[33867]: I0219 03:27:02.634620 33867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"8e065576-b204-4579-8c25-1bb46cc88738","Type":"ContainerDied","Data":"b0dcb1ce3f83cdc7e987fd3620c293c92a96041a404cf4029410b007b2c1b26d"} Feb 19 03:27:03.279304 master-0 kubenswrapper[33867]: I0219 03:27:03.279227 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-6-master-0_8e065576-b204-4579-8c25-1bb46cc88738/installer/0.log" Feb 19 03:27:03.279481 master-0 kubenswrapper[33867]: I0219 03:27:03.279321 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:27:03.414695 master-0 kubenswrapper[33867]: I0219 03:27:03.414588 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-var-lock\") pod \"8e065576-b204-4579-8c25-1bb46cc88738\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " Feb 19 03:27:03.414695 master-0 kubenswrapper[33867]: I0219 03:27:03.414692 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e065576-b204-4579-8c25-1bb46cc88738-kube-api-access\") pod \"8e065576-b204-4579-8c25-1bb46cc88738\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " Feb 19 03:27:03.415027 master-0 kubenswrapper[33867]: I0219 03:27:03.414778 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-kubelet-dir\") pod \"8e065576-b204-4579-8c25-1bb46cc88738\" (UID: \"8e065576-b204-4579-8c25-1bb46cc88738\") " Feb 19 03:27:03.415027 master-0 kubenswrapper[33867]: I0219 03:27:03.414840 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-var-lock" (OuterVolumeSpecName: "var-lock") pod "8e065576-b204-4579-8c25-1bb46cc88738" (UID: "8e065576-b204-4579-8c25-1bb46cc88738"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:27:03.415027 master-0 kubenswrapper[33867]: I0219 03:27:03.414961 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8e065576-b204-4579-8c25-1bb46cc88738" (UID: "8e065576-b204-4579-8c25-1bb46cc88738"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:27:03.415408 master-0 kubenswrapper[33867]: I0219 03:27:03.415368 33867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:03.415408 master-0 kubenswrapper[33867]: I0219 03:27:03.415403 33867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e065576-b204-4579-8c25-1bb46cc88738-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:03.418287 master-0 kubenswrapper[33867]: I0219 03:27:03.418207 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e065576-b204-4579-8c25-1bb46cc88738-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8e065576-b204-4579-8c25-1bb46cc88738" (UID: "8e065576-b204-4579-8c25-1bb46cc88738"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:27:03.517662 master-0 kubenswrapper[33867]: I0219 03:27:03.517422 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e065576-b204-4579-8c25-1bb46cc88738-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:03.648626 master-0 kubenswrapper[33867]: I0219 03:27:03.648544 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-6-master-0_8e065576-b204-4579-8c25-1bb46cc88738/installer/0.log" Feb 19 03:27:03.649326 master-0 kubenswrapper[33867]: I0219 03:27:03.648673 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"8e065576-b204-4579-8c25-1bb46cc88738","Type":"ContainerDied","Data":"690f64c14bf6a1ebdb73f0846930429e010aef3aae20dd58c6c75d7ef87420df"} Feb 19 03:27:03.649326 master-0 kubenswrapper[33867]: I0219 03:27:03.648762 33867 scope.go:117] "RemoveContainer" containerID="b0dcb1ce3f83cdc7e987fd3620c293c92a96041a404cf4029410b007b2c1b26d" Feb 19 03:27:03.649326 master-0 kubenswrapper[33867]: I0219 03:27:03.648772 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 19 03:27:04.298592 master-0 kubenswrapper[33867]: I0219 03:27:04.298493 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 19 03:27:04.601247 master-0 kubenswrapper[33867]: I0219 03:27:04.600675 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 19 03:27:04.965275 master-0 kubenswrapper[33867]: I0219 03:27:04.964897 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e065576-b204-4579-8c25-1bb46cc88738" path="/var/lib/kubelet/pods/8e065576-b204-4579-8c25-1bb46cc88738/volumes" Feb 19 03:27:05.525423 master-0 kubenswrapper[33867]: I0219 03:27:05.525354 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"] Feb 19 03:27:05.526103 master-0 kubenswrapper[33867]: E0219 03:27:05.526076 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212" containerName="installer" Feb 19 03:27:05.526246 master-0 kubenswrapper[33867]: I0219 03:27:05.526226 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212" containerName="installer" Feb 19 03:27:05.526526 master-0 kubenswrapper[33867]: E0219 03:27:05.526494 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e065576-b204-4579-8c25-1bb46cc88738" containerName="installer" Feb 19 03:27:05.526715 master-0 kubenswrapper[33867]: I0219 03:27:05.526689 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e065576-b204-4579-8c25-1bb46cc88738" containerName="installer" Feb 19 03:27:05.527113 master-0 kubenswrapper[33867]: I0219 03:27:05.527090 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="17fbcb8d-b3b4-4d0b-bf13-1c2fdd78e212" containerName="installer" Feb 19 03:27:05.527326 master-0 kubenswrapper[33867]: I0219 03:27:05.527298 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e065576-b204-4579-8c25-1bb46cc88738" containerName="installer" Feb 19 03:27:05.528375 master-0 kubenswrapper[33867]: I0219 03:27:05.528336 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:05.531192 master-0 kubenswrapper[33867]: I0219 03:27:05.530772 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-l5ps6" Feb 19 03:27:05.531412 master-0 kubenswrapper[33867]: I0219 03:27:05.531343 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 03:27:05.538160 master-0 kubenswrapper[33867]: I0219 03:27:05.538093 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"] Feb 19 03:27:05.653537 master-0 kubenswrapper[33867]: I0219 03:27:05.653431 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7adce7b-f079-455e-8377-84c40cfc2557-kube-api-access\") pod \"installer-7-master-0\" (UID: \"a7adce7b-f079-455e-8377-84c40cfc2557\") " pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:05.653792 master-0 kubenswrapper[33867]: I0219 03:27:05.653590 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"a7adce7b-f079-455e-8377-84c40cfc2557\") " pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:05.653792 master-0 kubenswrapper[33867]: I0219 03:27:05.653730 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-var-lock\") pod \"installer-7-master-0\" (UID: \"a7adce7b-f079-455e-8377-84c40cfc2557\") " pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:05.754996 master-0 kubenswrapper[33867]: I0219 03:27:05.754898 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7adce7b-f079-455e-8377-84c40cfc2557-kube-api-access\") pod \"installer-7-master-0\" (UID: \"a7adce7b-f079-455e-8377-84c40cfc2557\") " pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:05.755312 master-0 kubenswrapper[33867]: I0219 03:27:05.755216 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"a7adce7b-f079-455e-8377-84c40cfc2557\") " pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:05.755473 master-0 kubenswrapper[33867]: I0219 03:27:05.755397 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"a7adce7b-f079-455e-8377-84c40cfc2557\") " pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:05.755933 master-0 kubenswrapper[33867]: I0219 03:27:05.755870 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-var-lock\") pod \"installer-7-master-0\" (UID: \"a7adce7b-f079-455e-8377-84c40cfc2557\") " pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:05.756075 master-0 kubenswrapper[33867]: I0219 
03:27:05.755959 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-var-lock\") pod \"installer-7-master-0\" (UID: \"a7adce7b-f079-455e-8377-84c40cfc2557\") " pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:05.762964 master-0 kubenswrapper[33867]: E0219 03:27:05.762898 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:27:05.784477 master-0 kubenswrapper[33867]: I0219 03:27:05.784308 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7adce7b-f079-455e-8377-84c40cfc2557-kube-api-access\") pod \"installer-7-master-0\" (UID: \"a7adce7b-f079-455e-8377-84c40cfc2557\") " pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:05.868247 master-0 kubenswrapper[33867]: I0219 03:27:05.868129 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:06.324362 master-0 kubenswrapper[33867]: I0219 03:27:06.324179 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"] Feb 19 03:27:06.335860 master-0 kubenswrapper[33867]: W0219 03:27:06.335791 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda7adce7b_f079_455e_8377_84c40cfc2557.slice/crio-aa9b7635b978d087c321dbe9c855a3ee684411dc0cb5c0bc375d13682ec26ab3 WatchSource:0}: Error finding container aa9b7635b978d087c321dbe9c855a3ee684411dc0cb5c0bc375d13682ec26ab3: Status 404 returned error can't find the container with id aa9b7635b978d087c321dbe9c855a3ee684411dc0cb5c0bc375d13682ec26ab3 Feb 19 03:27:06.676216 master-0 kubenswrapper[33867]: I0219 03:27:06.676125 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"a7adce7b-f079-455e-8377-84c40cfc2557","Type":"ContainerStarted","Data":"aa9b7635b978d087c321dbe9c855a3ee684411dc0cb5c0bc375d13682ec26ab3"} Feb 19 03:27:07.686423 master-0 kubenswrapper[33867]: I0219 03:27:07.686358 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"a7adce7b-f079-455e-8377-84c40cfc2557","Type":"ContainerStarted","Data":"fb7cb4ae99e8de98e0d3080008a103708808bdb27e92225dfed5168dfffc810f"} Feb 19 03:27:07.709501 master-0 kubenswrapper[33867]: I0219 03:27:07.709386 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-7-master-0" podStartSLOduration=2.7093672250000003 podStartE2EDuration="2.709367225s" podCreationTimestamp="2026-02-19 03:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:27:07.704372185 +0000 UTC m=+233.001042816" watchObservedRunningTime="2026-02-19 03:27:07.709367225 +0000 UTC m=+233.006037846" Feb 19 03:27:08.466606 master-0 kubenswrapper[33867]: E0219 03:27:08.466530 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": 
RecentStats: unable to find data in memory cache]" Feb 19 03:27:08.475211 master-0 kubenswrapper[33867]: E0219 03:27:08.475125 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:27:11.104185 master-0 kubenswrapper[33867]: I0219 03:27:11.104091 33867 patch_prober.go:28] interesting pod/console-677f65b5df-p8qrj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Feb 19 03:27:11.104185 master-0 kubenswrapper[33867]: I0219 03:27:11.104164 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-677f65b5df-p8qrj" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Feb 19 03:27:11.671315 master-0 kubenswrapper[33867]: I0219 03:27:11.671221 33867 patch_prober.go:28] interesting pod/console-6b9ffbb744-xzn8r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" start-of-body= Feb 19 03:27:11.671592 master-0 kubenswrapper[33867]: I0219 03:27:11.671334 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b9ffbb744-xzn8r" podUID="a34af636-294e-431e-b676-6d059a537a5b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" Feb 19 03:27:14.116063 master-0 kubenswrapper[33867]: I0219 03:27:14.115958 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:14.169340 master-0 kubenswrapper[33867]: I0219 03:27:14.169238 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:14.806410 master-0 kubenswrapper[33867]: I0219 03:27:14.806341 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:16.483722 master-0 kubenswrapper[33867]: I0219 03:27:16.483626 33867 scope.go:117] "RemoveContainer" containerID="2d484b07e94495906a9ef1c8f980fb107c93c95a40a52c0019224db82b51fc4d" Feb 19 03:27:18.728312 master-0 kubenswrapper[33867]: E0219 03:27:18.728230 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:27:19.453649 master-0 kubenswrapper[33867]: I0219 03:27:19.453581 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-677f65b5df-p8qrj"] Feb 19 03:27:19.517550 master-0 kubenswrapper[33867]: I0219 03:27:19.517483 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-586d7bfb96-dg45z"] Feb 19 03:27:19.518415 master-0 kubenswrapper[33867]: I0219 03:27:19.518393 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.533197 master-0 kubenswrapper[33867]: I0219 03:27:19.533127 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-586d7bfb96-dg45z"] Feb 19 03:27:19.608212 master-0 kubenswrapper[33867]: I0219 03:27:19.608143 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-trusted-ca-bundle\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.608212 master-0 kubenswrapper[33867]: I0219 03:27:19.608202 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-oauth-config\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.608212 master-0 kubenswrapper[33867]: I0219 03:27:19.608221 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-service-ca\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.608532 master-0 kubenswrapper[33867]: I0219 03:27:19.608397 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxvpc\" (UniqueName: \"kubernetes.io/projected/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-kube-api-access-nxvpc\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.608532 master-0 kubenswrapper[33867]: I0219 03:27:19.608482 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-serving-cert\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.608532 master-0 kubenswrapper[33867]: I0219 03:27:19.608527 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-config\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.608676 master-0 kubenswrapper[33867]: I0219 03:27:19.608649 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-oauth-serving-cert\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.711110 master-0 kubenswrapper[33867]: I0219 03:27:19.710915 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxvpc\" (UniqueName: \"kubernetes.io/projected/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-kube-api-access-nxvpc\") pod 
\"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.711110 master-0 kubenswrapper[33867]: I0219 03:27:19.711086 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-serving-cert\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.711394 master-0 kubenswrapper[33867]: I0219 03:27:19.711137 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-config\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.711394 master-0 kubenswrapper[33867]: I0219 03:27:19.711247 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-oauth-serving-cert\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.711460 master-0 kubenswrapper[33867]: I0219 03:27:19.711395 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-trusted-ca-bundle\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.711493 master-0 kubenswrapper[33867]: I0219 03:27:19.711460 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-oauth-config\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.711562 master-0 kubenswrapper[33867]: I0219 03:27:19.711514 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-service-ca\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.712612 master-0 kubenswrapper[33867]: I0219 03:27:19.712427 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-config\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.713121 master-0 kubenswrapper[33867]: I0219 03:27:19.713084 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-service-ca\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.713417 master-0 kubenswrapper[33867]: I0219 03:27:19.713367 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-oauth-serving-cert\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.714135 master-0 kubenswrapper[33867]: I0219 03:27:19.714082 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-trusted-ca-bundle\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.715162 master-0 kubenswrapper[33867]: I0219 03:27:19.715130 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-oauth-config\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.715357 master-0 kubenswrapper[33867]: I0219 03:27:19.715333 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-serving-cert\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.728023 master-0 kubenswrapper[33867]: I0219 03:27:19.727976 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxvpc\" (UniqueName: \"kubernetes.io/projected/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-kube-api-access-nxvpc\") pod \"console-586d7bfb96-dg45z\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:19.873105 master-0 kubenswrapper[33867]: I0219 03:27:19.873040 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:20.278857 master-0 kubenswrapper[33867]: I0219 03:27:20.278795 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-586d7bfb96-dg45z"] Feb 19 03:27:20.288701 master-0 kubenswrapper[33867]: W0219 03:27:20.288625 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod224edf60_62d9_4e76_b1d7_6e6b92e8ad00.slice/crio-5067c2b4ce99fee2e084e11a565d79b3b118cdecdc797d9e6a756ad9acf58d13 WatchSource:0}: Error finding container 5067c2b4ce99fee2e084e11a565d79b3b118cdecdc797d9e6a756ad9acf58d13: Status 404 returned error can't find the container with id 5067c2b4ce99fee2e084e11a565d79b3b118cdecdc797d9e6a756ad9acf58d13 Feb 19 03:27:20.623871 master-0 kubenswrapper[33867]: E0219 03:27:20.623717 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:27:20.667932 master-0 kubenswrapper[33867]: I0219 03:27:20.667826 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b9ffbb744-xzn8r"] Feb 19 03:27:20.697800 master-0 kubenswrapper[33867]: I0219 03:27:20.697734 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-84d59b44c5-nczqx"] Feb 19 03:27:20.698903 master-0 kubenswrapper[33867]: I0219 03:27:20.698858 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.724287 master-0 kubenswrapper[33867]: I0219 03:27:20.717992 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84d59b44c5-nczqx"] Feb 19 03:27:20.798975 master-0 kubenswrapper[33867]: I0219 03:27:20.798924 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-586d7bfb96-dg45z" event={"ID":"224edf60-62d9-4e76-b1d7-6e6b92e8ad00","Type":"ContainerStarted","Data":"87b6062a0c7f765f7173431f0d930f2e9ea39c02af2a56f8c2be9c07403ac211"} Feb 19 03:27:20.798975 master-0 kubenswrapper[33867]: I0219 03:27:20.798978 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-586d7bfb96-dg45z" event={"ID":"224edf60-62d9-4e76-b1d7-6e6b92e8ad00","Type":"ContainerStarted","Data":"5067c2b4ce99fee2e084e11a565d79b3b118cdecdc797d9e6a756ad9acf58d13"} Feb 19 03:27:20.818889 master-0 kubenswrapper[33867]: I0219 03:27:20.818791 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-586d7bfb96-dg45z" podStartSLOduration=1.818772915 podStartE2EDuration="1.818772915s" podCreationTimestamp="2026-02-19 03:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:27:20.814816405 +0000 UTC m=+246.111487016" watchObservedRunningTime="2026-02-19 03:27:20.818772915 +0000 UTC m=+246.115443526" Feb 19 03:27:20.831754 master-0 kubenswrapper[33867]: I0219 03:27:20.831680 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-trusted-ca-bundle\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " 
pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.831900 master-0 kubenswrapper[33867]: I0219 03:27:20.831762 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-oauth-serving-cert\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.831900 master-0 kubenswrapper[33867]: I0219 03:27:20.831791 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-service-ca\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.831900 master-0 kubenswrapper[33867]: I0219 03:27:20.831825 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-oauth-config\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.831900 master-0 kubenswrapper[33867]: I0219 03:27:20.831869 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-722wv\" (UniqueName: \"kubernetes.io/projected/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-kube-api-access-722wv\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.832027 master-0 kubenswrapper[33867]: I0219 03:27:20.831940 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-config\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.832027 master-0 kubenswrapper[33867]: I0219 03:27:20.831966 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-serving-cert\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.933438 master-0 kubenswrapper[33867]: I0219 03:27:20.933355 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-config\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.933923 master-0 kubenswrapper[33867]: I0219 03:27:20.933681 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-serving-cert\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.933923 master-0 kubenswrapper[33867]: I0219 03:27:20.933904 33867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-trusted-ca-bundle\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.934569 master-0 kubenswrapper[33867]: I0219 03:27:20.934006 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-oauth-serving-cert\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.934650 master-0 kubenswrapper[33867]: I0219 03:27:20.934622 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-service-ca\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.934650 master-0 kubenswrapper[33867]: I0219 03:27:20.934374 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-config\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.934879 master-0 kubenswrapper[33867]: I0219 03:27:20.934856 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-oauth-config\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.934976 master-0 kubenswrapper[33867]: I0219 03:27:20.934948 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-722wv\" (UniqueName: \"kubernetes.io/projected/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-kube-api-access-722wv\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.935374 master-0 kubenswrapper[33867]: I0219 03:27:20.935346 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-service-ca\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.935738 master-0 kubenswrapper[33867]: I0219 03:27:20.935678 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-oauth-serving-cert\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.936244 master-0 kubenswrapper[33867]: I0219 03:27:20.936193 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-trusted-ca-bundle\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.937897 master-0 
kubenswrapper[33867]: I0219 03:27:20.937862 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-serving-cert\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.939886 master-0 kubenswrapper[33867]: I0219 03:27:20.939774 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-oauth-config\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:20.952696 master-0 kubenswrapper[33867]: I0219 03:27:20.952643 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-722wv\" (UniqueName: \"kubernetes.io/projected/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-kube-api-access-722wv\") pod \"console-84d59b44c5-nczqx\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:21.070317 master-0 kubenswrapper[33867]: I0219 03:27:21.070214 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:21.472336 master-0 kubenswrapper[33867]: I0219 03:27:21.472183 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84d59b44c5-nczqx"] Feb 19 03:27:21.478943 master-0 kubenswrapper[33867]: W0219 03:27:21.478880 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6b0e9bf_7094_43f4_9904_aa27aa9d7b9a.slice/crio-f2194d72f0729162ac8f722d88a431e0bc7bdf989537e0d69b1698aca0af4aef WatchSource:0}: Error finding container f2194d72f0729162ac8f722d88a431e0bc7bdf989537e0d69b1698aca0af4aef: Status 404 returned error can't find the container with id f2194d72f0729162ac8f722d88a431e0bc7bdf989537e0d69b1698aca0af4aef Feb 19 03:27:21.806412 master-0 kubenswrapper[33867]: I0219 03:27:21.806240 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84d59b44c5-nczqx" event={"ID":"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a","Type":"ContainerStarted","Data":"525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3"} Feb 19 03:27:21.806412 master-0 kubenswrapper[33867]: I0219 03:27:21.806315 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84d59b44c5-nczqx" event={"ID":"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a","Type":"ContainerStarted","Data":"f2194d72f0729162ac8f722d88a431e0bc7bdf989537e0d69b1698aca0af4aef"} Feb 19 03:27:21.824038 master-0 kubenswrapper[33867]: I0219 03:27:21.823959 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-84d59b44c5-nczqx" podStartSLOduration=1.823938204 podStartE2EDuration="1.823938204s" podCreationTimestamp="2026-02-19 03:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:27:21.822616857 +0000 UTC m=+247.119287478" watchObservedRunningTime="2026-02-19 03:27:21.823938204 +0000 UTC m=+247.120608815" Feb 19 03:27:22.869045 master-0 kubenswrapper[33867]: I0219 03:27:22.868956 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] 
Feb 19 03:27:22.869610 master-0 kubenswrapper[33867]: I0219 03:27:22.869431 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="alertmanager" containerID="cri-o://e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956" gracePeriod=120 Feb 19 03:27:22.869610 master-0 kubenswrapper[33867]: I0219 03:27:22.869459 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy-metric" containerID="cri-o://7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14" gracePeriod=120 Feb 19 03:27:22.869610 master-0 kubenswrapper[33867]: I0219 03:27:22.869506 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy-web" containerID="cri-o://5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38" gracePeriod=120 Feb 19 03:27:22.869770 master-0 kubenswrapper[33867]: I0219 03:27:22.869616 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="prom-label-proxy" containerID="cri-o://179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df" gracePeriod=120 Feb 19 03:27:22.869770 master-0 kubenswrapper[33867]: I0219 03:27:22.869610 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="config-reloader" containerID="cri-o://bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678" gracePeriod=120 Feb 19 03:27:22.869770 master-0 kubenswrapper[33867]: I0219 03:27:22.869678 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy" containerID="cri-o://92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb" gracePeriod=120 Feb 19 03:27:23.476307 master-0 kubenswrapper[33867]: E0219 03:27:23.476139 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:27:23.833940 master-0 kubenswrapper[33867]: I0219 03:27:23.833797 33867 generic.go:334] "Generic (PLEG): container finished" podID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerID="179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df" exitCode=0 Feb 19 03:27:23.833940 master-0 kubenswrapper[33867]: I0219 03:27:23.833858 33867 generic.go:334] "Generic (PLEG): container finished" podID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerID="7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14" exitCode=0 Feb 19 03:27:23.833940 master-0 kubenswrapper[33867]: I0219 03:27:23.833866 33867 generic.go:334] "Generic (PLEG): container finished" podID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerID="92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb" exitCode=0 Feb 19 03:27:23.833940 master-0 kubenswrapper[33867]: I0219 03:27:23.833875 33867 generic.go:334] 
"Generic (PLEG): container finished" podID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerID="bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678" exitCode=0 Feb 19 03:27:23.833940 master-0 kubenswrapper[33867]: I0219 03:27:23.833882 33867 generic.go:334] "Generic (PLEG): container finished" podID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerID="e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956" exitCode=0 Feb 19 03:27:23.833940 master-0 kubenswrapper[33867]: I0219 03:27:23.833892 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerDied","Data":"179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df"} Feb 19 03:27:23.834390 master-0 kubenswrapper[33867]: I0219 03:27:23.833955 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerDied","Data":"7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14"} Feb 19 03:27:23.834390 master-0 kubenswrapper[33867]: I0219 03:27:23.833970 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerDied","Data":"92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb"} Feb 19 03:27:23.834390 master-0 kubenswrapper[33867]: I0219 03:27:23.833981 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerDied","Data":"bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678"} Feb 19 03:27:23.834390 master-0 kubenswrapper[33867]: I0219 03:27:23.833992 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerDied","Data":"e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956"} Feb 19 03:27:24.379478 master-0 kubenswrapper[33867]: I0219 03:27:24.378622 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:24.423444 master-0 kubenswrapper[33867]: I0219 03:27:24.423364 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-web-config\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.423444 master-0 kubenswrapper[33867]: I0219 03:27:24.423422 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-tls-assets\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.423444 master-0 kubenswrapper[33867]: I0219 03:27:24.423453 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-config-volume\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.423770 master-0 kubenswrapper[33867]: I0219 03:27:24.423472 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-metric\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.423770 master-0 kubenswrapper[33867]: I0219 03:27:24.423494 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.423770 master-0 kubenswrapper[33867]: I0219 03:27:24.423517 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-config-out\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.423770 master-0 kubenswrapper[33867]: I0219 03:27:24.423542 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-metrics-client-ca\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.423770 master-0 kubenswrapper[33867]: I0219 03:27:24.423575 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-main-db\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.423770 master-0 kubenswrapper[33867]: I0219 03:27:24.423608 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-web\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.424365 master-0 
kubenswrapper[33867]: I0219 03:27:24.424323 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:24.424901 master-0 kubenswrapper[33867]: I0219 03:27:24.424819 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:27:24.426505 master-0 kubenswrapper[33867]: I0219 03:27:24.426456 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-config-volume" (OuterVolumeSpecName: "config-volume") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:24.428058 master-0 kubenswrapper[33867]: I0219 03:27:24.427979 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:27:24.438363 master-0 kubenswrapper[33867]: I0219 03:27:24.438295 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:24.439112 master-0 kubenswrapper[33867]: I0219 03:27:24.439041 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:24.439245 master-0 kubenswrapper[33867]: I0219 03:27:24.439140 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:24.443431 master-0 kubenswrapper[33867]: I0219 03:27:24.443334 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-config-out" (OuterVolumeSpecName: "config-out") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:27:24.482541 master-0 kubenswrapper[33867]: I0219 03:27:24.482495 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-web-config" (OuterVolumeSpecName: "web-config") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:24.524804 master-0 kubenswrapper[33867]: I0219 03:27:24.524753 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-trusted-ca-bundle\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.525049 master-0 kubenswrapper[33867]: I0219 03:27:24.524830 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r5sh\" (UniqueName: \"kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-kube-api-access-9r5sh\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.525049 master-0 kubenswrapper[33867]: I0219 03:27:24.524855 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-main-tls\") pod \"b558ca3e-01df-4a0a-8f76-e81247053c03\" (UID: \"b558ca3e-01df-4a0a-8f76-e81247053c03\") " Feb 19 03:27:24.525147 master-0 kubenswrapper[33867]: I0219 03:27:24.525077 33867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.525147 master-0 kubenswrapper[33867]: I0219 03:27:24.525093 33867 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.525147 master-0 kubenswrapper[33867]: I0219 03:27:24.525106 33867 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.525147 master-0 kubenswrapper[33867]: I0219 03:27:24.525119 33867 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-config-out\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.525147 master-0 kubenswrapper[33867]: I0219 03:27:24.525132 33867 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.525147 master-0 kubenswrapper[33867]: I0219 03:27:24.525143 33867 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-main-db\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.525147 master-0 kubenswrapper[33867]: I0219 03:27:24.525154 33867 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.526140 master-0 kubenswrapper[33867]: I0219 03:27:24.525163 33867 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-web-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.526140 master-0 kubenswrapper[33867]: I0219 03:27:24.525172 33867 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-tls-assets\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.526140 master-0 kubenswrapper[33867]: I0219 03:27:24.525230 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:24.527701 master-0 kubenswrapper[33867]: I0219 03:27:24.527659 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:24.528134 master-0 kubenswrapper[33867]: I0219 03:27:24.528065 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-kube-api-access-9r5sh" (OuterVolumeSpecName: "kube-api-access-9r5sh") pod "b558ca3e-01df-4a0a-8f76-e81247053c03" (UID: "b558ca3e-01df-4a0a-8f76-e81247053c03"). InnerVolumeSpecName "kube-api-access-9r5sh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:27:24.626275 master-0 kubenswrapper[33867]: I0219 03:27:24.626176 33867 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b558ca3e-01df-4a0a-8f76-e81247053c03-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.626275 master-0 kubenswrapper[33867]: I0219 03:27:24.626222 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r5sh\" (UniqueName: \"kubernetes.io/projected/b558ca3e-01df-4a0a-8f76-e81247053c03-kube-api-access-9r5sh\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.626275 master-0 kubenswrapper[33867]: I0219 03:27:24.626233 33867 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b558ca3e-01df-4a0a-8f76-e81247053c03-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:24.847446 master-0 kubenswrapper[33867]: I0219 03:27:24.847331 33867 generic.go:334] "Generic (PLEG): container finished" podID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerID="5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38" exitCode=0 Feb 19 03:27:24.847446 master-0 kubenswrapper[33867]: I0219 03:27:24.847391 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerDied","Data":"5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38"} Feb 19 03:27:24.847446 master-0 kubenswrapper[33867]: I0219 03:27:24.847453 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b558ca3e-01df-4a0a-8f76-e81247053c03","Type":"ContainerDied","Data":"b66494c48119740bc6edfb285e35655e715735720f72b6bb4c3bc84ad9b7f5c0"} Feb 19 03:27:24.847446 master-0 kubenswrapper[33867]: I0219 03:27:24.847485 33867 scope.go:117] "RemoveContainer" containerID="179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df" Feb 19 03:27:24.847446 master-0 kubenswrapper[33867]: I0219 03:27:24.847493 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:24.874176 master-0 kubenswrapper[33867]: I0219 03:27:24.874064 33867 scope.go:117] "RemoveContainer" containerID="7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14" Feb 19 03:27:24.899983 master-0 kubenswrapper[33867]: I0219 03:27:24.899916 33867 scope.go:117] "RemoveContainer" containerID="92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb" Feb 19 03:27:24.905245 master-0 kubenswrapper[33867]: I0219 03:27:24.904851 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 03:27:24.915462 master-0 kubenswrapper[33867]: I0219 03:27:24.915371 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 03:27:24.929777 master-0 kubenswrapper[33867]: I0219 03:27:24.929727 33867 scope.go:117] "RemoveContainer" containerID="5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38" Feb 19 03:27:24.944401 master-0 kubenswrapper[33867]: I0219 03:27:24.944335 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 03:27:24.944728 master-0 kubenswrapper[33867]: E0219 03:27:24.944698 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy" Feb 19 03:27:24.944728 master-0 kubenswrapper[33867]: I0219 03:27:24.944720 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: E0219 03:27:24.944742 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="prom-label-proxy" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.944750 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="prom-label-proxy" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: E0219 03:27:24.944796 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="config-reloader" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.944803 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="config-reloader" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: E0219 03:27:24.944819 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy-web" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.944827 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy-web" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: E0219 03:27:24.944844 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="alertmanager" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.944850 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="alertmanager" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: E0219 03:27:24.944860 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="init-config-reloader" Feb 19 
03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.944866 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="init-config-reloader" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: E0219 03:27:24.944888 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy-metric" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.944895 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy-metric" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.945034 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="config-reloader" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.945073 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.945089 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy-web" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.945098 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="prom-label-proxy" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.945115 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="kube-rbac-proxy-metric" Feb 19 03:27:24.946964 master-0 kubenswrapper[33867]: I0219 03:27:24.945125 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" containerName="alertmanager" Feb 19 03:27:24.951338 master-0 kubenswrapper[33867]: I0219 03:27:24.950821 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:24.954592 master-0 kubenswrapper[33867]: I0219 03:27:24.953697 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-qlddr" Feb 19 03:27:24.954592 master-0 kubenswrapper[33867]: I0219 03:27:24.954096 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 19 03:27:24.954592 master-0 kubenswrapper[33867]: I0219 03:27:24.954279 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 19 03:27:24.954592 master-0 kubenswrapper[33867]: I0219 03:27:24.954414 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 19 03:27:24.954592 master-0 kubenswrapper[33867]: I0219 03:27:24.954569 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 19 03:27:24.954867 master-0 kubenswrapper[33867]: I0219 03:27:24.954672 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 19 03:27:24.955504 master-0 kubenswrapper[33867]: I0219 03:27:24.955470 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 19 03:27:24.955678 master-0 kubenswrapper[33867]: I0219 03:27:24.955653 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 19 03:27:24.963884 master-0 kubenswrapper[33867]: I0219 03:27:24.963832 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 19 03:27:24.967824 master-0 kubenswrapper[33867]: I0219 03:27:24.967779 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b558ca3e-01df-4a0a-8f76-e81247053c03" path="/var/lib/kubelet/pods/b558ca3e-01df-4a0a-8f76-e81247053c03/volumes" Feb 19 03:27:24.969105 master-0 kubenswrapper[33867]: I0219 03:27:24.969077 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 03:27:24.969611 master-0 kubenswrapper[33867]: I0219 03:27:24.969562 33867 scope.go:117] "RemoveContainer" containerID="bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678" Feb 19 03:27:24.999846 master-0 kubenswrapper[33867]: I0219 03:27:24.997126 33867 scope.go:117] "RemoveContainer" containerID="e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956" Feb 19 03:27:25.033245 master-0 kubenswrapper[33867]: I0219 03:27:25.033184 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.033486 master-0 kubenswrapper[33867]: I0219 03:27:25.033310 33867 scope.go:117] "RemoveContainer" containerID="42fa54296f3057643f9869589a455931250b9b867a2f11939f14f7b69040d6fe" Feb 19 03:27:25.033486 master-0 kubenswrapper[33867]: I0219 03:27:25.033402 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n5mht\" (UniqueName: \"kubernetes.io/projected/f575aff7-687b-4fd9-8d50-22cee2314277-kube-api-access-n5mht\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.033574 master-0 kubenswrapper[33867]: I0219 03:27:25.033549 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-config-volume\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.033636 master-0 kubenswrapper[33867]: I0219 03:27:25.033614 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-web-config\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.033836 master-0 kubenswrapper[33867]: I0219 03:27:25.033807 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.033914 master-0 kubenswrapper[33867]: I0219 03:27:25.033873 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f575aff7-687b-4fd9-8d50-22cee2314277-tls-assets\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.034071 master-0 kubenswrapper[33867]: I0219 03:27:25.034002 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f575aff7-687b-4fd9-8d50-22cee2314277-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.034151 master-0 kubenswrapper[33867]: I0219 03:27:25.034108 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f575aff7-687b-4fd9-8d50-22cee2314277-config-out\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.034192 master-0 kubenswrapper[33867]: I0219 03:27:25.034150 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f575aff7-687b-4fd9-8d50-22cee2314277-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.034301 master-0 kubenswrapper[33867]: I0219 03:27:25.034240 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-kube-rbac-proxy-metric\") pod 
\"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.034380 master-0 kubenswrapper[33867]: I0219 03:27:25.034363 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/f575aff7-687b-4fd9-8d50-22cee2314277-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.034446 master-0 kubenswrapper[33867]: I0219 03:27:25.034434 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.056122 master-0 kubenswrapper[33867]: I0219 03:27:25.055941 33867 scope.go:117] "RemoveContainer" containerID="179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df" Feb 19 03:27:25.056576 master-0 kubenswrapper[33867]: E0219 03:27:25.056422 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df\": container with ID starting with 179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df not found: ID does not exist" containerID="179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df" Feb 19 03:27:25.056576 master-0 kubenswrapper[33867]: I0219 03:27:25.056456 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df"} err="failed to get container status \"179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df\": rpc error: code = NotFound desc = could not find container \"179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df\": container with ID starting with 179bdb0a922c4a923b2b9aa6215f380e4a58c637905ce0820433d83673b0f6df not found: ID does not exist" Feb 19 03:27:25.056576 master-0 kubenswrapper[33867]: I0219 03:27:25.056491 33867 scope.go:117] "RemoveContainer" containerID="7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14" Feb 19 03:27:25.057047 master-0 kubenswrapper[33867]: E0219 03:27:25.056998 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14\": container with ID starting with 7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14 not found: ID does not exist" containerID="7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14" Feb 19 03:27:25.057110 master-0 kubenswrapper[33867]: I0219 03:27:25.057056 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14"} err="failed to get container status \"7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14\": rpc error: code = NotFound desc = could not find container \"7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14\": container with ID starting with 7a2afdf842304e27a51ad8737da498b7c71947ca35daddabc28baec445ee7d14 not found: ID does not 
exist" Feb 19 03:27:25.057110 master-0 kubenswrapper[33867]: I0219 03:27:25.057086 33867 scope.go:117] "RemoveContainer" containerID="92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb" Feb 19 03:27:25.057623 master-0 kubenswrapper[33867]: E0219 03:27:25.057506 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb\": container with ID starting with 92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb not found: ID does not exist" containerID="92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb" Feb 19 03:27:25.057623 master-0 kubenswrapper[33867]: I0219 03:27:25.057538 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb"} err="failed to get container status \"92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb\": rpc error: code = NotFound desc = could not find container \"92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb\": container with ID starting with 92a7b64932f7cdbc69be71996463761b6a6c06fd0667bc0045d44f063c28fceb not found: ID does not exist" Feb 19 03:27:25.057623 master-0 kubenswrapper[33867]: I0219 03:27:25.057559 33867 scope.go:117] "RemoveContainer" containerID="5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38" Feb 19 03:27:25.057932 master-0 kubenswrapper[33867]: E0219 03:27:25.057893 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38\": container with ID starting with 5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38 not found: ID does not exist" containerID="5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38" Feb 19 03:27:25.057985 master-0 kubenswrapper[33867]: I0219 03:27:25.057940 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38"} err="failed to get container status \"5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38\": rpc error: code = NotFound desc = could not find container \"5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38\": container with ID starting with 5bb6b231dc587841b711d428c6b379f3a3a0da802fca4865e6c0ab0c7a4fdd38 not found: ID does not exist" Feb 19 03:27:25.057985 master-0 kubenswrapper[33867]: I0219 03:27:25.057978 33867 scope.go:117] "RemoveContainer" containerID="bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678" Feb 19 03:27:25.058503 master-0 kubenswrapper[33867]: E0219 03:27:25.058394 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678\": container with ID starting with bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678 not found: ID does not exist" containerID="bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678" Feb 19 03:27:25.058503 master-0 kubenswrapper[33867]: I0219 03:27:25.058420 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678"} err="failed to get container status 
\"bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678\": rpc error: code = NotFound desc = could not find container \"bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678\": container with ID starting with bc3dc73d5a205e1970a43a36267f4a45b5af1a867060028619d044ccf1325678 not found: ID does not exist" Feb 19 03:27:25.058503 master-0 kubenswrapper[33867]: I0219 03:27:25.058436 33867 scope.go:117] "RemoveContainer" containerID="e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956" Feb 19 03:27:25.058776 master-0 kubenswrapper[33867]: E0219 03:27:25.058742 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956\": container with ID starting with e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956 not found: ID does not exist" containerID="e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956" Feb 19 03:27:25.058836 master-0 kubenswrapper[33867]: I0219 03:27:25.058781 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956"} err="failed to get container status \"e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956\": rpc error: code = NotFound desc = could not find container \"e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956\": container with ID starting with e5482746abd70f816148935fe4d72b17aa83404a3a5d3580597b8942864a8956 not found: ID does not exist" Feb 19 03:27:25.058836 master-0 kubenswrapper[33867]: I0219 03:27:25.058810 33867 scope.go:117] "RemoveContainer" containerID="42fa54296f3057643f9869589a455931250b9b867a2f11939f14f7b69040d6fe" Feb 19 03:27:25.059326 master-0 kubenswrapper[33867]: E0219 03:27:25.059298 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42fa54296f3057643f9869589a455931250b9b867a2f11939f14f7b69040d6fe\": container with ID starting with 42fa54296f3057643f9869589a455931250b9b867a2f11939f14f7b69040d6fe not found: ID does not exist" containerID="42fa54296f3057643f9869589a455931250b9b867a2f11939f14f7b69040d6fe" Feb 19 03:27:25.059392 master-0 kubenswrapper[33867]: I0219 03:27:25.059329 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42fa54296f3057643f9869589a455931250b9b867a2f11939f14f7b69040d6fe"} err="failed to get container status \"42fa54296f3057643f9869589a455931250b9b867a2f11939f14f7b69040d6fe\": rpc error: code = NotFound desc = could not find container \"42fa54296f3057643f9869589a455931250b9b867a2f11939f14f7b69040d6fe\": container with ID starting with 42fa54296f3057643f9869589a455931250b9b867a2f11939f14f7b69040d6fe not found: ID does not exist" Feb 19 03:27:25.136816 master-0 kubenswrapper[33867]: I0219 03:27:25.136657 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5mht\" (UniqueName: \"kubernetes.io/projected/f575aff7-687b-4fd9-8d50-22cee2314277-kube-api-access-n5mht\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.136816 master-0 kubenswrapper[33867]: I0219 03:27:25.136760 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-config-volume\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.137104 master-0 kubenswrapper[33867]: I0219 03:27:25.136834 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-web-config\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.137104 master-0 kubenswrapper[33867]: I0219 03:27:25.136927 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.137104 master-0 kubenswrapper[33867]: I0219 03:27:25.136962 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f575aff7-687b-4fd9-8d50-22cee2314277-tls-assets\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.137104 master-0 kubenswrapper[33867]: I0219 03:27:25.137018 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f575aff7-687b-4fd9-8d50-22cee2314277-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.137104 master-0 kubenswrapper[33867]: I0219 03:27:25.137057 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f575aff7-687b-4fd9-8d50-22cee2314277-config-out\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.137359 master-0 kubenswrapper[33867]: I0219 03:27:25.137109 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f575aff7-687b-4fd9-8d50-22cee2314277-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.137359 master-0 kubenswrapper[33867]: I0219 03:27:25.137132 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.137359 master-0 kubenswrapper[33867]: I0219 03:27:25.137190 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/f575aff7-687b-4fd9-8d50-22cee2314277-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.137359 master-0 kubenswrapper[33867]: I0219 03:27:25.137247 33867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.137359 master-0 kubenswrapper[33867]: I0219 03:27:25.137346 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.139043 master-0 kubenswrapper[33867]: I0219 03:27:25.138570 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/f575aff7-687b-4fd9-8d50-22cee2314277-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.139043 master-0 kubenswrapper[33867]: I0219 03:27:25.138982 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f575aff7-687b-4fd9-8d50-22cee2314277-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.139551 master-0 kubenswrapper[33867]: I0219 03:27:25.139511 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f575aff7-687b-4fd9-8d50-22cee2314277-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.141815 master-0 kubenswrapper[33867]: I0219 03:27:25.141761 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.142409 master-0 kubenswrapper[33867]: I0219 03:27:25.142365 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.142409 master-0 kubenswrapper[33867]: I0219 03:27:25.141810 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-web-config\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.142684 master-0 kubenswrapper[33867]: I0219 03:27:25.142646 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-config-volume\") pod \"alertmanager-main-0\" (UID: 
\"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.142938 master-0 kubenswrapper[33867]: I0219 03:27:25.142916 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f575aff7-687b-4fd9-8d50-22cee2314277-tls-assets\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.143144 master-0 kubenswrapper[33867]: I0219 03:27:25.143096 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.143222 master-0 kubenswrapper[33867]: I0219 03:27:25.143191 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f575aff7-687b-4fd9-8d50-22cee2314277-config-out\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.143426 master-0 kubenswrapper[33867]: I0219 03:27:25.143405 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f575aff7-687b-4fd9-8d50-22cee2314277-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.160730 master-0 kubenswrapper[33867]: I0219 03:27:25.160685 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5mht\" (UniqueName: \"kubernetes.io/projected/f575aff7-687b-4fd9-8d50-22cee2314277-kube-api-access-n5mht\") pod \"alertmanager-main-0\" (UID: \"f575aff7-687b-4fd9-8d50-22cee2314277\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.295844 master-0 kubenswrapper[33867]: I0219 03:27:25.295789 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 19 03:27:25.723498 master-0 kubenswrapper[33867]: I0219 03:27:25.723144 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 03:27:25.865875 master-0 kubenswrapper[33867]: I0219 03:27:25.865806 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f575aff7-687b-4fd9-8d50-22cee2314277","Type":"ContainerStarted","Data":"31e0d6ac2140056bd81f380273fbd1501379fbb1c788e824a6462d6e54f69e62"} Feb 19 03:27:26.804282 master-0 kubenswrapper[33867]: I0219 03:27:26.799471 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:27:26.887690 master-0 kubenswrapper[33867]: I0219 03:27:26.887629 33867 generic.go:334] "Generic (PLEG): container finished" podID="f575aff7-687b-4fd9-8d50-22cee2314277" containerID="63a4c58fb6ecbf730644944037caf05a28f05088bd1ba794d90431887f99d4a0" exitCode=0 Feb 19 03:27:26.887892 master-0 kubenswrapper[33867]: I0219 03:27:26.887714 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f575aff7-687b-4fd9-8d50-22cee2314277","Type":"ContainerDied","Data":"63a4c58fb6ecbf730644944037caf05a28f05088bd1ba794d90431887f99d4a0"} Feb 19 03:27:26.893549 master-0 kubenswrapper[33867]: I0219 03:27:26.893506 33867 generic.go:334] "Generic (PLEG): container finished" podID="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" containerID="0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c" exitCode=0 Feb 19 03:27:26.893631 master-0 kubenswrapper[33867]: I0219 03:27:26.893582 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" event={"ID":"22370ccf-c383-4c1e-96f2-b5c61bb0cebe","Type":"ContainerDied","Data":"0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c"} Feb 19 03:27:26.893631 master-0 kubenswrapper[33867]: I0219 03:27:26.893609 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" event={"ID":"22370ccf-c383-4c1e-96f2-b5c61bb0cebe","Type":"ContainerDied","Data":"383b491b9f27144fe9b7a96c0308977fdc414552864afb1ce6b22fbacc40b8ac"} Feb 19 03:27:26.893631 master-0 kubenswrapper[33867]: I0219 03:27:26.893629 33867 scope.go:117] "RemoveContainer" containerID="0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c" Feb 19 03:27:26.893729 master-0 kubenswrapper[33867]: I0219 03:27:26.893706 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-68d9f4c46b-mh59n" Feb 19 03:27:26.936017 master-0 kubenswrapper[33867]: I0219 03:27:26.935990 33867 scope.go:117] "RemoveContainer" containerID="0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c" Feb 19 03:27:26.936790 master-0 kubenswrapper[33867]: E0219 03:27:26.936742 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c\": container with ID starting with 0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c not found: ID does not exist" containerID="0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c" Feb 19 03:27:26.936865 master-0 kubenswrapper[33867]: I0219 03:27:26.936805 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c"} err="failed to get container status \"0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c\": rpc error: code = NotFound desc = could not find container \"0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c\": container with ID starting with 0ae6d1d47b008a96622eeb3668eafe64b4b1d508cf72dceaf91b354fbc5deb8c not found: ID does not exist" Feb 19 03:27:26.965195 master-0 kubenswrapper[33867]: I0219 03:27:26.965141 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles\") pod \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " Feb 19 03:27:26.965335 master-0 kubenswrapper[33867]: I0219 03:27:26.965225 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle\") pod \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " Feb 19 03:27:26.965398 master-0 kubenswrapper[33867]: I0219 03:27:26.965331 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-audit-log\") pod \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " Feb 19 03:27:26.965398 master-0 kubenswrapper[33867]: I0219 03:27:26.965377 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle\") pod \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " Feb 19 03:27:26.965486 master-0 kubenswrapper[33867]: I0219 03:27:26.965410 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs\") pod \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " Feb 19 03:27:26.965556 master-0 kubenswrapper[33867]: I0219 03:27:26.965532 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn4dg\" (UniqueName: 
\"kubernetes.io/projected/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-kube-api-access-pn4dg\") pod \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " Feb 19 03:27:26.965610 master-0 kubenswrapper[33867]: I0219 03:27:26.965583 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls\") pod \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\" (UID: \"22370ccf-c383-4c1e-96f2-b5c61bb0cebe\") " Feb 19 03:27:26.966765 master-0 kubenswrapper[33867]: I0219 03:27:26.966033 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "22370ccf-c383-4c1e-96f2-b5c61bb0cebe" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:26.966765 master-0 kubenswrapper[33867]: I0219 03:27:26.966376 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "22370ccf-c383-4c1e-96f2-b5c61bb0cebe" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:26.966765 master-0 kubenswrapper[33867]: I0219 03:27:26.966586 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-audit-log" (OuterVolumeSpecName: "audit-log") pod "22370ccf-c383-4c1e-96f2-b5c61bb0cebe" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:27:26.969483 master-0 kubenswrapper[33867]: I0219 03:27:26.969388 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-kube-api-access-pn4dg" (OuterVolumeSpecName: "kube-api-access-pn4dg") pod "22370ccf-c383-4c1e-96f2-b5c61bb0cebe" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe"). InnerVolumeSpecName "kube-api-access-pn4dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:27:26.969483 master-0 kubenswrapper[33867]: I0219 03:27:26.969463 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "22370ccf-c383-4c1e-96f2-b5c61bb0cebe" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:26.970151 master-0 kubenswrapper[33867]: I0219 03:27:26.970077 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "22370ccf-c383-4c1e-96f2-b5c61bb0cebe" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe"). InnerVolumeSpecName "secret-metrics-server-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:26.970642 master-0 kubenswrapper[33867]: I0219 03:27:26.970603 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "22370ccf-c383-4c1e-96f2-b5c61bb0cebe" (UID: "22370ccf-c383-4c1e-96f2-b5c61bb0cebe"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:27.078391 master-0 kubenswrapper[33867]: I0219 03:27:27.077768 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn4dg\" (UniqueName: \"kubernetes.io/projected/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-kube-api-access-pn4dg\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.078391 master-0 kubenswrapper[33867]: I0219 03:27:27.077813 33867 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.078391 master-0 kubenswrapper[33867]: I0219 03:27:27.077825 33867 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.078391 master-0 kubenswrapper[33867]: I0219 03:27:27.077846 33867 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.078391 master-0 kubenswrapper[33867]: I0219 03:27:27.077858 33867 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-audit-log\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.078391 master-0 kubenswrapper[33867]: I0219 03:27:27.077872 33867 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.078391 master-0 kubenswrapper[33867]: I0219 03:27:27.077911 33867 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22370ccf-c383-4c1e-96f2-b5c61bb0cebe-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.229343 master-0 kubenswrapper[33867]: I0219 03:27:27.229241 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-68d9f4c46b-mh59n"] Feb 19 03:27:27.232165 master-0 kubenswrapper[33867]: I0219 03:27:27.232115 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-68d9f4c46b-mh59n"] Feb 19 03:27:27.279551 master-0 kubenswrapper[33867]: I0219 03:27:27.279473 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 03:27:27.280466 master-0 kubenswrapper[33867]: I0219 03:27:27.280424 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="prometheus" containerID="cri-o://940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb" 
gracePeriod=600 Feb 19 03:27:27.280569 master-0 kubenswrapper[33867]: I0219 03:27:27.280469 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy" containerID="cri-o://ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe" gracePeriod=600 Feb 19 03:27:27.280644 master-0 kubenswrapper[33867]: I0219 03:27:27.280615 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy-thanos" containerID="cri-o://52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee" gracePeriod=600 Feb 19 03:27:27.280708 master-0 kubenswrapper[33867]: I0219 03:27:27.280627 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="thanos-sidecar" containerID="cri-o://1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c" gracePeriod=600 Feb 19 03:27:27.280708 master-0 kubenswrapper[33867]: I0219 03:27:27.280700 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="config-reloader" containerID="cri-o://ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe" gracePeriod=600 Feb 19 03:27:27.280918 master-0 kubenswrapper[33867]: I0219 03:27:27.280772 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy-web" containerID="cri-o://78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0" gracePeriod=600 Feb 19 03:27:27.783034 master-0 kubenswrapper[33867]: I0219 03:27:27.782962 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:27.892697 master-0 kubenswrapper[33867]: I0219 03:27:27.892658 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-web-config\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893082 master-0 kubenswrapper[33867]: I0219 03:27:27.892715 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-kube-rbac-proxy\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893082 master-0 kubenswrapper[33867]: I0219 03:27:27.892756 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-config-out\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893082 master-0 kubenswrapper[33867]: I0219 03:27:27.892808 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-db\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893082 master-0 kubenswrapper[33867]: I0219 03:27:27.892850 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893082 master-0 kubenswrapper[33867]: I0219 03:27:27.892868 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gc2q\" (UniqueName: \"kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-kube-api-access-8gc2q\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893082 master-0 kubenswrapper[33867]: I0219 03:27:27.892893 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-metrics-client-ca\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893082 master-0 kubenswrapper[33867]: I0219 03:27:27.892984 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-serving-certs-ca-bundle\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893082 master-0 kubenswrapper[33867]: I0219 03:27:27.893025 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-rulefiles-0\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893082 
master-0 kubenswrapper[33867]: I0219 03:27:27.893056 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-metrics-client-certs\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893082 master-0 kubenswrapper[33867]: I0219 03:27:27.893080 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-tls\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893409 master-0 kubenswrapper[33867]: I0219 03:27:27.893101 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893409 master-0 kubenswrapper[33867]: I0219 03:27:27.893120 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-trusted-ca-bundle\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893409 master-0 kubenswrapper[33867]: I0219 03:27:27.893159 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-thanos-prometheus-http-client-file\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893409 master-0 kubenswrapper[33867]: I0219 03:27:27.893182 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-tls-assets\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893409 master-0 kubenswrapper[33867]: I0219 03:27:27.893209 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-kubelet-serving-ca-bundle\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893409 master-0 kubenswrapper[33867]: I0219 03:27:27.893227 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-config\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.893409 master-0 kubenswrapper[33867]: I0219 03:27:27.893280 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-grpc-tls\") pod \"67a1a372-6b54-4903-a7de-cce85bd4c904\" (UID: \"67a1a372-6b54-4903-a7de-cce85bd4c904\") " Feb 19 03:27:27.896359 master-0 kubenswrapper[33867]: I0219 03:27:27.896190 33867 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:27.896907 master-0 kubenswrapper[33867]: I0219 03:27:27.896413 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:27.896907 master-0 kubenswrapper[33867]: I0219 03:27:27.896443 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-kube-api-access-8gc2q" (OuterVolumeSpecName: "kube-api-access-8gc2q") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "kube-api-access-8gc2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:27:27.896907 master-0 kubenswrapper[33867]: I0219 03:27:27.896635 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:27.897073 master-0 kubenswrapper[33867]: I0219 03:27:27.896922 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:27:27.897073 master-0 kubenswrapper[33867]: I0219 03:27:27.897015 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:27.898274 master-0 kubenswrapper[33867]: I0219 03:27:27.898227 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-config-out" (OuterVolumeSpecName: "config-out") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:27:27.898274 master-0 kubenswrapper[33867]: I0219 03:27:27.898227 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:27.899349 master-0 kubenswrapper[33867]: I0219 03:27:27.899013 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:27.899349 master-0 kubenswrapper[33867]: I0219 03:27:27.899059 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:27.899349 master-0 kubenswrapper[33867]: I0219 03:27:27.899165 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-config" (OuterVolumeSpecName: "config") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:27.899349 master-0 kubenswrapper[33867]: I0219 03:27:27.899317 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:27.900497 master-0 kubenswrapper[33867]: I0219 03:27:27.899509 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:27.900497 master-0 kubenswrapper[33867]: I0219 03:27:27.899630 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:27:27.900497 master-0 kubenswrapper[33867]: I0219 03:27:27.899935 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:27.900497 master-0 kubenswrapper[33867]: I0219 03:27:27.900456 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:27.901101 master-0 kubenswrapper[33867]: I0219 03:27:27.901001 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:27.909670 master-0 kubenswrapper[33867]: I0219 03:27:27.909613 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f575aff7-687b-4fd9-8d50-22cee2314277","Type":"ContainerStarted","Data":"fe5226a9db0c7a00ccd3f752e05916ac627647a59e1ee5aed51d3a172783c77e"} Feb 19 03:27:27.909670 master-0 kubenswrapper[33867]: I0219 03:27:27.909672 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f575aff7-687b-4fd9-8d50-22cee2314277","Type":"ContainerStarted","Data":"6e929d043b3efc91817f9b628a6217cff573e754851c149f989b861b0b2d3464"} Feb 19 03:27:27.909837 master-0 kubenswrapper[33867]: I0219 03:27:27.909685 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f575aff7-687b-4fd9-8d50-22cee2314277","Type":"ContainerStarted","Data":"0472697dd8d93e3ecda65b4c3d842a2b347deb5db714bd89c74814f35095c024"} Feb 19 03:27:27.909837 master-0 kubenswrapper[33867]: I0219 03:27:27.909697 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f575aff7-687b-4fd9-8d50-22cee2314277","Type":"ContainerStarted","Data":"451ec4cd0d338f0dfff29b6c65460fb693cc3fcfdc1647fbf708188ada643762"} Feb 19 03:27:27.909837 master-0 kubenswrapper[33867]: I0219 03:27:27.909709 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f575aff7-687b-4fd9-8d50-22cee2314277","Type":"ContainerStarted","Data":"3c086600f0e86820943a2924048fdcafa56b0832d0254707d0ff4927ff8fe859"} Feb 19 03:27:27.909837 master-0 kubenswrapper[33867]: I0219 03:27:27.909719 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f575aff7-687b-4fd9-8d50-22cee2314277","Type":"ContainerStarted","Data":"009bc218681883e9f354c59e2d0126cb4cbb85ed427e7b7543527f13f62cea34"} Feb 19 03:27:27.913633 
master-0 kubenswrapper[33867]: I0219 03:27:27.913582 33867 generic.go:334] "Generic (PLEG): container finished" podID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerID="52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee" exitCode=0 Feb 19 03:27:27.913633 master-0 kubenswrapper[33867]: I0219 03:27:27.913628 33867 generic.go:334] "Generic (PLEG): container finished" podID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerID="ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe" exitCode=0 Feb 19 03:27:27.913734 master-0 kubenswrapper[33867]: I0219 03:27:27.913639 33867 generic.go:334] "Generic (PLEG): container finished" podID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerID="78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0" exitCode=0 Feb 19 03:27:27.913734 master-0 kubenswrapper[33867]: I0219 03:27:27.913651 33867 generic.go:334] "Generic (PLEG): container finished" podID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerID="1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c" exitCode=0 Feb 19 03:27:27.913734 master-0 kubenswrapper[33867]: I0219 03:27:27.913666 33867 generic.go:334] "Generic (PLEG): container finished" podID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerID="ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe" exitCode=0 Feb 19 03:27:27.913734 master-0 kubenswrapper[33867]: I0219 03:27:27.913676 33867 generic.go:334] "Generic (PLEG): container finished" podID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerID="940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb" exitCode=0 Feb 19 03:27:27.913734 master-0 kubenswrapper[33867]: I0219 03:27:27.913643 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerDied","Data":"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee"} Feb 19 03:27:27.913734 master-0 kubenswrapper[33867]: I0219 03:27:27.913715 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerDied","Data":"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe"} Feb 19 03:27:27.913734 master-0 kubenswrapper[33867]: I0219 03:27:27.913731 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerDied","Data":"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0"} Feb 19 03:27:27.914007 master-0 kubenswrapper[33867]: I0219 03:27:27.913751 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerDied","Data":"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c"} Feb 19 03:27:27.914007 master-0 kubenswrapper[33867]: I0219 03:27:27.913739 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:27.914007 master-0 kubenswrapper[33867]: I0219 03:27:27.913781 33867 scope.go:117] "RemoveContainer" containerID="52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee" Feb 19 03:27:27.914007 master-0 kubenswrapper[33867]: I0219 03:27:27.913765 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerDied","Data":"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe"} Feb 19 03:27:27.914007 master-0 kubenswrapper[33867]: I0219 03:27:27.913880 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerDied","Data":"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb"} Feb 19 03:27:27.914007 master-0 kubenswrapper[33867]: I0219 03:27:27.913896 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"67a1a372-6b54-4903-a7de-cce85bd4c904","Type":"ContainerDied","Data":"266e24246c059d07473e58e23e2e87821a0feae386cac298b824a0fa5596f7d8"} Feb 19 03:27:27.926314 master-0 kubenswrapper[33867]: I0219 03:27:27.926280 33867 scope.go:117] "RemoveContainer" containerID="ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe" Feb 19 03:27:27.945076 master-0 kubenswrapper[33867]: I0219 03:27:27.941239 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.941212885 podStartE2EDuration="3.941212885s" podCreationTimestamp="2026-02-19 03:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:27:27.935377072 +0000 UTC m=+253.232047703" watchObservedRunningTime="2026-02-19 03:27:27.941212885 +0000 UTC m=+253.237883496" Feb 19 03:27:27.945076 master-0 kubenswrapper[33867]: I0219 03:27:27.944087 33867 scope.go:117] "RemoveContainer" containerID="78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0" Feb 19 03:27:27.955483 master-0 kubenswrapper[33867]: I0219 03:27:27.955307 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-web-config" (OuterVolumeSpecName: "web-config") pod "67a1a372-6b54-4903-a7de-cce85bd4c904" (UID: "67a1a372-6b54-4903-a7de-cce85bd4c904"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:27.979916 master-0 kubenswrapper[33867]: I0219 03:27:27.979341 33867 scope.go:117] "RemoveContainer" containerID="1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c" Feb 19 03:27:27.994266 master-0 kubenswrapper[33867]: I0219 03:27:27.994206 33867 scope.go:117] "RemoveContainer" containerID="ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe" Feb 19 03:27:27.995634 master-0 kubenswrapper[33867]: I0219 03:27:27.995518 33867 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-config-out\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995634 master-0 kubenswrapper[33867]: I0219 03:27:27.995543 33867 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995634 master-0 kubenswrapper[33867]: I0219 03:27:27.995555 33867 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995634 master-0 kubenswrapper[33867]: I0219 03:27:27.995571 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gc2q\" (UniqueName: \"kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-kube-api-access-8gc2q\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995634 master-0 kubenswrapper[33867]: I0219 03:27:27.995613 33867 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995641 33867 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995658 33867 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995669 33867 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995679 33867 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995689 33867 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Feb 
19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995704 33867 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995716 33867 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995728 33867 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/67a1a372-6b54-4903-a7de-cce85bd4c904-tls-assets\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995738 33867 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a1a372-6b54-4903-a7de-cce85bd4c904-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995748 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995757 33867 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-grpc-tls\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995770 33867 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-web-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:27.995828 master-0 kubenswrapper[33867]: I0219 03:27:27.995780 33867 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/67a1a372-6b54-4903-a7de-cce85bd4c904-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:28.053355 master-0 kubenswrapper[33867]: I0219 03:27:28.053317 33867 scope.go:117] "RemoveContainer" containerID="940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb" Feb 19 03:27:28.068165 master-0 kubenswrapper[33867]: I0219 03:27:28.068124 33867 scope.go:117] "RemoveContainer" containerID="00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83" Feb 19 03:27:28.089583 master-0 kubenswrapper[33867]: I0219 03:27:28.089529 33867 scope.go:117] "RemoveContainer" containerID="52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee" Feb 19 03:27:28.089988 master-0 kubenswrapper[33867]: E0219 03:27:28.089943 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": container with ID starting with 52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee not found: ID does not exist" containerID="52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee" Feb 19 03:27:28.090051 master-0 kubenswrapper[33867]: I0219 03:27:28.089997 33867 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee"} err="failed to get container status \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": rpc error: code = NotFound desc = could not find container \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": container with ID starting with 52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee not found: ID does not exist" Feb 19 03:27:28.090051 master-0 kubenswrapper[33867]: I0219 03:27:28.090036 33867 scope.go:117] "RemoveContainer" containerID="ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe" Feb 19 03:27:28.090531 master-0 kubenswrapper[33867]: E0219 03:27:28.090500 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": container with ID starting with ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe not found: ID does not exist" containerID="ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe" Feb 19 03:27:28.090582 master-0 kubenswrapper[33867]: I0219 03:27:28.090539 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe"} err="failed to get container status \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": rpc error: code = NotFound desc = could not find container \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": container with ID starting with ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe not found: ID does not exist" Feb 19 03:27:28.090582 master-0 kubenswrapper[33867]: I0219 03:27:28.090565 33867 scope.go:117] "RemoveContainer" containerID="78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0" Feb 19 03:27:28.090928 master-0 kubenswrapper[33867]: E0219 03:27:28.090874 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": container with ID starting with 78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0 not found: ID does not exist" containerID="78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0" Feb 19 03:27:28.090983 master-0 kubenswrapper[33867]: I0219 03:27:28.090922 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0"} err="failed to get container status \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": rpc error: code = NotFound desc = could not find container \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": container with ID starting with 78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0 not found: ID does not exist" Feb 19 03:27:28.090983 master-0 kubenswrapper[33867]: I0219 03:27:28.090948 33867 scope.go:117] "RemoveContainer" containerID="1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c" Feb 19 03:27:28.091333 master-0 kubenswrapper[33867]: E0219 03:27:28.091302 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": container with ID starting with 
1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c not found: ID does not exist" containerID="1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c" Feb 19 03:27:28.091426 master-0 kubenswrapper[33867]: I0219 03:27:28.091342 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c"} err="failed to get container status \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": rpc error: code = NotFound desc = could not find container \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": container with ID starting with 1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c not found: ID does not exist" Feb 19 03:27:28.091426 master-0 kubenswrapper[33867]: I0219 03:27:28.091379 33867 scope.go:117] "RemoveContainer" containerID="ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe" Feb 19 03:27:28.091794 master-0 kubenswrapper[33867]: E0219 03:27:28.091758 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": container with ID starting with ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe not found: ID does not exist" containerID="ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe" Feb 19 03:27:28.091863 master-0 kubenswrapper[33867]: I0219 03:27:28.091804 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe"} err="failed to get container status \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": rpc error: code = NotFound desc = could not find container \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": container with ID starting with ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe not found: ID does not exist" Feb 19 03:27:28.091863 master-0 kubenswrapper[33867]: I0219 03:27:28.091831 33867 scope.go:117] "RemoveContainer" containerID="940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb" Feb 19 03:27:28.092183 master-0 kubenswrapper[33867]: E0219 03:27:28.092152 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": container with ID starting with 940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb not found: ID does not exist" containerID="940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb" Feb 19 03:27:28.092242 master-0 kubenswrapper[33867]: I0219 03:27:28.092190 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb"} err="failed to get container status \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": rpc error: code = NotFound desc = could not find container \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": container with ID starting with 940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb not found: ID does not exist" Feb 19 03:27:28.092242 master-0 kubenswrapper[33867]: I0219 03:27:28.092214 33867 scope.go:117] "RemoveContainer" containerID="00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83" Feb 19 03:27:28.092653 
master-0 kubenswrapper[33867]: E0219 03:27:28.092618 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": container with ID starting with 00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83 not found: ID does not exist" containerID="00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83" Feb 19 03:27:28.092725 master-0 kubenswrapper[33867]: I0219 03:27:28.092661 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83"} err="failed to get container status \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": rpc error: code = NotFound desc = could not find container \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": container with ID starting with 00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83 not found: ID does not exist" Feb 19 03:27:28.092725 master-0 kubenswrapper[33867]: I0219 03:27:28.092709 33867 scope.go:117] "RemoveContainer" containerID="52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee" Feb 19 03:27:28.093542 master-0 kubenswrapper[33867]: I0219 03:27:28.093511 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee"} err="failed to get container status \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": rpc error: code = NotFound desc = could not find container \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": container with ID starting with 52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee not found: ID does not exist" Feb 19 03:27:28.093617 master-0 kubenswrapper[33867]: I0219 03:27:28.093544 33867 scope.go:117] "RemoveContainer" containerID="ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe" Feb 19 03:27:28.093870 master-0 kubenswrapper[33867]: I0219 03:27:28.093838 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe"} err="failed to get container status \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": rpc error: code = NotFound desc = could not find container \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": container with ID starting with ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe not found: ID does not exist" Feb 19 03:27:28.093932 master-0 kubenswrapper[33867]: I0219 03:27:28.093875 33867 scope.go:117] "RemoveContainer" containerID="78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0" Feb 19 03:27:28.094173 master-0 kubenswrapper[33867]: I0219 03:27:28.094141 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0"} err="failed to get container status \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": rpc error: code = NotFound desc = could not find container \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": container with ID starting with 78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0 not found: ID does not exist" Feb 19 03:27:28.094269 master-0 kubenswrapper[33867]: I0219 03:27:28.094174 33867 
scope.go:117] "RemoveContainer" containerID="1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c" Feb 19 03:27:28.094628 master-0 kubenswrapper[33867]: I0219 03:27:28.094586 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c"} err="failed to get container status \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": rpc error: code = NotFound desc = could not find container \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": container with ID starting with 1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c not found: ID does not exist" Feb 19 03:27:28.094696 master-0 kubenswrapper[33867]: I0219 03:27:28.094631 33867 scope.go:117] "RemoveContainer" containerID="ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe" Feb 19 03:27:28.095019 master-0 kubenswrapper[33867]: I0219 03:27:28.094989 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe"} err="failed to get container status \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": rpc error: code = NotFound desc = could not find container \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": container with ID starting with ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe not found: ID does not exist" Feb 19 03:27:28.095073 master-0 kubenswrapper[33867]: I0219 03:27:28.095020 33867 scope.go:117] "RemoveContainer" containerID="940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb" Feb 19 03:27:28.095379 master-0 kubenswrapper[33867]: I0219 03:27:28.095345 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb"} err="failed to get container status \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": rpc error: code = NotFound desc = could not find container \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": container with ID starting with 940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb not found: ID does not exist" Feb 19 03:27:28.095434 master-0 kubenswrapper[33867]: I0219 03:27:28.095379 33867 scope.go:117] "RemoveContainer" containerID="00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83" Feb 19 03:27:28.095833 master-0 kubenswrapper[33867]: I0219 03:27:28.095774 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83"} err="failed to get container status \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": rpc error: code = NotFound desc = could not find container \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": container with ID starting with 00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83 not found: ID does not exist" Feb 19 03:27:28.095833 master-0 kubenswrapper[33867]: I0219 03:27:28.095808 33867 scope.go:117] "RemoveContainer" containerID="52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee" Feb 19 03:27:28.096176 master-0 kubenswrapper[33867]: I0219 03:27:28.096140 33867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee"} err="failed to get container status \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": rpc error: code = NotFound desc = could not find container \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": container with ID starting with 52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee not found: ID does not exist" Feb 19 03:27:28.096242 master-0 kubenswrapper[33867]: I0219 03:27:28.096173 33867 scope.go:117] "RemoveContainer" containerID="ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe" Feb 19 03:27:28.096567 master-0 kubenswrapper[33867]: I0219 03:27:28.096534 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe"} err="failed to get container status \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": rpc error: code = NotFound desc = could not find container \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": container with ID starting with ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe not found: ID does not exist" Feb 19 03:27:28.096625 master-0 kubenswrapper[33867]: I0219 03:27:28.096571 33867 scope.go:117] "RemoveContainer" containerID="78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0" Feb 19 03:27:28.097034 master-0 kubenswrapper[33867]: I0219 03:27:28.096840 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0"} err="failed to get container status \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": rpc error: code = NotFound desc = could not find container \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": container with ID starting with 78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0 not found: ID does not exist" Feb 19 03:27:28.097034 master-0 kubenswrapper[33867]: I0219 03:27:28.096877 33867 scope.go:117] "RemoveContainer" containerID="1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c" Feb 19 03:27:28.097395 master-0 kubenswrapper[33867]: I0219 03:27:28.097361 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c"} err="failed to get container status \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": rpc error: code = NotFound desc = could not find container \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": container with ID starting with 1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c not found: ID does not exist" Feb 19 03:27:28.097395 master-0 kubenswrapper[33867]: I0219 03:27:28.097392 33867 scope.go:117] "RemoveContainer" containerID="ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe" Feb 19 03:27:28.097724 master-0 kubenswrapper[33867]: I0219 03:27:28.097691 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe"} err="failed to get container status \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": rpc error: code = NotFound desc = could not find container \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": container with ID 
starting with ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe not found: ID does not exist" Feb 19 03:27:28.097802 master-0 kubenswrapper[33867]: I0219 03:27:28.097722 33867 scope.go:117] "RemoveContainer" containerID="940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb" Feb 19 03:27:28.098108 master-0 kubenswrapper[33867]: I0219 03:27:28.098063 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb"} err="failed to get container status \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": rpc error: code = NotFound desc = could not find container \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": container with ID starting with 940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb not found: ID does not exist" Feb 19 03:27:28.098171 master-0 kubenswrapper[33867]: I0219 03:27:28.098110 33867 scope.go:117] "RemoveContainer" containerID="00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83" Feb 19 03:27:28.098506 master-0 kubenswrapper[33867]: I0219 03:27:28.098468 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83"} err="failed to get container status \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": rpc error: code = NotFound desc = could not find container \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": container with ID starting with 00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83 not found: ID does not exist" Feb 19 03:27:28.098506 master-0 kubenswrapper[33867]: I0219 03:27:28.098503 33867 scope.go:117] "RemoveContainer" containerID="52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee" Feb 19 03:27:28.098802 master-0 kubenswrapper[33867]: I0219 03:27:28.098772 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee"} err="failed to get container status \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": rpc error: code = NotFound desc = could not find container \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": container with ID starting with 52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee not found: ID does not exist" Feb 19 03:27:28.098854 master-0 kubenswrapper[33867]: I0219 03:27:28.098802 33867 scope.go:117] "RemoveContainer" containerID="ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe" Feb 19 03:27:28.099444 master-0 kubenswrapper[33867]: I0219 03:27:28.099398 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe"} err="failed to get container status \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": rpc error: code = NotFound desc = could not find container \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": container with ID starting with ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe not found: ID does not exist" Feb 19 03:27:28.099444 master-0 kubenswrapper[33867]: I0219 03:27:28.099441 33867 scope.go:117] "RemoveContainer" containerID="78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0" Feb 19 03:27:28.100477 master-0 
kubenswrapper[33867]: I0219 03:27:28.100429 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0"} err="failed to get container status \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": rpc error: code = NotFound desc = could not find container \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": container with ID starting with 78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0 not found: ID does not exist" Feb 19 03:27:28.100477 master-0 kubenswrapper[33867]: I0219 03:27:28.100468 33867 scope.go:117] "RemoveContainer" containerID="1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c" Feb 19 03:27:28.101412 master-0 kubenswrapper[33867]: I0219 03:27:28.100954 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c"} err="failed to get container status \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": rpc error: code = NotFound desc = could not find container \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": container with ID starting with 1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c not found: ID does not exist" Feb 19 03:27:28.101412 master-0 kubenswrapper[33867]: I0219 03:27:28.101016 33867 scope.go:117] "RemoveContainer" containerID="ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe" Feb 19 03:27:28.101412 master-0 kubenswrapper[33867]: I0219 03:27:28.101362 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe"} err="failed to get container status \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": rpc error: code = NotFound desc = could not find container \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": container with ID starting with ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe not found: ID does not exist" Feb 19 03:27:28.101412 master-0 kubenswrapper[33867]: I0219 03:27:28.101394 33867 scope.go:117] "RemoveContainer" containerID="940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb" Feb 19 03:27:28.101785 master-0 kubenswrapper[33867]: I0219 03:27:28.101629 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb"} err="failed to get container status \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": rpc error: code = NotFound desc = could not find container \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": container with ID starting with 940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb not found: ID does not exist" Feb 19 03:27:28.101785 master-0 kubenswrapper[33867]: I0219 03:27:28.101660 33867 scope.go:117] "RemoveContainer" containerID="00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83" Feb 19 03:27:28.101909 master-0 kubenswrapper[33867]: I0219 03:27:28.101866 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83"} err="failed to get container status \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": rpc error: code = NotFound desc = 
could not find container \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": container with ID starting with 00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83 not found: ID does not exist" Feb 19 03:27:28.101909 master-0 kubenswrapper[33867]: I0219 03:27:28.101897 33867 scope.go:117] "RemoveContainer" containerID="52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee" Feb 19 03:27:28.102125 master-0 kubenswrapper[33867]: I0219 03:27:28.102087 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee"} err="failed to get container status \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": rpc error: code = NotFound desc = could not find container \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": container with ID starting with 52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee not found: ID does not exist" Feb 19 03:27:28.102125 master-0 kubenswrapper[33867]: I0219 03:27:28.102120 33867 scope.go:117] "RemoveContainer" containerID="ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe" Feb 19 03:27:28.102406 master-0 kubenswrapper[33867]: I0219 03:27:28.102353 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe"} err="failed to get container status \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": rpc error: code = NotFound desc = could not find container \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": container with ID starting with ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe not found: ID does not exist" Feb 19 03:27:28.102406 master-0 kubenswrapper[33867]: I0219 03:27:28.102387 33867 scope.go:117] "RemoveContainer" containerID="78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0" Feb 19 03:27:28.102661 master-0 kubenswrapper[33867]: I0219 03:27:28.102633 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0"} err="failed to get container status \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": rpc error: code = NotFound desc = could not find container \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": container with ID starting with 78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0 not found: ID does not exist" Feb 19 03:27:28.102787 master-0 kubenswrapper[33867]: I0219 03:27:28.102668 33867 scope.go:117] "RemoveContainer" containerID="1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c" Feb 19 03:27:28.102929 master-0 kubenswrapper[33867]: I0219 03:27:28.102883 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c"} err="failed to get container status \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": rpc error: code = NotFound desc = could not find container \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": container with ID starting with 1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c not found: ID does not exist" Feb 19 03:27:28.102929 master-0 kubenswrapper[33867]: I0219 03:27:28.102915 33867 scope.go:117] "RemoveContainer" 
containerID="ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe" Feb 19 03:27:28.103298 master-0 kubenswrapper[33867]: I0219 03:27:28.103208 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe"} err="failed to get container status \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": rpc error: code = NotFound desc = could not find container \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": container with ID starting with ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe not found: ID does not exist" Feb 19 03:27:28.103298 master-0 kubenswrapper[33867]: I0219 03:27:28.103280 33867 scope.go:117] "RemoveContainer" containerID="940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb" Feb 19 03:27:28.103548 master-0 kubenswrapper[33867]: I0219 03:27:28.103511 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb"} err="failed to get container status \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": rpc error: code = NotFound desc = could not find container \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": container with ID starting with 940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb not found: ID does not exist" Feb 19 03:27:28.103548 master-0 kubenswrapper[33867]: I0219 03:27:28.103543 33867 scope.go:117] "RemoveContainer" containerID="00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83" Feb 19 03:27:28.103801 master-0 kubenswrapper[33867]: I0219 03:27:28.103765 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83"} err="failed to get container status \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": rpc error: code = NotFound desc = could not find container \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": container with ID starting with 00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83 not found: ID does not exist" Feb 19 03:27:28.103801 master-0 kubenswrapper[33867]: I0219 03:27:28.103797 33867 scope.go:117] "RemoveContainer" containerID="52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee" Feb 19 03:27:28.104044 master-0 kubenswrapper[33867]: I0219 03:27:28.104006 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee"} err="failed to get container status \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": rpc error: code = NotFound desc = could not find container \"52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee\": container with ID starting with 52c0147aae70f65108010aa5dd79d1985bfdaa73f9d30e4e969ae341ac6729ee not found: ID does not exist" Feb 19 03:27:28.104044 master-0 kubenswrapper[33867]: I0219 03:27:28.104039 33867 scope.go:117] "RemoveContainer" containerID="ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe" Feb 19 03:27:28.104296 master-0 kubenswrapper[33867]: I0219 03:27:28.104236 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe"} err="failed to get container 
status \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": rpc error: code = NotFound desc = could not find container \"ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe\": container with ID starting with ef5d549bc6d650eb63c1e5693c41416811f42c49bf3ee1e78c197ca5cb79eebe not found: ID does not exist" Feb 19 03:27:28.104296 master-0 kubenswrapper[33867]: I0219 03:27:28.104292 33867 scope.go:117] "RemoveContainer" containerID="78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0" Feb 19 03:27:28.104534 master-0 kubenswrapper[33867]: I0219 03:27:28.104498 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0"} err="failed to get container status \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": rpc error: code = NotFound desc = could not find container \"78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0\": container with ID starting with 78531a1c4caabc702d2cb9672be610de9fbf00d55101a24b48201cf13627f4e0 not found: ID does not exist" Feb 19 03:27:28.104534 master-0 kubenswrapper[33867]: I0219 03:27:28.104530 33867 scope.go:117] "RemoveContainer" containerID="1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c" Feb 19 03:27:28.104799 master-0 kubenswrapper[33867]: I0219 03:27:28.104741 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c"} err="failed to get container status \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": rpc error: code = NotFound desc = could not find container \"1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c\": container with ID starting with 1ceb1bfca80b8a9367867f634545e316d87a20302600abd4c07e9f521aa7311c not found: ID does not exist" Feb 19 03:27:28.104799 master-0 kubenswrapper[33867]: I0219 03:27:28.104777 33867 scope.go:117] "RemoveContainer" containerID="ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe" Feb 19 03:27:28.105160 master-0 kubenswrapper[33867]: I0219 03:27:28.104984 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe"} err="failed to get container status \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": rpc error: code = NotFound desc = could not find container \"ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe\": container with ID starting with ff2475525eb909e673c6505dddc100abe05911204746331e7de27b26740060fe not found: ID does not exist" Feb 19 03:27:28.105160 master-0 kubenswrapper[33867]: I0219 03:27:28.105009 33867 scope.go:117] "RemoveContainer" containerID="940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb" Feb 19 03:27:28.105466 master-0 kubenswrapper[33867]: I0219 03:27:28.105199 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb"} err="failed to get container status \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": rpc error: code = NotFound desc = could not find container \"940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb\": container with ID starting with 940fa72642148d803ba8378fc46e4686a9529a61b05783d621bd8387883786fb not found: ID does not exist" Feb 19 
03:27:28.105466 master-0 kubenswrapper[33867]: I0219 03:27:28.105223 33867 scope.go:117] "RemoveContainer" containerID="00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83" Feb 19 03:27:28.105594 master-0 kubenswrapper[33867]: I0219 03:27:28.105499 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83"} err="failed to get container status \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": rpc error: code = NotFound desc = could not find container \"00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83\": container with ID starting with 00f759bd22b151521c9a60368613bae7a9988e4bb0865e2b16b1661136a1fd83 not found: ID does not exist" Feb 19 03:27:28.283090 master-0 kubenswrapper[33867]: I0219 03:27:28.282935 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 03:27:28.288687 master-0 kubenswrapper[33867]: I0219 03:27:28.288635 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 03:27:28.333425 master-0 kubenswrapper[33867]: I0219 03:27:28.333225 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 03:27:28.333860 master-0 kubenswrapper[33867]: E0219 03:27:28.333817 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="thanos-sidecar" Feb 19 03:27:28.333860 master-0 kubenswrapper[33867]: I0219 03:27:28.333852 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="thanos-sidecar" Feb 19 03:27:28.334107 master-0 kubenswrapper[33867]: E0219 03:27:28.333886 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy" Feb 19 03:27:28.334107 master-0 kubenswrapper[33867]: I0219 03:27:28.333905 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy" Feb 19 03:27:28.334107 master-0 kubenswrapper[33867]: E0219 03:27:28.333934 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy-web" Feb 19 03:27:28.334107 master-0 kubenswrapper[33867]: I0219 03:27:28.333950 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy-web" Feb 19 03:27:28.334107 master-0 kubenswrapper[33867]: E0219 03:27:28.333969 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" containerName="metrics-server" Feb 19 03:27:28.334107 master-0 kubenswrapper[33867]: I0219 03:27:28.333984 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" containerName="metrics-server" Feb 19 03:27:28.334107 master-0 kubenswrapper[33867]: E0219 03:27:28.334027 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="config-reloader" Feb 19 03:27:28.334107 master-0 kubenswrapper[33867]: I0219 03:27:28.334043 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="config-reloader" Feb 19 03:27:28.334107 master-0 kubenswrapper[33867]: E0219 03:27:28.334078 33867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="init-config-reloader" Feb 19 03:27:28.334107 master-0 kubenswrapper[33867]: I0219 03:27:28.334094 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="init-config-reloader" Feb 19 03:27:28.335084 master-0 kubenswrapper[33867]: E0219 03:27:28.334127 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy-thanos" Feb 19 03:27:28.335084 master-0 kubenswrapper[33867]: I0219 03:27:28.334144 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy-thanos" Feb 19 03:27:28.335084 master-0 kubenswrapper[33867]: E0219 03:27:28.334186 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="prometheus" Feb 19 03:27:28.335084 master-0 kubenswrapper[33867]: I0219 03:27:28.334201 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="prometheus" Feb 19 03:27:28.335084 master-0 kubenswrapper[33867]: I0219 03:27:28.334531 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" containerName="metrics-server" Feb 19 03:27:28.335084 master-0 kubenswrapper[33867]: I0219 03:27:28.334592 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="thanos-sidecar" Feb 19 03:27:28.335084 master-0 kubenswrapper[33867]: I0219 03:27:28.334645 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="prometheus" Feb 19 03:27:28.335084 master-0 kubenswrapper[33867]: I0219 03:27:28.334673 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy-web" Feb 19 03:27:28.335084 master-0 kubenswrapper[33867]: I0219 03:27:28.334720 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy-thanos" Feb 19 03:27:28.335084 master-0 kubenswrapper[33867]: I0219 03:27:28.334741 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="kube-rbac-proxy" Feb 19 03:27:28.335084 master-0 kubenswrapper[33867]: I0219 03:27:28.334769 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" containerName="config-reloader" Feb 19 03:27:28.339572 master-0 kubenswrapper[33867]: I0219 03:27:28.339494 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.343203 master-0 kubenswrapper[33867]: I0219 03:27:28.343124 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 19 03:27:28.343395 master-0 kubenswrapper[33867]: I0219 03:27:28.343293 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 19 03:27:28.345243 master-0 kubenswrapper[33867]: I0219 03:27:28.345023 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-25h6f" Feb 19 03:27:28.345469 master-0 kubenswrapper[33867]: I0219 03:27:28.345415 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 19 03:27:28.345985 master-0 kubenswrapper[33867]: I0219 03:27:28.345943 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 19 03:27:28.346075 master-0 kubenswrapper[33867]: I0219 03:27:28.345980 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 19 03:27:28.346075 master-0 kubenswrapper[33867]: I0219 03:27:28.346033 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-1e3s0akbul7uf" Feb 19 03:27:28.346277 master-0 kubenswrapper[33867]: I0219 03:27:28.346220 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 19 03:27:28.346503 master-0 kubenswrapper[33867]: I0219 03:27:28.346466 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 19 03:27:28.346709 master-0 kubenswrapper[33867]: I0219 03:27:28.346535 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 19 03:27:28.350417 master-0 kubenswrapper[33867]: I0219 03:27:28.350372 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 19 03:27:28.359981 master-0 kubenswrapper[33867]: I0219 03:27:28.358841 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 19 03:27:28.360726 master-0 kubenswrapper[33867]: I0219 03:27:28.360691 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 19 03:27:28.366504 master-0 kubenswrapper[33867]: I0219 03:27:28.366438 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 03:27:28.507472 master-0 kubenswrapper[33867]: I0219 03:27:28.507420 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.507702 master-0 kubenswrapper[33867]: I0219 03:27:28.507554 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6t4z\" (UniqueName: \"kubernetes.io/projected/9b569743-a475-4bd4-aba2-c4d14f8b82f0-kube-api-access-l6t4z\") pod 
\"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.507702 master-0 kubenswrapper[33867]: I0219 03:27:28.507611 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.507702 master-0 kubenswrapper[33867]: I0219 03:27:28.507651 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.507834 master-0 kubenswrapper[33867]: I0219 03:27:28.507785 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.507891 master-0 kubenswrapper[33867]: I0219 03:27:28.507834 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.507975 master-0 kubenswrapper[33867]: I0219 03:27:28.507934 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.508048 master-0 kubenswrapper[33867]: I0219 03:27:28.507981 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.508048 master-0 kubenswrapper[33867]: I0219 03:27:28.508015 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b569743-a475-4bd4-aba2-c4d14f8b82f0-config-out\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.508048 master-0 kubenswrapper[33867]: I0219 03:27:28.508041 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9b569743-a475-4bd4-aba2-c4d14f8b82f0-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.508225 master-0 kubenswrapper[33867]: I0219 03:27:28.508099 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b569743-a475-4bd4-aba2-c4d14f8b82f0-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.508225 master-0 kubenswrapper[33867]: I0219 03:27:28.508166 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.508225 master-0 kubenswrapper[33867]: I0219 03:27:28.508200 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-web-config\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.508455 master-0 kubenswrapper[33867]: I0219 03:27:28.508227 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.508455 master-0 kubenswrapper[33867]: I0219 03:27:28.508311 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.508455 master-0 kubenswrapper[33867]: I0219 03:27:28.508372 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-config\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.508455 master-0 kubenswrapper[33867]: I0219 03:27:28.508394 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.508455 master-0 kubenswrapper[33867]: I0219 03:27:28.508424 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609510 33867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609572 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609628 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-config\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609659 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609692 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609753 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609790 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6t4z\" (UniqueName: \"kubernetes.io/projected/9b569743-a475-4bd4-aba2-c4d14f8b82f0-kube-api-access-l6t4z\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609816 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609846 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" 
(UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609897 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609927 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609962 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.609984 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.610055 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b569743-a475-4bd4-aba2-c4d14f8b82f0-config-out\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.610087 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9b569743-a475-4bd4-aba2-c4d14f8b82f0-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611174 master-0 kubenswrapper[33867]: I0219 03:27:28.610119 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b569743-a475-4bd4-aba2-c4d14f8b82f0-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611846 master-0 kubenswrapper[33867]: I0219 03:27:28.611657 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611846 master-0 kubenswrapper[33867]: I0219 03:27:28.611734 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-web-config\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611846 master-0 kubenswrapper[33867]: I0219 03:27:28.611777 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611846 master-0 kubenswrapper[33867]: I0219 03:27:28.611798 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.611846 master-0 kubenswrapper[33867]: I0219 03:27:28.611792 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9b569743-a475-4bd4-aba2-c4d14f8b82f0-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.612778 master-0 kubenswrapper[33867]: I0219 03:27:28.612742 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.613558 master-0 kubenswrapper[33867]: I0219 03:27:28.613520 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b569743-a475-4bd4-aba2-c4d14f8b82f0-config-out\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.613805 master-0 kubenswrapper[33867]: I0219 03:27:28.613762 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.613944 master-0 kubenswrapper[33867]: I0219 03:27:28.613886 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.616646 master-0 kubenswrapper[33867]: I0219 03:27:28.615765 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.617038 master-0 kubenswrapper[33867]: I0219 03:27:28.616722 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-config\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.617038 master-0 kubenswrapper[33867]: I0219 03:27:28.616875 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.619605 master-0 kubenswrapper[33867]: I0219 03:27:28.617606 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-web-config\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.619605 master-0 kubenswrapper[33867]: I0219 03:27:28.618775 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b569743-a475-4bd4-aba2-c4d14f8b82f0-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.619605 master-0 kubenswrapper[33867]: I0219 03:27:28.618913 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.621320 master-0 kubenswrapper[33867]: I0219 03:27:28.620410 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.621320 master-0 kubenswrapper[33867]: I0219 03:27:28.620497 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.621320 master-0 kubenswrapper[33867]: I0219 03:27:28.620814 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b569743-a475-4bd4-aba2-c4d14f8b82f0-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.622753 master-0 kubenswrapper[33867]: I0219 03:27:28.622720 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9b569743-a475-4bd4-aba2-c4d14f8b82f0-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.628130 master-0 kubenswrapper[33867]: I0219 03:27:28.628084 33867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l6t4z\" (UniqueName: \"kubernetes.io/projected/9b569743-a475-4bd4-aba2-c4d14f8b82f0-kube-api-access-l6t4z\") pod \"prometheus-k8s-0\" (UID: \"9b569743-a475-4bd4-aba2-c4d14f8b82f0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.692856 master-0 kubenswrapper[33867]: I0219 03:27:28.692779 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:28.980828 master-0 kubenswrapper[33867]: I0219 03:27:28.980759 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22370ccf-c383-4c1e-96f2-b5c61bb0cebe" path="/var/lib/kubelet/pods/22370ccf-c383-4c1e-96f2-b5c61bb0cebe/volumes" Feb 19 03:27:28.985596 master-0 kubenswrapper[33867]: E0219 03:27:28.984598 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:27:28.988841 master-0 kubenswrapper[33867]: I0219 03:27:28.985646 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a1a372-6b54-4903-a7de-cce85bd4c904" path="/var/lib/kubelet/pods/67a1a372-6b54-4903-a7de-cce85bd4c904/volumes" Feb 19 03:27:29.059819 master-0 kubenswrapper[33867]: I0219 03:27:29.057012 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-586d7bfb96-dg45z"] Feb 19 03:27:29.097553 master-0 kubenswrapper[33867]: I0219 03:27:29.096671 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-64f8f69b7-bnncp"] Feb 19 03:27:29.097778 master-0 kubenswrapper[33867]: I0219 03:27:29.097713 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.121763 master-0 kubenswrapper[33867]: I0219 03:27:29.118635 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64f8f69b7-bnncp"] Feb 19 03:27:29.172584 master-0 kubenswrapper[33867]: I0219 03:27:29.172455 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 03:27:29.179513 master-0 kubenswrapper[33867]: W0219 03:27:29.179439 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b569743_a475_4bd4_aba2_c4d14f8b82f0.slice/crio-fd9a66e3a6e92b9963597472d47bcdbfacdeaf1fe610e7e9592f4ab7e82a2bab WatchSource:0}: Error finding container fd9a66e3a6e92b9963597472d47bcdbfacdeaf1fe610e7e9592f4ab7e82a2bab: Status 404 returned error can't find the container with id fd9a66e3a6e92b9963597472d47bcdbfacdeaf1fe610e7e9592f4ab7e82a2bab Feb 19 03:27:29.221990 master-0 kubenswrapper[33867]: I0219 03:27:29.221940 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-console-config\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.222146 master-0 kubenswrapper[33867]: I0219 03:27:29.222070 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-oauth-serving-cert\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.222297 master-0 kubenswrapper[33867]: I0219 03:27:29.222252 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-trusted-ca-bundle\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.222346 master-0 kubenswrapper[33867]: I0219 03:27:29.222320 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-serving-cert\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.222390 master-0 kubenswrapper[33867]: I0219 03:27:29.222381 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rxdv\" (UniqueName: \"kubernetes.io/projected/88c5b877-feea-49a3-b528-c24d46500a36-kube-api-access-4rxdv\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.222458 master-0 kubenswrapper[33867]: I0219 03:27:29.222436 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-service-ca\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 
03:27:29.222598 master-0 kubenswrapper[33867]: I0219 03:27:29.222579 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-oauth-config\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.325242 master-0 kubenswrapper[33867]: I0219 03:27:29.325148 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-console-config\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.325422 master-0 kubenswrapper[33867]: I0219 03:27:29.325388 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-oauth-serving-cert\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.325504 master-0 kubenswrapper[33867]: I0219 03:27:29.325450 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-trusted-ca-bundle\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.325504 master-0 kubenswrapper[33867]: I0219 03:27:29.325474 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-serving-cert\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.325504 master-0 kubenswrapper[33867]: I0219 03:27:29.325504 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rxdv\" (UniqueName: \"kubernetes.io/projected/88c5b877-feea-49a3-b528-c24d46500a36-kube-api-access-4rxdv\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.325696 master-0 kubenswrapper[33867]: I0219 03:27:29.325537 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-service-ca\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.325696 master-0 kubenswrapper[33867]: I0219 03:27:29.325592 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-oauth-config\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.326638 master-0 kubenswrapper[33867]: I0219 03:27:29.326580 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-oauth-serving-cert\") pod 
\"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.327323 master-0 kubenswrapper[33867]: I0219 03:27:29.327275 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-trusted-ca-bundle\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.327856 master-0 kubenswrapper[33867]: I0219 03:27:29.327725 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-service-ca\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.327946 master-0 kubenswrapper[33867]: I0219 03:27:29.327901 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-console-config\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.332353 master-0 kubenswrapper[33867]: I0219 03:27:29.332318 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-serving-cert\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.334241 master-0 kubenswrapper[33867]: I0219 03:27:29.334165 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-oauth-config\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.353671 master-0 kubenswrapper[33867]: I0219 03:27:29.353615 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rxdv\" (UniqueName: \"kubernetes.io/projected/88c5b877-feea-49a3-b528-c24d46500a36-kube-api-access-4rxdv\") pod \"console-64f8f69b7-bnncp\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.425510 master-0 kubenswrapper[33867]: I0219 03:27:29.425383 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:29.854766 master-0 kubenswrapper[33867]: I0219 03:27:29.854600 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64f8f69b7-bnncp"] Feb 19 03:27:29.867496 master-0 kubenswrapper[33867]: W0219 03:27:29.867415 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c5b877_feea_49a3_b528_c24d46500a36.slice/crio-94aefd7b1ea0ac892a63a0725c225f1002534797f4efac47f9a65eb4865b86f8 WatchSource:0}: Error finding container 94aefd7b1ea0ac892a63a0725c225f1002534797f4efac47f9a65eb4865b86f8: Status 404 returned error can't find the container with id 94aefd7b1ea0ac892a63a0725c225f1002534797f4efac47f9a65eb4865b86f8 Feb 19 03:27:29.873560 master-0 kubenswrapper[33867]: I0219 03:27:29.873489 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:29.952585 master-0 kubenswrapper[33867]: I0219 03:27:29.952497 33867 generic.go:334] "Generic (PLEG): container finished" podID="9b569743-a475-4bd4-aba2-c4d14f8b82f0" containerID="e3dcd3b67e64e2d687fe0c202d8819b1ca5108973c53b1021a439842c4bf89b6" exitCode=0 Feb 19 03:27:29.952782 master-0 kubenswrapper[33867]: I0219 03:27:29.952591 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b569743-a475-4bd4-aba2-c4d14f8b82f0","Type":"ContainerDied","Data":"e3dcd3b67e64e2d687fe0c202d8819b1ca5108973c53b1021a439842c4bf89b6"} Feb 19 03:27:29.952782 master-0 kubenswrapper[33867]: I0219 03:27:29.952638 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b569743-a475-4bd4-aba2-c4d14f8b82f0","Type":"ContainerStarted","Data":"fd9a66e3a6e92b9963597472d47bcdbfacdeaf1fe610e7e9592f4ab7e82a2bab"} Feb 19 03:27:29.954366 master-0 kubenswrapper[33867]: I0219 03:27:29.954305 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64f8f69b7-bnncp" event={"ID":"88c5b877-feea-49a3-b528-c24d46500a36","Type":"ContainerStarted","Data":"94aefd7b1ea0ac892a63a0725c225f1002534797f4efac47f9a65eb4865b86f8"} Feb 19 03:27:30.965485 master-0 kubenswrapper[33867]: I0219 03:27:30.965395 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64f8f69b7-bnncp" event={"ID":"88c5b877-feea-49a3-b528-c24d46500a36","Type":"ContainerStarted","Data":"30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023"} Feb 19 03:27:30.971400 master-0 kubenswrapper[33867]: I0219 03:27:30.971355 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b569743-a475-4bd4-aba2-c4d14f8b82f0","Type":"ContainerStarted","Data":"1759e861656f80989e6e5118192b5124761746f7f0f56e56552015417e7060a9"} Feb 19 03:27:30.971400 master-0 kubenswrapper[33867]: I0219 03:27:30.971395 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b569743-a475-4bd4-aba2-c4d14f8b82f0","Type":"ContainerStarted","Data":"542b7ccacbf8a08cd732a8e9e5ad5e6498f4ab1939a1d1c21473e2280d0b895a"} Feb 19 03:27:30.971400 master-0 kubenswrapper[33867]: I0219 03:27:30.971405 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"9b569743-a475-4bd4-aba2-c4d14f8b82f0","Type":"ContainerStarted","Data":"947975b2e84aa41e4842b491c8ea70ef6859760193e91b0c17d3f6905724f2ec"} Feb 19 03:27:30.971653 master-0 kubenswrapper[33867]: I0219 03:27:30.971415 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b569743-a475-4bd4-aba2-c4d14f8b82f0","Type":"ContainerStarted","Data":"ba28edd4d3f9f1b9d994e3e6728a740e24e13c045e0be37891ede463e2b4e7db"} Feb 19 03:27:30.971653 master-0 kubenswrapper[33867]: I0219 03:27:30.971426 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b569743-a475-4bd4-aba2-c4d14f8b82f0","Type":"ContainerStarted","Data":"9291b73284dc75d9fd96cca51ca77f9cb4e34fc02f2b098013d5753a04d3da00"} Feb 19 03:27:30.971653 master-0 kubenswrapper[33867]: I0219 03:27:30.971437 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b569743-a475-4bd4-aba2-c4d14f8b82f0","Type":"ContainerStarted","Data":"5bfe8c2e388a8cbf425828618e2731933939f5f8158e5045b3b165deabc75811"} Feb 19 03:27:30.993785 master-0 kubenswrapper[33867]: I0219 03:27:30.993637 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64f8f69b7-bnncp" podStartSLOduration=1.993582315 podStartE2EDuration="1.993582315s" podCreationTimestamp="2026-02-19 03:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:27:30.989046408 +0000 UTC m=+256.285717019" watchObservedRunningTime="2026-02-19 03:27:30.993582315 +0000 UTC m=+256.290253016" Feb 19 03:27:31.028245 master-0 kubenswrapper[33867]: I0219 03:27:31.028166 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.028143324 podStartE2EDuration="3.028143324s" podCreationTimestamp="2026-02-19 03:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:27:31.026619401 +0000 UTC m=+256.323290022" watchObservedRunningTime="2026-02-19 03:27:31.028143324 +0000 UTC m=+256.324813935" Feb 19 03:27:31.070515 master-0 kubenswrapper[33867]: I0219 03:27:31.070453 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:31.070766 master-0 kubenswrapper[33867]: I0219 03:27:31.070592 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:27:31.073148 master-0 kubenswrapper[33867]: I0219 03:27:31.073107 33867 patch_prober.go:28] interesting pod/console-84d59b44c5-nczqx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" start-of-body= Feb 19 03:27:31.073208 master-0 kubenswrapper[33867]: I0219 03:27:31.073158 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84d59b44c5-nczqx" podUID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" Feb 19 03:27:33.693712 master-0 kubenswrapper[33867]: I0219 03:27:33.693643 33867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:27:37.668602 master-0 kubenswrapper[33867]: E0219 03:27:37.667319 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:27:37.668602 master-0 kubenswrapper[33867]: E0219 03:27:37.667313 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:27:38.468772 master-0 kubenswrapper[33867]: E0219 03:27:38.468700 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:27:39.036228 master-0 kubenswrapper[33867]: E0219 03:27:39.036161 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89199d30_e6ec_4748_80d2_9edaf1b3dfc9.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:27:39.424759 master-0 kubenswrapper[33867]: I0219 03:27:39.424676 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:39.424759 master-0 kubenswrapper[33867]: I0219 03:27:39.424749 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:27:39.427369 master-0 kubenswrapper[33867]: I0219 03:27:39.427289 33867 patch_prober.go:28] interesting pod/console-64f8f69b7-bnncp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.114:8443/health\": dial tcp 10.128.0.114:8443: connect: connection refused" start-of-body= Feb 19 03:27:39.427369 master-0 kubenswrapper[33867]: I0219 03:27:39.427342 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-64f8f69b7-bnncp" podUID="88c5b877-feea-49a3-b528-c24d46500a36" containerName="console" probeResult="failure" output="Get \"https://10.128.0.114:8443/health\": dial tcp 10.128.0.114:8443: connect: connection refused" Feb 19 03:27:41.071570 master-0 kubenswrapper[33867]: I0219 03:27:41.071478 33867 patch_prober.go:28] interesting pod/console-84d59b44c5-nczqx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" start-of-body= Feb 19 03:27:41.071570 master-0 kubenswrapper[33867]: I0219 03:27:41.071561 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84d59b44c5-nczqx" podUID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" Feb 19 03:27:44.492097 master-0 kubenswrapper[33867]: I0219 03:27:44.491997 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-677f65b5df-p8qrj" 
podUID="e376877b-f5c6-4a73-a959-cde9c466252a" containerName="console" containerID="cri-o://fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c" gracePeriod=15 Feb 19 03:27:44.517376 master-0 kubenswrapper[33867]: I0219 03:27:44.517328 33867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 19 03:27:44.517812 master-0 kubenswrapper[33867]: I0219 03:27:44.517730 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" containerID="cri-o://d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db" gracePeriod=15 Feb 19 03:27:44.517911 master-0 kubenswrapper[33867]: I0219 03:27:44.517812 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc" gracePeriod=15 Feb 19 03:27:44.517984 master-0 kubenswrapper[33867]: I0219 03:27:44.517918 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90" gracePeriod=15 Feb 19 03:27:44.518054 master-0 kubenswrapper[33867]: I0219 03:27:44.517830 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee" gracePeriod=15 Feb 19 03:27:44.518122 master-0 kubenswrapper[33867]: I0219 03:27:44.517918 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377" gracePeriod=15 Feb 19 03:27:44.521402 master-0 kubenswrapper[33867]: I0219 03:27:44.521005 33867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 19 03:27:44.521756 master-0 kubenswrapper[33867]: E0219 03:27:44.521720 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="setup" Feb 19 03:27:44.521756 master-0 kubenswrapper[33867]: I0219 03:27:44.521744 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="setup" Feb 19 03:27:44.521863 master-0 kubenswrapper[33867]: E0219 03:27:44.521779 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 19 03:27:44.521863 master-0 kubenswrapper[33867]: I0219 03:27:44.521791 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 19 03:27:44.521863 master-0 kubenswrapper[33867]: E0219 03:27:44.521805 33867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 03:27:44.521863 master-0 kubenswrapper[33867]: I0219 03:27:44.521813 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 03:27:44.521863 master-0 kubenswrapper[33867]: E0219 03:27:44.521825 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" Feb 19 03:27:44.521863 master-0 kubenswrapper[33867]: I0219 03:27:44.521837 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" Feb 19 03:27:44.521863 master-0 kubenswrapper[33867]: E0219 03:27:44.521847 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" Feb 19 03:27:44.521863 master-0 kubenswrapper[33867]: I0219 03:27:44.521856 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" Feb 19 03:27:44.522427 master-0 kubenswrapper[33867]: E0219 03:27:44.521884 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" Feb 19 03:27:44.522427 master-0 kubenswrapper[33867]: I0219 03:27:44.521892 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" Feb 19 03:27:44.522427 master-0 kubenswrapper[33867]: I0219 03:27:44.522047 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" Feb 19 03:27:44.522427 master-0 kubenswrapper[33867]: I0219 03:27:44.522063 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 19 03:27:44.522427 master-0 kubenswrapper[33867]: I0219 03:27:44.522077 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 19 03:27:44.522427 master-0 kubenswrapper[33867]: I0219 03:27:44.522097 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" Feb 19 03:27:44.522694 master-0 kubenswrapper[33867]: I0219 03:27:44.522127 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" Feb 19 03:27:44.525822 master-0 kubenswrapper[33867]: I0219 03:27:44.525767 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 03:27:44.526311 master-0 kubenswrapper[33867]: E0219 03:27:44.526178 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 19 03:27:44.526311 master-0 kubenswrapper[33867]: I0219 03:27:44.526201 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 19 03:27:44.529245 master-0 kubenswrapper[33867]: I0219 03:27:44.528885 33867 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 19 03:27:44.530194 master-0 kubenswrapper[33867]: I0219 03:27:44.530144 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.538842 master-0 kubenswrapper[33867]: I0219 03:27:44.538771 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="eb342c942d3d92fd08ed7cf68fafb94c" podUID="57aa038311da35c3e4d00e227853e6b4" Feb 19 03:27:44.676483 master-0 kubenswrapper[33867]: E0219 03:27:44.676428 33867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.712172 master-0 kubenswrapper[33867]: I0219 03:27:44.712076 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.712172 master-0 kubenswrapper[33867]: I0219 03:27:44.712131 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57aa038311da35c3e4d00e227853e6b4-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"57aa038311da35c3e4d00e227853e6b4\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:44.712172 master-0 kubenswrapper[33867]: I0219 03:27:44.712166 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57aa038311da35c3e4d00e227853e6b4-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"57aa038311da35c3e4d00e227853e6b4\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:44.712172 master-0 kubenswrapper[33867]: I0219 03:27:44.712204 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.712680 master-0 kubenswrapper[33867]: I0219 03:27:44.712303 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.712680 master-0 kubenswrapper[33867]: I0219 03:27:44.712332 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.712680 master-0 kubenswrapper[33867]: I0219 03:27:44.712373 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.712680 master-0 kubenswrapper[33867]: I0219 03:27:44.712426 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57aa038311da35c3e4d00e227853e6b4-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"57aa038311da35c3e4d00e227853e6b4\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:44.728355 master-0 kubenswrapper[33867]: E0219 03:27:44.726727 33867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:44.728355 master-0 kubenswrapper[33867]: E0219 03:27:44.727311 33867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:44.728355 master-0 kubenswrapper[33867]: E0219 03:27:44.727807 33867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:44.728355 master-0 kubenswrapper[33867]: E0219 03:27:44.728333 33867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:44.728855 master-0 kubenswrapper[33867]: E0219 03:27:44.728814 33867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:44.728904 master-0 kubenswrapper[33867]: I0219 03:27:44.728853 33867 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 03:27:44.729347 master-0 kubenswrapper[33867]: E0219 03:27:44.729304 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 19 03:27:44.814386 master-0 kubenswrapper[33867]: I0219 03:27:44.814200 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.814386 master-0 
kubenswrapper[33867]: I0219 03:27:44.814250 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57aa038311da35c3e4d00e227853e6b4-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"57aa038311da35c3e4d00e227853e6b4\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:44.814386 master-0 kubenswrapper[33867]: I0219 03:27:44.814304 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57aa038311da35c3e4d00e227853e6b4-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"57aa038311da35c3e4d00e227853e6b4\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:44.814386 master-0 kubenswrapper[33867]: I0219 03:27:44.814364 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.814386 master-0 kubenswrapper[33867]: I0219 03:27:44.814385 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57aa038311da35c3e4d00e227853e6b4-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"57aa038311da35c3e4d00e227853e6b4\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:44.815066 master-0 kubenswrapper[33867]: I0219 03:27:44.814441 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.815066 master-0 kubenswrapper[33867]: I0219 03:27:44.814468 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.815066 master-0 kubenswrapper[33867]: I0219 03:27:44.814478 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57aa038311da35c3e4d00e227853e6b4-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"57aa038311da35c3e4d00e227853e6b4\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:44.815066 master-0 kubenswrapper[33867]: I0219 03:27:44.814509 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.815066 master-0 kubenswrapper[33867]: I0219 03:27:44.814515 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.815066 master-0 kubenswrapper[33867]: I0219 03:27:44.814533 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57aa038311da35c3e4d00e227853e6b4-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"57aa038311da35c3e4d00e227853e6b4\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:44.815066 master-0 kubenswrapper[33867]: I0219 03:27:44.814527 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.815066 master-0 kubenswrapper[33867]: I0219 03:27:44.814585 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.815066 master-0 kubenswrapper[33867]: I0219 03:27:44.814593 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.815066 master-0 kubenswrapper[33867]: I0219 03:27:44.814610 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57aa038311da35c3e4d00e227853e6b4-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"57aa038311da35c3e4d00e227853e6b4\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:44.815066 master-0 kubenswrapper[33867]: I0219 03:27:44.814625 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:44.931027 master-0 kubenswrapper[33867]: E0219 03:27:44.930938 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 19 03:27:44.977813 master-0 kubenswrapper[33867]: I0219 03:27:44.977690 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:45.025376 master-0 kubenswrapper[33867]: I0219 03:27:45.025330 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-677f65b5df-p8qrj_e376877b-f5c6-4a73-a959-cde9c466252a/console/0.log" Feb 19 03:27:45.025542 master-0 kubenswrapper[33867]: I0219 03:27:45.025397 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:27:45.026652 master-0 kubenswrapper[33867]: I0219 03:27:45.026596 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:45.031490 master-0 kubenswrapper[33867]: W0219 03:27:45.031413 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f9e181fcd823e864851fdb74fd8d37.slice/crio-4442928d0a2f849ef075b5b237fcb02fba382bcbd25129bc9650ff90f0689c27 WatchSource:0}: Error finding container 4442928d0a2f849ef075b5b237fcb02fba382bcbd25129bc9650ff90f0689c27: Status 404 returned error can't find the container with id 4442928d0a2f849ef075b5b237fcb02fba382bcbd25129bc9650ff90f0689c27 Feb 19 03:27:45.036510 master-0 kubenswrapper[33867]: E0219 03:27:45.036323 33867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.18958817fc19b917 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:62f9e181fcd823e864851fdb74fd8d37,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:27:45.035327767 +0000 UTC m=+270.331998418,LastTimestamp:2026-02-19 03:27:45.035327767 +0000 UTC m=+270.331998418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:27:45.099508 master-0 kubenswrapper[33867]: I0219 03:27:45.099463 33867 generic.go:334] "Generic (PLEG): container finished" podID="a7adce7b-f079-455e-8377-84c40cfc2557" containerID="fb7cb4ae99e8de98e0d3080008a103708808bdb27e92225dfed5168dfffc810f" exitCode=0 Feb 19 03:27:45.099649 master-0 kubenswrapper[33867]: I0219 03:27:45.099527 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"a7adce7b-f079-455e-8377-84c40cfc2557","Type":"ContainerDied","Data":"fb7cb4ae99e8de98e0d3080008a103708808bdb27e92225dfed5168dfffc810f"} Feb 19 03:27:45.101096 master-0 kubenswrapper[33867]: I0219 03:27:45.101056 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:45.101897 master-0 kubenswrapper[33867]: I0219 03:27:45.101841 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:45.102017 master-0 kubenswrapper[33867]: I0219 03:27:45.101896 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-677f65b5df-p8qrj_e376877b-f5c6-4a73-a959-cde9c466252a/console/0.log" Feb 19 03:27:45.102165 master-0 kubenswrapper[33867]: I0219 03:27:45.102142 33867 generic.go:334] "Generic (PLEG): container finished" podID="e376877b-f5c6-4a73-a959-cde9c466252a" containerID="fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c" exitCode=2 Feb 19 03:27:45.102356 master-0 kubenswrapper[33867]: I0219 03:27:45.102175 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-677f65b5df-p8qrj" Feb 19 03:27:45.102468 master-0 kubenswrapper[33867]: I0219 03:27:45.102188 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-677f65b5df-p8qrj" event={"ID":"e376877b-f5c6-4a73-a959-cde9c466252a","Type":"ContainerDied","Data":"fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c"} Feb 19 03:27:45.102583 master-0 kubenswrapper[33867]: I0219 03:27:45.102535 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-677f65b5df-p8qrj" event={"ID":"e376877b-f5c6-4a73-a959-cde9c466252a","Type":"ContainerDied","Data":"a0089720bb00eccd65042b4f592ae5d2fdd2d08c6dfab13c05bbca8f8764d382"} Feb 19 03:27:45.102658 master-0 kubenswrapper[33867]: I0219 03:27:45.102609 33867 scope.go:117] "RemoveContainer" containerID="fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c" Feb 19 03:27:45.103839 master-0 kubenswrapper[33867]: I0219 03:27:45.103711 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:45.105136 master-0 kubenswrapper[33867]: I0219 03:27:45.105088 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:45.105203 master-0 kubenswrapper[33867]: I0219 03:27:45.105145 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"62f9e181fcd823e864851fdb74fd8d37","Type":"ContainerStarted","Data":"4442928d0a2f849ef075b5b237fcb02fba382bcbd25129bc9650ff90f0689c27"} Feb 19 03:27:45.108695 master-0 kubenswrapper[33867]: I0219 03:27:45.108656 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/0.log" Feb 19 03:27:45.112075 master-0 kubenswrapper[33867]: I0219 03:27:45.112029 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 19 03:27:45.113300 master-0 kubenswrapper[33867]: I0219 03:27:45.113194 33867 generic.go:334] "Generic (PLEG): container 
finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc" exitCode=0 Feb 19 03:27:45.113300 master-0 kubenswrapper[33867]: I0219 03:27:45.113230 33867 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90" exitCode=0 Feb 19 03:27:45.113300 master-0 kubenswrapper[33867]: I0219 03:27:45.113242 33867 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee" exitCode=0 Feb 19 03:27:45.113300 master-0 kubenswrapper[33867]: I0219 03:27:45.113255 33867 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377" exitCode=2 Feb 19 03:27:45.120298 master-0 kubenswrapper[33867]: I0219 03:27:45.118944 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-console-config\") pod \"e376877b-f5c6-4a73-a959-cde9c466252a\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " Feb 19 03:27:45.120298 master-0 kubenswrapper[33867]: I0219 03:27:45.118995 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-service-ca\") pod \"e376877b-f5c6-4a73-a959-cde9c466252a\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " Feb 19 03:27:45.120298 master-0 kubenswrapper[33867]: I0219 03:27:45.119042 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-oauth-serving-cert\") pod \"e376877b-f5c6-4a73-a959-cde9c466252a\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " Feb 19 03:27:45.120298 master-0 kubenswrapper[33867]: I0219 03:27:45.119192 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-trusted-ca-bundle\") pod \"e376877b-f5c6-4a73-a959-cde9c466252a\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " Feb 19 03:27:45.120298 master-0 kubenswrapper[33867]: I0219 03:27:45.119291 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lnmx\" (UniqueName: \"kubernetes.io/projected/e376877b-f5c6-4a73-a959-cde9c466252a-kube-api-access-9lnmx\") pod \"e376877b-f5c6-4a73-a959-cde9c466252a\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " Feb 19 03:27:45.120298 master-0 kubenswrapper[33867]: I0219 03:27:45.119384 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-serving-cert\") pod \"e376877b-f5c6-4a73-a959-cde9c466252a\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " Feb 19 03:27:45.120298 master-0 kubenswrapper[33867]: I0219 03:27:45.119424 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-oauth-config\") pod \"e376877b-f5c6-4a73-a959-cde9c466252a\" (UID: \"e376877b-f5c6-4a73-a959-cde9c466252a\") " 
Feb 19 03:27:45.120298 master-0 kubenswrapper[33867]: I0219 03:27:45.119670 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-console-config" (OuterVolumeSpecName: "console-config") pod "e376877b-f5c6-4a73-a959-cde9c466252a" (UID: "e376877b-f5c6-4a73-a959-cde9c466252a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:45.120298 master-0 kubenswrapper[33867]: I0219 03:27:45.119681 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-service-ca" (OuterVolumeSpecName: "service-ca") pod "e376877b-f5c6-4a73-a959-cde9c466252a" (UID: "e376877b-f5c6-4a73-a959-cde9c466252a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:45.120585 master-0 kubenswrapper[33867]: I0219 03:27:45.120450 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e376877b-f5c6-4a73-a959-cde9c466252a" (UID: "e376877b-f5c6-4a73-a959-cde9c466252a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:45.120622 master-0 kubenswrapper[33867]: I0219 03:27:45.120591 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e376877b-f5c6-4a73-a959-cde9c466252a" (UID: "e376877b-f5c6-4a73-a959-cde9c466252a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:45.121077 master-0 kubenswrapper[33867]: I0219 03:27:45.121041 33867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:45.121181 master-0 kubenswrapper[33867]: I0219 03:27:45.121165 33867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-console-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:45.121254 master-0 kubenswrapper[33867]: I0219 03:27:45.121239 33867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:45.121341 master-0 kubenswrapper[33867]: I0219 03:27:45.121328 33867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e376877b-f5c6-4a73-a959-cde9c466252a-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:45.122507 master-0 kubenswrapper[33867]: I0219 03:27:45.122464 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e376877b-f5c6-4a73-a959-cde9c466252a" (UID: "e376877b-f5c6-4a73-a959-cde9c466252a"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:45.123719 master-0 kubenswrapper[33867]: I0219 03:27:45.123672 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e376877b-f5c6-4a73-a959-cde9c466252a" (UID: "e376877b-f5c6-4a73-a959-cde9c466252a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:45.124947 master-0 kubenswrapper[33867]: I0219 03:27:45.124889 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e376877b-f5c6-4a73-a959-cde9c466252a-kube-api-access-9lnmx" (OuterVolumeSpecName: "kube-api-access-9lnmx") pod "e376877b-f5c6-4a73-a959-cde9c466252a" (UID: "e376877b-f5c6-4a73-a959-cde9c466252a"). InnerVolumeSpecName "kube-api-access-9lnmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:27:45.133807 master-0 kubenswrapper[33867]: I0219 03:27:45.133764 33867 scope.go:117] "RemoveContainer" containerID="fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c" Feb 19 03:27:45.134884 master-0 kubenswrapper[33867]: E0219 03:27:45.134671 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c\": container with ID starting with fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c not found: ID does not exist" containerID="fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c" Feb 19 03:27:45.134884 master-0 kubenswrapper[33867]: I0219 03:27:45.134720 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c"} err="failed to get container status \"fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c\": rpc error: code = NotFound desc = could not find container \"fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c\": container with ID starting with fc9a43a2a247e831682868defb57716a93ab4a1310d8566dbf28223104b48c5c not found: ID does not exist" Feb 19 03:27:45.134884 master-0 kubenswrapper[33867]: I0219 03:27:45.134746 33867 scope.go:117] "RemoveContainer" containerID="70954c340299c804b789bfe49633d92c735fcd40dd36aa25a4a746ddc654f917" Feb 19 03:27:45.222918 master-0 kubenswrapper[33867]: I0219 03:27:45.222810 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lnmx\" (UniqueName: \"kubernetes.io/projected/e376877b-f5c6-4a73-a959-cde9c466252a-kube-api-access-9lnmx\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:45.222918 master-0 kubenswrapper[33867]: I0219 03:27:45.222884 33867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:45.222918 master-0 kubenswrapper[33867]: I0219 03:27:45.222895 33867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e376877b-f5c6-4a73-a959-cde9c466252a-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:45.332689 master-0 kubenswrapper[33867]: E0219 03:27:45.332616 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 19 03:27:45.424024 master-0 kubenswrapper[33867]: I0219 03:27:45.423946 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:45.424808 master-0 kubenswrapper[33867]: I0219 03:27:45.424756 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:45.727047 master-0 kubenswrapper[33867]: I0219 03:27:45.726795 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b9ffbb744-xzn8r" podUID="a34af636-294e-431e-b676-6d059a537a5b" containerName="console" containerID="cri-o://1507f3301a489c41c7f28d7a3a64ce252dad3d07f1f5f8d438e4f999db94eda9" gracePeriod=15 Feb 19 03:27:46.134059 master-0 kubenswrapper[33867]: E0219 03:27:46.133988 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 19 03:27:46.135215 master-0 kubenswrapper[33867]: I0219 03:27:46.135181 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 19 03:27:46.138866 master-0 kubenswrapper[33867]: I0219 03:27:46.138822 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b9ffbb744-xzn8r_a34af636-294e-431e-b676-6d059a537a5b/console/0.log" Feb 19 03:27:46.138942 master-0 kubenswrapper[33867]: I0219 03:27:46.138865 33867 generic.go:334] "Generic (PLEG): container finished" podID="a34af636-294e-431e-b676-6d059a537a5b" containerID="1507f3301a489c41c7f28d7a3a64ce252dad3d07f1f5f8d438e4f999db94eda9" exitCode=2 Feb 19 03:27:46.138942 master-0 kubenswrapper[33867]: I0219 03:27:46.138924 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b9ffbb744-xzn8r" event={"ID":"a34af636-294e-431e-b676-6d059a537a5b","Type":"ContainerDied","Data":"1507f3301a489c41c7f28d7a3a64ce252dad3d07f1f5f8d438e4f999db94eda9"} Feb 19 03:27:46.141994 master-0 kubenswrapper[33867]: I0219 03:27:46.141732 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"62f9e181fcd823e864851fdb74fd8d37","Type":"ContainerStarted","Data":"a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18"} Feb 19 03:27:46.143539 master-0 kubenswrapper[33867]: E0219 03:27:46.142734 33867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:46.143539 master-0 kubenswrapper[33867]: I0219 03:27:46.142958 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:46.143737 master-0 kubenswrapper[33867]: I0219 03:27:46.143662 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:46.331271 master-0 kubenswrapper[33867]: I0219 03:27:46.331198 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b9ffbb744-xzn8r_a34af636-294e-431e-b676-6d059a537a5b/console/0.log" Feb 19 03:27:46.331477 master-0 kubenswrapper[33867]: I0219 03:27:46.331294 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:27:46.332286 master-0 kubenswrapper[33867]: I0219 03:27:46.332226 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:46.332864 master-0 kubenswrapper[33867]: I0219 03:27:46.332806 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:46.333315 master-0 kubenswrapper[33867]: I0219 03:27:46.333276 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:46.444824 master-0 kubenswrapper[33867]: I0219 03:27:46.444657 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-service-ca\") pod \"a34af636-294e-431e-b676-6d059a537a5b\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " Feb 19 03:27:46.445116 master-0 kubenswrapper[33867]: I0219 03:27:46.444824 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp8bb\" (UniqueName: \"kubernetes.io/projected/a34af636-294e-431e-b676-6d059a537a5b-kube-api-access-kp8bb\") pod \"a34af636-294e-431e-b676-6d059a537a5b\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " Feb 19 03:27:46.445116 master-0 kubenswrapper[33867]: I0219 03:27:46.444932 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-trusted-ca-bundle\") pod \"a34af636-294e-431e-b676-6d059a537a5b\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " Feb 19 03:27:46.445116 master-0 kubenswrapper[33867]: I0219 03:27:46.445037 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-console-config\") pod \"a34af636-294e-431e-b676-6d059a537a5b\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " Feb 19 03:27:46.445508 master-0 kubenswrapper[33867]: I0219 03:27:46.445195 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-oauth-serving-cert\") pod \"a34af636-294e-431e-b676-6d059a537a5b\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " Feb 19 03:27:46.445508 master-0 kubenswrapper[33867]: I0219 03:27:46.445304 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-serving-cert\") pod \"a34af636-294e-431e-b676-6d059a537a5b\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " Feb 19 03:27:46.445508 master-0 kubenswrapper[33867]: I0219 03:27:46.445368 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-oauth-config\") pod \"a34af636-294e-431e-b676-6d059a537a5b\" (UID: \"a34af636-294e-431e-b676-6d059a537a5b\") " Feb 19 03:27:46.445967 master-0 kubenswrapper[33867]: I0219 03:27:46.445514 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-service-ca" (OuterVolumeSpecName: "service-ca") pod "a34af636-294e-431e-b676-6d059a537a5b" (UID: "a34af636-294e-431e-b676-6d059a537a5b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:46.446073 master-0 kubenswrapper[33867]: I0219 03:27:46.446036 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-console-config" (OuterVolumeSpecName: "console-config") pod "a34af636-294e-431e-b676-6d059a537a5b" (UID: "a34af636-294e-431e-b676-6d059a537a5b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:46.446329 master-0 kubenswrapper[33867]: I0219 03:27:46.446173 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a34af636-294e-431e-b676-6d059a537a5b" (UID: "a34af636-294e-431e-b676-6d059a537a5b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:46.446676 master-0 kubenswrapper[33867]: I0219 03:27:46.446591 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a34af636-294e-431e-b676-6d059a537a5b" (UID: "a34af636-294e-431e-b676-6d059a537a5b"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:46.446812 master-0 kubenswrapper[33867]: I0219 03:27:46.446774 33867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-console-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:46.446890 master-0 kubenswrapper[33867]: I0219 03:27:46.446812 33867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:46.446890 master-0 kubenswrapper[33867]: I0219 03:27:46.446828 33867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:46.451728 master-0 kubenswrapper[33867]: I0219 03:27:46.451670 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a34af636-294e-431e-b676-6d059a537a5b" (UID: "a34af636-294e-431e-b676-6d059a537a5b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:46.452759 master-0 kubenswrapper[33867]: I0219 03:27:46.452706 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34af636-294e-431e-b676-6d059a537a5b-kube-api-access-kp8bb" (OuterVolumeSpecName: "kube-api-access-kp8bb") pod "a34af636-294e-431e-b676-6d059a537a5b" (UID: "a34af636-294e-431e-b676-6d059a537a5b"). InnerVolumeSpecName "kube-api-access-kp8bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:27:46.456961 master-0 kubenswrapper[33867]: I0219 03:27:46.456893 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a34af636-294e-431e-b676-6d059a537a5b" (UID: "a34af636-294e-431e-b676-6d059a537a5b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:46.488111 master-0 kubenswrapper[33867]: I0219 03:27:46.488053 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:46.489234 master-0 kubenswrapper[33867]: I0219 03:27:46.489154 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:46.489816 master-0 kubenswrapper[33867]: I0219 03:27:46.489765 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:46.490381 master-0 kubenswrapper[33867]: I0219 03:27:46.490330 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:46.548802 master-0 kubenswrapper[33867]: I0219 03:27:46.548651 33867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:46.548802 master-0 kubenswrapper[33867]: I0219 03:27:46.548727 33867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a34af636-294e-431e-b676-6d059a537a5b-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:46.548802 master-0 kubenswrapper[33867]: I0219 03:27:46.548750 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp8bb\" (UniqueName: \"kubernetes.io/projected/a34af636-294e-431e-b676-6d059a537a5b-kube-api-access-kp8bb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:46.548802 master-0 kubenswrapper[33867]: I0219 03:27:46.548771 33867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a34af636-294e-431e-b676-6d059a537a5b-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:46.650861 master-0 kubenswrapper[33867]: I0219 03:27:46.650818 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7adce7b-f079-455e-8377-84c40cfc2557-kube-api-access\") pod \"a7adce7b-f079-455e-8377-84c40cfc2557\" (UID: \"a7adce7b-f079-455e-8377-84c40cfc2557\") " Feb 19 03:27:46.651358 master-0 kubenswrapper[33867]: I0219 03:27:46.651334 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-kubelet-dir\") pod \"a7adce7b-f079-455e-8377-84c40cfc2557\" (UID: \"a7adce7b-f079-455e-8377-84c40cfc2557\") " Feb 19 03:27:46.651467 master-0 kubenswrapper[33867]: I0219 03:27:46.651450 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-var-lock\") pod \"a7adce7b-f079-455e-8377-84c40cfc2557\" (UID: 
\"a7adce7b-f079-455e-8377-84c40cfc2557\") " Feb 19 03:27:46.654340 master-0 kubenswrapper[33867]: I0219 03:27:46.652410 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-var-lock" (OuterVolumeSpecName: "var-lock") pod "a7adce7b-f079-455e-8377-84c40cfc2557" (UID: "a7adce7b-f079-455e-8377-84c40cfc2557"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:27:46.654340 master-0 kubenswrapper[33867]: I0219 03:27:46.652646 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a7adce7b-f079-455e-8377-84c40cfc2557" (UID: "a7adce7b-f079-455e-8377-84c40cfc2557"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:27:46.656357 master-0 kubenswrapper[33867]: I0219 03:27:46.656298 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7adce7b-f079-455e-8377-84c40cfc2557-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a7adce7b-f079-455e-8377-84c40cfc2557" (UID: "a7adce7b-f079-455e-8377-84c40cfc2557"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:27:46.758423 master-0 kubenswrapper[33867]: I0219 03:27:46.757012 33867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:46.758423 master-0 kubenswrapper[33867]: I0219 03:27:46.757051 33867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a7adce7b-f079-455e-8377-84c40cfc2557-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:46.758423 master-0 kubenswrapper[33867]: I0219 03:27:46.757061 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7adce7b-f079-455e-8377-84c40cfc2557-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:46.924322 master-0 kubenswrapper[33867]: I0219 03:27:46.924243 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 19 03:27:46.925063 master-0 kubenswrapper[33867]: I0219 03:27:46.925038 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:46.926400 master-0 kubenswrapper[33867]: I0219 03:27:46.926353 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:46.927774 master-0 kubenswrapper[33867]: I0219 03:27:46.927730 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:46.928591 master-0 kubenswrapper[33867]: I0219 03:27:46.928490 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:46.929452 master-0 kubenswrapper[33867]: I0219 03:27:46.929395 33867 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.060551 master-0 kubenswrapper[33867]: I0219 03:27:47.060075 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " Feb 19 03:27:47.060551 master-0 kubenswrapper[33867]: I0219 03:27:47.060146 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:27:47.060551 master-0 kubenswrapper[33867]: I0219 03:27:47.060225 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " Feb 19 03:27:47.060551 master-0 kubenswrapper[33867]: I0219 03:27:47.060248 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:27:47.060551 master-0 kubenswrapper[33867]: I0219 03:27:47.060363 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " Feb 19 03:27:47.060551 master-0 kubenswrapper[33867]: I0219 03:27:47.060443 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:27:47.061459 master-0 kubenswrapper[33867]: I0219 03:27:47.061395 33867 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:47.061459 master-0 kubenswrapper[33867]: I0219 03:27:47.061443 33867 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:47.061459 master-0 kubenswrapper[33867]: I0219 03:27:47.061454 33867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:47.153776 master-0 kubenswrapper[33867]: I0219 03:27:47.153678 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"a7adce7b-f079-455e-8377-84c40cfc2557","Type":"ContainerDied","Data":"aa9b7635b978d087c321dbe9c855a3ee684411dc0cb5c0bc375d13682ec26ab3"} Feb 19 03:27:47.153776 master-0 kubenswrapper[33867]: I0219 03:27:47.153740 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0" Feb 19 03:27:47.153776 master-0 kubenswrapper[33867]: I0219 03:27:47.153769 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa9b7635b978d087c321dbe9c855a3ee684411dc0cb5c0bc375d13682ec26ab3" Feb 19 03:27:47.157467 master-0 kubenswrapper[33867]: I0219 03:27:47.157407 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b9ffbb744-xzn8r_a34af636-294e-431e-b676-6d059a537a5b/console/0.log" Feb 19 03:27:47.157671 master-0 kubenswrapper[33867]: I0219 03:27:47.157612 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b9ffbb744-xzn8r" event={"ID":"a34af636-294e-431e-b676-6d059a537a5b","Type":"ContainerDied","Data":"99cbd10267dd864d9c718d3c2ab7213cd2b03aa0ddcfe5b7a47cc10995b035b7"} Feb 19 03:27:47.157735 master-0 kubenswrapper[33867]: I0219 03:27:47.157704 33867 scope.go:117] "RemoveContainer" containerID="1507f3301a489c41c7f28d7a3a64ce252dad3d07f1f5f8d438e4f999db94eda9" Feb 19 03:27:47.158019 master-0 kubenswrapper[33867]: I0219 03:27:47.157980 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b9ffbb744-xzn8r" Feb 19 03:27:47.160222 master-0 kubenswrapper[33867]: I0219 03:27:47.159810 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.161050 master-0 kubenswrapper[33867]: I0219 03:27:47.160981 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.161858 master-0 kubenswrapper[33867]: I0219 03:27:47.161789 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.164565 master-0 kubenswrapper[33867]: I0219 03:27:47.164491 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.165203 master-0 kubenswrapper[33867]: I0219 03:27:47.165104 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.165424 master-0 kubenswrapper[33867]: I0219 03:27:47.165368 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 19 03:27:47.166136 master-0 kubenswrapper[33867]: I0219 03:27:47.166046 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.166454 master-0 kubenswrapper[33867]: I0219 03:27:47.166389 33867 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db" exitCode=0 Feb 19 03:27:47.167335 master-0 kubenswrapper[33867]: I0219 03:27:47.167232 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:47.168036 master-0 kubenswrapper[33867]: I0219 03:27:47.167938 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.170115 master-0 kubenswrapper[33867]: E0219 03:27:47.168044 33867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:27:47.170115 master-0 kubenswrapper[33867]: I0219 03:27:47.168743 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.170115 master-0 kubenswrapper[33867]: I0219 03:27:47.169510 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.170478 master-0 kubenswrapper[33867]: I0219 03:27:47.170199 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.171022 master-0 kubenswrapper[33867]: I0219 03:27:47.170937 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.171790 master-0 kubenswrapper[33867]: I0219 03:27:47.171710 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.172705 master-0 kubenswrapper[33867]: I0219 03:27:47.172620 33867 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.181074 master-0 kubenswrapper[33867]: I0219 03:27:47.180997 33867 scope.go:117] "RemoveContainer" containerID="8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc" Feb 19 03:27:47.211821 master-0 
kubenswrapper[33867]: I0219 03:27:47.211773 33867 scope.go:117] "RemoveContainer" containerID="d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90" Feb 19 03:27:47.212380 master-0 kubenswrapper[33867]: I0219 03:27:47.212303 33867 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.213156 master-0 kubenswrapper[33867]: I0219 03:27:47.213078 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.214128 master-0 kubenswrapper[33867]: I0219 03:27:47.214058 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.214925 master-0 kubenswrapper[33867]: I0219 03:27:47.214884 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:47.238874 master-0 kubenswrapper[33867]: I0219 03:27:47.238585 33867 scope.go:117] "RemoveContainer" containerID="3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee" Feb 19 03:27:47.268044 master-0 kubenswrapper[33867]: I0219 03:27:47.267983 33867 scope.go:117] "RemoveContainer" containerID="a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377" Feb 19 03:27:47.294215 master-0 kubenswrapper[33867]: I0219 03:27:47.294168 33867 scope.go:117] "RemoveContainer" containerID="d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db" Feb 19 03:27:47.318207 master-0 kubenswrapper[33867]: I0219 03:27:47.318155 33867 scope.go:117] "RemoveContainer" containerID="edc67203236c02efd3daf4962a8ba633ec7c743b0e9ac65a2ab3310f74106f74" Feb 19 03:27:47.342842 master-0 kubenswrapper[33867]: I0219 03:27:47.342796 33867 scope.go:117] "RemoveContainer" containerID="8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc" Feb 19 03:27:47.343783 master-0 kubenswrapper[33867]: E0219 03:27:47.343736 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc\": container with ID starting with 8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc not found: ID does not exist" containerID="8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc" Feb 19 03:27:47.343783 master-0 kubenswrapper[33867]: I0219 03:27:47.343771 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc"} err="failed to get container status 
\"8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc\": rpc error: code = NotFound desc = could not find container \"8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc\": container with ID starting with 8703305994e5e6d83062a62db97c2fcda0d4ff159136fdde8033d84325f2adfc not found: ID does not exist" Feb 19 03:27:47.343921 master-0 kubenswrapper[33867]: I0219 03:27:47.343794 33867 scope.go:117] "RemoveContainer" containerID="d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90" Feb 19 03:27:47.344272 master-0 kubenswrapper[33867]: E0219 03:27:47.344179 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90\": container with ID starting with d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90 not found: ID does not exist" containerID="d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90" Feb 19 03:27:47.344272 master-0 kubenswrapper[33867]: I0219 03:27:47.344228 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90"} err="failed to get container status \"d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90\": rpc error: code = NotFound desc = could not find container \"d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90\": container with ID starting with d765d90eae9c40f50ece03da5e0479e768eabd8e018b5a8081c61db9a332ab90 not found: ID does not exist" Feb 19 03:27:47.344447 master-0 kubenswrapper[33867]: I0219 03:27:47.344284 33867 scope.go:117] "RemoveContainer" containerID="3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee" Feb 19 03:27:47.344681 master-0 kubenswrapper[33867]: E0219 03:27:47.344632 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee\": container with ID starting with 3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee not found: ID does not exist" containerID="3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee" Feb 19 03:27:47.344681 master-0 kubenswrapper[33867]: I0219 03:27:47.344665 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee"} err="failed to get container status \"3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee\": rpc error: code = NotFound desc = could not find container \"3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee\": container with ID starting with 3e1b1b438b2231d83740b05b4b7c4c8feb5380e408f80d3438fef2a36f14d8ee not found: ID does not exist" Feb 19 03:27:47.344681 master-0 kubenswrapper[33867]: I0219 03:27:47.344685 33867 scope.go:117] "RemoveContainer" containerID="a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377" Feb 19 03:27:47.344953 master-0 kubenswrapper[33867]: E0219 03:27:47.344922 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377\": container with ID starting with a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377 not found: ID does not exist" 
containerID="a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377" Feb 19 03:27:47.345065 master-0 kubenswrapper[33867]: I0219 03:27:47.344952 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377"} err="failed to get container status \"a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377\": rpc error: code = NotFound desc = could not find container \"a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377\": container with ID starting with a7270764b1707e61cd9e99fbe6485595f95fdb30c421771ea524ee8478e63377 not found: ID does not exist" Feb 19 03:27:47.345065 master-0 kubenswrapper[33867]: I0219 03:27:47.344973 33867 scope.go:117] "RemoveContainer" containerID="d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db" Feb 19 03:27:47.345202 master-0 kubenswrapper[33867]: E0219 03:27:47.345178 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db\": container with ID starting with d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db not found: ID does not exist" containerID="d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db" Feb 19 03:27:47.345303 master-0 kubenswrapper[33867]: I0219 03:27:47.345203 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db"} err="failed to get container status \"d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db\": rpc error: code = NotFound desc = could not find container \"d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db\": container with ID starting with d9aee46054caaef5ef291e654284136f56cf456d2cdc61900ca9b4e94b0cd8db not found: ID does not exist" Feb 19 03:27:47.345303 master-0 kubenswrapper[33867]: I0219 03:27:47.345220 33867 scope.go:117] "RemoveContainer" containerID="edc67203236c02efd3daf4962a8ba633ec7c743b0e9ac65a2ab3310f74106f74" Feb 19 03:27:47.345603 master-0 kubenswrapper[33867]: E0219 03:27:47.345567 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edc67203236c02efd3daf4962a8ba633ec7c743b0e9ac65a2ab3310f74106f74\": container with ID starting with edc67203236c02efd3daf4962a8ba633ec7c743b0e9ac65a2ab3310f74106f74 not found: ID does not exist" containerID="edc67203236c02efd3daf4962a8ba633ec7c743b0e9ac65a2ab3310f74106f74" Feb 19 03:27:47.345707 master-0 kubenswrapper[33867]: I0219 03:27:47.345603 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edc67203236c02efd3daf4962a8ba633ec7c743b0e9ac65a2ab3310f74106f74"} err="failed to get container status \"edc67203236c02efd3daf4962a8ba633ec7c743b0e9ac65a2ab3310f74106f74\": rpc error: code = NotFound desc = could not find container \"edc67203236c02efd3daf4962a8ba633ec7c743b0e9ac65a2ab3310f74106f74\": container with ID starting with edc67203236c02efd3daf4962a8ba633ec7c743b0e9ac65a2ab3310f74106f74 not found: ID does not exist" Feb 19 03:27:47.735526 master-0 kubenswrapper[33867]: E0219 03:27:47.735387 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: 
connect: connection refused" interval="3.2s" Feb 19 03:27:48.970097 master-0 kubenswrapper[33867]: I0219 03:27:48.970012 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb342c942d3d92fd08ed7cf68fafb94c" path="/var/lib/kubelet/pods/eb342c942d3d92fd08ed7cf68fafb94c/volumes" Feb 19 03:27:49.425137 master-0 kubenswrapper[33867]: I0219 03:27:49.425050 33867 patch_prober.go:28] interesting pod/console-64f8f69b7-bnncp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.114:8443/health\": dial tcp 10.128.0.114:8443: connect: connection refused" start-of-body= Feb 19 03:27:49.425382 master-0 kubenswrapper[33867]: I0219 03:27:49.425140 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-64f8f69b7-bnncp" podUID="88c5b877-feea-49a3-b528-c24d46500a36" containerName="console" probeResult="failure" output="Get \"https://10.128.0.114:8443/health\": dial tcp 10.128.0.114:8443: connect: connection refused" Feb 19 03:27:50.936825 master-0 kubenswrapper[33867]: E0219 03:27:50.936724 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 19 03:27:51.071340 master-0 kubenswrapper[33867]: I0219 03:27:51.071249 33867 patch_prober.go:28] interesting pod/console-84d59b44c5-nczqx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" start-of-body= Feb 19 03:27:51.071340 master-0 kubenswrapper[33867]: I0219 03:27:51.071353 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84d59b44c5-nczqx" podUID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" Feb 19 03:27:51.336915 master-0 kubenswrapper[33867]: E0219 03:27:51.336641 33867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.18958817fc19b917 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:62f9e181fcd823e864851fdb74fd8d37,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-19 03:27:45.035327767 +0000 UTC m=+270.331998418,LastTimestamp:2026-02-19 03:27:45.035327767 +0000 UTC m=+270.331998418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 19 03:27:54.103388 master-0 kubenswrapper[33867]: I0219 03:27:54.103301 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-586d7bfb96-dg45z" 
podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" containerName="console" containerID="cri-o://87b6062a0c7f765f7173431f0d930f2e9ea39c02af2a56f8c2be9c07403ac211" gracePeriod=15 Feb 19 03:27:54.233304 master-0 kubenswrapper[33867]: I0219 03:27:54.233248 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-586d7bfb96-dg45z_224edf60-62d9-4e76-b1d7-6e6b92e8ad00/console/0.log" Feb 19 03:27:54.233430 master-0 kubenswrapper[33867]: I0219 03:27:54.233333 33867 generic.go:334] "Generic (PLEG): container finished" podID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" containerID="87b6062a0c7f765f7173431f0d930f2e9ea39c02af2a56f8c2be9c07403ac211" exitCode=2 Feb 19 03:27:54.233430 master-0 kubenswrapper[33867]: I0219 03:27:54.233375 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-586d7bfb96-dg45z" event={"ID":"224edf60-62d9-4e76-b1d7-6e6b92e8ad00","Type":"ContainerDied","Data":"87b6062a0c7f765f7173431f0d930f2e9ea39c02af2a56f8c2be9c07403ac211"} Feb 19 03:27:54.660083 master-0 kubenswrapper[33867]: I0219 03:27:54.660041 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-586d7bfb96-dg45z_224edf60-62d9-4e76-b1d7-6e6b92e8ad00/console/0.log" Feb 19 03:27:54.660305 master-0 kubenswrapper[33867]: I0219 03:27:54.660123 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:54.661197 master-0 kubenswrapper[33867]: I0219 03:27:54.661108 33867 status_manager.go:851] "Failed to get status for pod" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" pod="openshift-console/console-586d7bfb96-dg45z" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-586d7bfb96-dg45z\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:54.662144 master-0 kubenswrapper[33867]: I0219 03:27:54.662051 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:54.663071 master-0 kubenswrapper[33867]: I0219 03:27:54.662793 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:54.663444 master-0 kubenswrapper[33867]: I0219 03:27:54.663403 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:54.751041 master-0 kubenswrapper[33867]: I0219 03:27:54.750977 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-oauth-serving-cert\") pod \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " Feb 19 03:27:54.751041 master-0 kubenswrapper[33867]: I0219 03:27:54.751041 33867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxvpc\" (UniqueName: \"kubernetes.io/projected/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-kube-api-access-nxvpc\") pod \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " Feb 19 03:27:54.751420 master-0 kubenswrapper[33867]: I0219 03:27:54.751110 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-oauth-config\") pod \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " Feb 19 03:27:54.751420 master-0 kubenswrapper[33867]: I0219 03:27:54.751145 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-serving-cert\") pod \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " Feb 19 03:27:54.751420 master-0 kubenswrapper[33867]: I0219 03:27:54.751226 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-trusted-ca-bundle\") pod \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " Feb 19 03:27:54.751420 master-0 kubenswrapper[33867]: I0219 03:27:54.751294 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-config\") pod \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " Feb 19 03:27:54.751657 master-0 kubenswrapper[33867]: I0219 03:27:54.751562 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "224edf60-62d9-4e76-b1d7-6e6b92e8ad00" (UID: "224edf60-62d9-4e76-b1d7-6e6b92e8ad00"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:54.751657 master-0 kubenswrapper[33867]: I0219 03:27:54.751610 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-service-ca\") pod \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\" (UID: \"224edf60-62d9-4e76-b1d7-6e6b92e8ad00\") " Feb 19 03:27:54.752658 master-0 kubenswrapper[33867]: I0219 03:27:54.751913 33867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:54.752658 master-0 kubenswrapper[33867]: I0219 03:27:54.752411 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-service-ca" (OuterVolumeSpecName: "service-ca") pod "224edf60-62d9-4e76-b1d7-6e6b92e8ad00" (UID: "224edf60-62d9-4e76-b1d7-6e6b92e8ad00"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:54.752658 master-0 kubenswrapper[33867]: I0219 03:27:54.752468 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "224edf60-62d9-4e76-b1d7-6e6b92e8ad00" (UID: "224edf60-62d9-4e76-b1d7-6e6b92e8ad00"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:54.752658 master-0 kubenswrapper[33867]: I0219 03:27:54.752603 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-config" (OuterVolumeSpecName: "console-config") pod "224edf60-62d9-4e76-b1d7-6e6b92e8ad00" (UID: "224edf60-62d9-4e76-b1d7-6e6b92e8ad00"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:27:54.754244 master-0 kubenswrapper[33867]: I0219 03:27:54.754178 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "224edf60-62d9-4e76-b1d7-6e6b92e8ad00" (UID: "224edf60-62d9-4e76-b1d7-6e6b92e8ad00"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:54.754625 master-0 kubenswrapper[33867]: I0219 03:27:54.754589 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "224edf60-62d9-4e76-b1d7-6e6b92e8ad00" (UID: "224edf60-62d9-4e76-b1d7-6e6b92e8ad00"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:27:54.754803 master-0 kubenswrapper[33867]: I0219 03:27:54.754709 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-kube-api-access-nxvpc" (OuterVolumeSpecName: "kube-api-access-nxvpc") pod "224edf60-62d9-4e76-b1d7-6e6b92e8ad00" (UID: "224edf60-62d9-4e76-b1d7-6e6b92e8ad00"). InnerVolumeSpecName "kube-api-access-nxvpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:27:54.853293 master-0 kubenswrapper[33867]: I0219 03:27:54.853113 33867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:54.853293 master-0 kubenswrapper[33867]: I0219 03:27:54.853184 33867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:54.853293 master-0 kubenswrapper[33867]: I0219 03:27:54.853198 33867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:54.853293 master-0 kubenswrapper[33867]: I0219 03:27:54.853208 33867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:54.853293 master-0 kubenswrapper[33867]: I0219 03:27:54.853220 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxvpc\" (UniqueName: \"kubernetes.io/projected/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-kube-api-access-nxvpc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:54.853293 master-0 kubenswrapper[33867]: I0219 03:27:54.853231 33867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/224edf60-62d9-4e76-b1d7-6e6b92e8ad00-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:27:54.961244 master-0 kubenswrapper[33867]: I0219 03:27:54.961175 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:54.962122 master-0 kubenswrapper[33867]: I0219 03:27:54.962000 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:54.963214 master-0 kubenswrapper[33867]: I0219 03:27:54.963151 33867 status_manager.go:851] "Failed to get status for pod" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" pod="openshift-console/console-586d7bfb96-dg45z" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-586d7bfb96-dg45z\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:54.963885 master-0 kubenswrapper[33867]: I0219 03:27:54.963825 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:55.243340 master-0 kubenswrapper[33867]: I0219 03:27:55.243268 33867 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-console_console-586d7bfb96-dg45z_224edf60-62d9-4e76-b1d7-6e6b92e8ad00/console/0.log" Feb 19 03:27:55.243887 master-0 kubenswrapper[33867]: I0219 03:27:55.243376 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-586d7bfb96-dg45z" event={"ID":"224edf60-62d9-4e76-b1d7-6e6b92e8ad00","Type":"ContainerDied","Data":"5067c2b4ce99fee2e084e11a565d79b3b118cdecdc797d9e6a756ad9acf58d13"} Feb 19 03:27:55.243887 master-0 kubenswrapper[33867]: I0219 03:27:55.243448 33867 scope.go:117] "RemoveContainer" containerID="87b6062a0c7f765f7173431f0d930f2e9ea39c02af2a56f8c2be9c07403ac211" Feb 19 03:27:55.243887 master-0 kubenswrapper[33867]: I0219 03:27:55.243481 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-586d7bfb96-dg45z" Feb 19 03:27:55.245237 master-0 kubenswrapper[33867]: I0219 03:27:55.244808 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:55.245594 master-0 kubenswrapper[33867]: I0219 03:27:55.245525 33867 status_manager.go:851] "Failed to get status for pod" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" pod="openshift-console/console-586d7bfb96-dg45z" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-586d7bfb96-dg45z\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:55.246973 master-0 kubenswrapper[33867]: I0219 03:27:55.246044 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:55.248534 master-0 kubenswrapper[33867]: I0219 03:27:55.247151 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:55.248712 master-0 kubenswrapper[33867]: I0219 03:27:55.248621 33867 status_manager.go:851] "Failed to get status for pod" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" pod="openshift-console/console-586d7bfb96-dg45z" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-586d7bfb96-dg45z\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:55.249799 master-0 kubenswrapper[33867]: I0219 03:27:55.249506 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:55.251987 master-0 kubenswrapper[33867]: I0219 03:27:55.251404 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" 
err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:55.252944 master-0 kubenswrapper[33867]: I0219 03:27:55.252903 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:57.338813 master-0 kubenswrapper[33867]: E0219 03:27:57.338720 33867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Feb 19 03:27:58.277842 master-0 kubenswrapper[33867]: I0219 03:27:58.277755 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_54d93c932fb6b580283b25f4adc52bd3/kube-controller-manager/0.log" Feb 19 03:27:58.278153 master-0 kubenswrapper[33867]: I0219 03:27:58.277858 33867 generic.go:334] "Generic (PLEG): container finished" podID="54d93c932fb6b580283b25f4adc52bd3" containerID="b72fc1a1be5f58b5d59ac3d6f6c214e3a5a59e2746f4da0b54694b182f52c426" exitCode=1 Feb 19 03:27:58.278153 master-0 kubenswrapper[33867]: I0219 03:27:58.277923 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"54d93c932fb6b580283b25f4adc52bd3","Type":"ContainerDied","Data":"b72fc1a1be5f58b5d59ac3d6f6c214e3a5a59e2746f4da0b54694b182f52c426"} Feb 19 03:27:58.278768 master-0 kubenswrapper[33867]: I0219 03:27:58.278729 33867 scope.go:117] "RemoveContainer" containerID="b72fc1a1be5f58b5d59ac3d6f6c214e3a5a59e2746f4da0b54694b182f52c426" Feb 19 03:27:58.280003 master-0 kubenswrapper[33867]: I0219 03:27:58.279691 33867 status_manager.go:851] "Failed to get status for pod" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" pod="openshift-console/console-586d7bfb96-dg45z" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-586d7bfb96-dg45z\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:58.280453 master-0 kubenswrapper[33867]: I0219 03:27:58.280406 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:58.281250 master-0 kubenswrapper[33867]: I0219 03:27:58.281198 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:58.282151 master-0 kubenswrapper[33867]: I0219 03:27:58.282113 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:58.283183 master-0 kubenswrapper[33867]: I0219 03:27:58.283040 33867 status_manager.go:851] "Failed to get status for pod" podUID="54d93c932fb6b580283b25f4adc52bd3" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:58.955218 master-0 kubenswrapper[33867]: I0219 03:27:58.955088 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:58.956950 master-0 kubenswrapper[33867]: I0219 03:27:58.956886 33867 status_manager.go:851] "Failed to get status for pod" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" pod="openshift-console/console-586d7bfb96-dg45z" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-586d7bfb96-dg45z\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:58.957679 master-0 kubenswrapper[33867]: I0219 03:27:58.957590 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:58.958407 master-0 kubenswrapper[33867]: I0219 03:27:58.958351 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:58.959041 master-0 kubenswrapper[33867]: I0219 03:27:58.958983 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:58.959732 master-0 kubenswrapper[33867]: I0219 03:27:58.959678 33867 status_manager.go:851] "Failed to get status for pod" podUID="54d93c932fb6b580283b25f4adc52bd3" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:58.980565 master-0 kubenswrapper[33867]: I0219 03:27:58.980513 33867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" Feb 19 03:27:58.980565 master-0 kubenswrapper[33867]: I0219 03:27:58.980558 33867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" Feb 19 03:27:58.981754 master-0 kubenswrapper[33867]: E0219 03:27:58.981702 33867 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:58.982567 master-0 kubenswrapper[33867]: I0219 03:27:58.982540 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:59.009092 master-0 kubenswrapper[33867]: W0219 03:27:59.009025 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57aa038311da35c3e4d00e227853e6b4.slice/crio-165c816c0c1ca13d7ad6e75a0a63602e9dade051e9614e2edf0e73931338e7b7 WatchSource:0}: Error finding container 165c816c0c1ca13d7ad6e75a0a63602e9dade051e9614e2edf0e73931338e7b7: Status 404 returned error can't find the container with id 165c816c0c1ca13d7ad6e75a0a63602e9dade051e9614e2edf0e73931338e7b7 Feb 19 03:27:59.305029 master-0 kubenswrapper[33867]: I0219 03:27:59.304951 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_54d93c932fb6b580283b25f4adc52bd3/kube-controller-manager/0.log" Feb 19 03:27:59.305415 master-0 kubenswrapper[33867]: I0219 03:27:59.305139 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"54d93c932fb6b580283b25f4adc52bd3","Type":"ContainerStarted","Data":"09f0b5969371f66538342dedda74c378b1cb44c85d4e06d88d9e1246f1a72062"} Feb 19 03:27:59.307702 master-0 kubenswrapper[33867]: I0219 03:27:59.307613 33867 status_manager.go:851] "Failed to get status for pod" podUID="54d93c932fb6b580283b25f4adc52bd3" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.308981 master-0 kubenswrapper[33867]: I0219 03:27:59.308875 33867 status_manager.go:851] "Failed to get status for pod" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" pod="openshift-console/console-586d7bfb96-dg45z" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-586d7bfb96-dg45z\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.309712 master-0 kubenswrapper[33867]: I0219 03:27:59.309648 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"57aa038311da35c3e4d00e227853e6b4","Type":"ContainerStarted","Data":"165c816c0c1ca13d7ad6e75a0a63602e9dade051e9614e2edf0e73931338e7b7"} Feb 19 03:27:59.310091 master-0 kubenswrapper[33867]: I0219 03:27:59.310057 33867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" Feb 19 03:27:59.310091 master-0 kubenswrapper[33867]: I0219 03:27:59.310083 33867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" Feb 19 03:27:59.310904 master-0 kubenswrapper[33867]: I0219 03:27:59.310857 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.310977 master-0 kubenswrapper[33867]: E0219 03:27:59.310910 33867 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:27:59.311414 master-0 kubenswrapper[33867]: I0219 03:27:59.311378 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.311794 master-0 kubenswrapper[33867]: I0219 03:27:59.311754 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.312546 master-0 kubenswrapper[33867]: I0219 03:27:59.312489 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.313245 master-0 kubenswrapper[33867]: I0219 03:27:59.313200 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.314771 master-0 kubenswrapper[33867]: I0219 03:27:59.314729 33867 status_manager.go:851] "Failed to get status for pod" podUID="54d93c932fb6b580283b25f4adc52bd3" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.315345 master-0 kubenswrapper[33867]: I0219 03:27:59.315299 33867 status_manager.go:851] "Failed to get status for pod" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" pod="openshift-console/console-586d7bfb96-dg45z" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-586d7bfb96-dg45z\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.315877 master-0 kubenswrapper[33867]: I0219 03:27:59.315830 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.425400 master-0 kubenswrapper[33867]: I0219 03:27:59.425296 33867 
patch_prober.go:28] interesting pod/console-64f8f69b7-bnncp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.114:8443/health\": dial tcp 10.128.0.114:8443: connect: connection refused" start-of-body= Feb 19 03:27:59.425747 master-0 kubenswrapper[33867]: I0219 03:27:59.425439 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-64f8f69b7-bnncp" podUID="88c5b877-feea-49a3-b528-c24d46500a36" containerName="console" probeResult="failure" output="Get \"https://10.128.0.114:8443/health\": dial tcp 10.128.0.114:8443: connect: connection refused" Feb 19 03:27:59.459080 master-0 kubenswrapper[33867]: E0219 03:27:59.458967 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57aa038311da35c3e4d00e227853e6b4.slice/crio-conmon-610f248e30e082c9fcfbcdbcfae38eed05b9432c500894727b0ce3ace6cf4983.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57aa038311da35c3e4d00e227853e6b4.slice/crio-610f248e30e082c9fcfbcdbcfae38eed05b9432c500894727b0ce3ace6cf4983.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:27:59.809940 master-0 kubenswrapper[33867]: E0219 03:27:59.809866 33867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:27:59Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:27:59Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:27:59Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T03:27:59Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.811189 master-0 kubenswrapper[33867]: E0219 03:27:59.811156 33867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.812149 master-0 kubenswrapper[33867]: E0219 03:27:59.812116 33867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.812924 master-0 kubenswrapper[33867]: E0219 03:27:59.812888 33867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: 
connect: connection refused" Feb 19 03:27:59.813823 master-0 kubenswrapper[33867]: E0219 03:27:59.813771 33867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:27:59.813823 master-0 kubenswrapper[33867]: E0219 03:27:59.813811 33867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 03:28:00.323441 master-0 kubenswrapper[33867]: I0219 03:28:00.323153 33867 generic.go:334] "Generic (PLEG): container finished" podID="57aa038311da35c3e4d00e227853e6b4" containerID="610f248e30e082c9fcfbcdbcfae38eed05b9432c500894727b0ce3ace6cf4983" exitCode=0 Feb 19 03:28:00.323441 master-0 kubenswrapper[33867]: I0219 03:28:00.323211 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"57aa038311da35c3e4d00e227853e6b4","Type":"ContainerDied","Data":"610f248e30e082c9fcfbcdbcfae38eed05b9432c500894727b0ce3ace6cf4983"} Feb 19 03:28:00.324396 master-0 kubenswrapper[33867]: I0219 03:28:00.323549 33867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" Feb 19 03:28:00.324396 master-0 kubenswrapper[33867]: I0219 03:28:00.323567 33867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" Feb 19 03:28:00.324838 master-0 kubenswrapper[33867]: I0219 03:28:00.324780 33867 status_manager.go:851] "Failed to get status for pod" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" pod="openshift-console/console-586d7bfb96-dg45z" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-586d7bfb96-dg45z\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:28:00.324838 master-0 kubenswrapper[33867]: E0219 03:28:00.324778 33867 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:28:00.327049 master-0 kubenswrapper[33867]: I0219 03:28:00.326184 33867 status_manager.go:851] "Failed to get status for pod" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" pod="openshift-console/console-677f65b5df-p8qrj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-677f65b5df-p8qrj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:28:00.327955 master-0 kubenswrapper[33867]: I0219 03:28:00.327363 33867 status_manager.go:851] "Failed to get status for pod" podUID="a34af636-294e-431e-b676-6d059a537a5b" pod="openshift-console/console-6b9ffbb744-xzn8r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b9ffbb744-xzn8r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:28:00.328198 master-0 kubenswrapper[33867]: I0219 03:28:00.328127 33867 status_manager.go:851] "Failed to get status for pod" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial 
tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:28:00.329076 master-0 kubenswrapper[33867]: I0219 03:28:00.328988 33867 status_manager.go:851] "Failed to get status for pod" podUID="54d93c932fb6b580283b25f4adc52bd3" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 19 03:28:01.072062 master-0 kubenswrapper[33867]: I0219 03:28:01.071982 33867 patch_prober.go:28] interesting pod/console-84d59b44c5-nczqx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" start-of-body= Feb 19 03:28:01.072340 master-0 kubenswrapper[33867]: I0219 03:28:01.072070 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84d59b44c5-nczqx" podUID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" Feb 19 03:28:01.337914 master-0 kubenswrapper[33867]: I0219 03:28:01.337828 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"57aa038311da35c3e4d00e227853e6b4","Type":"ContainerStarted","Data":"2f3ee4229bbeb98b4b8edbb4d30dba2161bb9bd03534aa93ee1923ccb64f6a40"} Feb 19 03:28:01.337914 master-0 kubenswrapper[33867]: I0219 03:28:01.337894 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"57aa038311da35c3e4d00e227853e6b4","Type":"ContainerStarted","Data":"742c05404f3abcb905ee66622b97ab28a62387369b84189bf80d9c5df2b355f7"} Feb 19 03:28:01.337914 master-0 kubenswrapper[33867]: I0219 03:28:01.337905 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"57aa038311da35c3e4d00e227853e6b4","Type":"ContainerStarted","Data":"f15876678f2c51bf28e2d614c1b19a6c31d76db4943c3d41e026893d971882fd"} Feb 19 03:28:02.349421 master-0 kubenswrapper[33867]: I0219 03:28:02.349354 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"57aa038311da35c3e4d00e227853e6b4","Type":"ContainerStarted","Data":"6af3c6f0948efb2f74a49400f231dca310147ebb745dea14df4e2591940bcfb9"} Feb 19 03:28:02.350205 master-0 kubenswrapper[33867]: I0219 03:28:02.350181 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:28:02.350343 master-0 kubenswrapper[33867]: I0219 03:28:02.350326 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"57aa038311da35c3e4d00e227853e6b4","Type":"ContainerStarted","Data":"90da8af1a010c7c41010e3a0e7f068937d2956609899fc6ed4f1b4dd67a80eb9"} Feb 19 03:28:02.350449 master-0 kubenswrapper[33867]: I0219 03:28:02.349814 33867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" Feb 19 03:28:02.350540 master-0 kubenswrapper[33867]: I0219 03:28:02.350521 33867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" 
Feb 19 03:28:03.983722 master-0 kubenswrapper[33867]: I0219 03:28:03.982840 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:28:03.983722 master-0 kubenswrapper[33867]: I0219 03:28:03.982921 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:28:03.991991 master-0 kubenswrapper[33867]: I0219 03:28:03.991913 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:28:05.036452 master-0 kubenswrapper[33867]: I0219 03:28:05.036306 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:28:05.036452 master-0 kubenswrapper[33867]: I0219 03:28:05.036409 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:28:05.037639 master-0 kubenswrapper[33867]: I0219 03:28:05.037529 33867 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 19 03:28:05.037639 master-0 kubenswrapper[33867]: I0219 03:28:05.037616 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 19 03:28:07.364642 master-0 kubenswrapper[33867]: I0219 03:28:07.364583 33867 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:28:07.412486 master-0 kubenswrapper[33867]: I0219 03:28:07.412427 33867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" Feb 19 03:28:07.412486 master-0 kubenswrapper[33867]: I0219 03:28:07.412476 33867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" Feb 19 03:28:07.417027 master-0 kubenswrapper[33867]: I0219 03:28:07.416982 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:28:07.493020 master-0 kubenswrapper[33867]: I0219 03:28:07.492954 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="57aa038311da35c3e4d00e227853e6b4" podUID="d57f7f05-2465-43d2-80f5-c19815bdd5a0" Feb 19 03:28:08.419706 master-0 kubenswrapper[33867]: I0219 03:28:08.419620 33867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" Feb 19 03:28:08.419706 master-0 kubenswrapper[33867]: I0219 03:28:08.419677 33867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="169a5148-2fa6-4b0c-94c7-27f518c7115e" Feb 19 03:28:08.424559 master-0 kubenswrapper[33867]: I0219 
03:28:08.424515 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="57aa038311da35c3e4d00e227853e6b4" podUID="d57f7f05-2465-43d2-80f5-c19815bdd5a0" Feb 19 03:28:09.425698 master-0 kubenswrapper[33867]: I0219 03:28:09.425585 33867 patch_prober.go:28] interesting pod/console-64f8f69b7-bnncp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.114:8443/health\": dial tcp 10.128.0.114:8443: connect: connection refused" start-of-body= Feb 19 03:28:09.425698 master-0 kubenswrapper[33867]: I0219 03:28:09.425683 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-64f8f69b7-bnncp" podUID="88c5b877-feea-49a3-b528-c24d46500a36" containerName="console" probeResult="failure" output="Get \"https://10.128.0.114:8443/health\": dial tcp 10.128.0.114:8443: connect: connection refused" Feb 19 03:28:11.072252 master-0 kubenswrapper[33867]: I0219 03:28:11.072133 33867 patch_prober.go:28] interesting pod/console-84d59b44c5-nczqx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" start-of-body= Feb 19 03:28:11.072252 master-0 kubenswrapper[33867]: I0219 03:28:11.072232 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84d59b44c5-nczqx" podUID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" Feb 19 03:28:14.933198 master-0 kubenswrapper[33867]: I0219 03:28:14.933137 33867 kubelet.go:1505] "Image garbage collection succeeded" Feb 19 03:28:15.036832 master-0 kubenswrapper[33867]: I0219 03:28:15.036750 33867 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 19 03:28:15.036832 master-0 kubenswrapper[33867]: I0219 03:28:15.036833 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 19 03:28:16.488785 master-0 kubenswrapper[33867]: I0219 03:28:16.488698 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 03:28:16.988620 master-0 kubenswrapper[33867]: I0219 03:28:16.988531 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 03:28:17.017632 master-0 kubenswrapper[33867]: I0219 03:28:17.017517 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 03:28:17.244099 master-0 kubenswrapper[33867]: I0219 03:28:17.243994 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 03:28:17.493795 master-0 kubenswrapper[33867]: I0219 03:28:17.493725 33867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-x7jvh" Feb 19 03:28:17.552115 master-0 kubenswrapper[33867]: I0219 03:28:17.552037 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 03:28:17.638386 master-0 kubenswrapper[33867]: I0219 03:28:17.638322 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 03:28:17.878513 master-0 kubenswrapper[33867]: I0219 03:28:17.878438 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 03:28:18.305709 master-0 kubenswrapper[33867]: I0219 03:28:18.305501 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 19 03:28:18.467788 master-0 kubenswrapper[33867]: I0219 03:28:18.467694 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 03:28:18.639903 master-0 kubenswrapper[33867]: I0219 03:28:18.639822 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 03:28:18.738389 master-0 kubenswrapper[33867]: I0219 03:28:18.738310 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 03:28:18.935961 master-0 kubenswrapper[33867]: I0219 03:28:18.935759 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 03:28:19.191743 master-0 kubenswrapper[33867]: I0219 03:28:19.191547 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 03:28:19.351582 master-0 kubenswrapper[33867]: I0219 03:28:19.351500 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 19 03:28:19.386072 master-0 kubenswrapper[33867]: I0219 03:28:19.365845 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 03:28:19.414857 master-0 kubenswrapper[33867]: I0219 03:28:19.414772 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 19 03:28:19.425368 master-0 kubenswrapper[33867]: I0219 03:28:19.425281 33867 patch_prober.go:28] interesting pod/console-64f8f69b7-bnncp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.114:8443/health\": dial tcp 10.128.0.114:8443: connect: connection refused" start-of-body= Feb 19 03:28:19.425560 master-0 kubenswrapper[33867]: I0219 03:28:19.425392 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-64f8f69b7-bnncp" podUID="88c5b877-feea-49a3-b528-c24d46500a36" containerName="console" probeResult="failure" output="Get \"https://10.128.0.114:8443/health\": dial tcp 10.128.0.114:8443: connect: connection refused" Feb 19 03:28:19.473988 master-0 kubenswrapper[33867]: I0219 03:28:19.473876 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 03:28:19.569990 master-0 kubenswrapper[33867]: I0219 03:28:19.569917 33867 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 19 03:28:19.617551 master-0 kubenswrapper[33867]: I0219 03:28:19.617503 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 03:28:19.641682 master-0 kubenswrapper[33867]: I0219 03:28:19.641599 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-njtfp" Feb 19 03:28:19.710443 master-0 kubenswrapper[33867]: I0219 03:28:19.710391 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 03:28:19.774451 master-0 kubenswrapper[33867]: I0219 03:28:19.774220 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 19 03:28:19.820219 master-0 kubenswrapper[33867]: I0219 03:28:19.820147 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 03:28:19.863391 master-0 kubenswrapper[33867]: I0219 03:28:19.863336 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-7rwgg" Feb 19 03:28:19.905435 master-0 kubenswrapper[33867]: I0219 03:28:19.905379 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 03:28:19.936649 master-0 kubenswrapper[33867]: I0219 03:28:19.936577 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-rvbfx" Feb 19 03:28:19.967710 master-0 kubenswrapper[33867]: I0219 03:28:19.967616 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 19 03:28:20.123946 master-0 kubenswrapper[33867]: I0219 03:28:20.120846 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 03:28:20.158712 master-0 kubenswrapper[33867]: I0219 03:28:20.158197 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 19 03:28:20.182323 master-0 kubenswrapper[33867]: I0219 03:28:20.182237 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 19 03:28:20.303662 master-0 kubenswrapper[33867]: I0219 03:28:20.303587 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 19 03:28:20.392149 master-0 kubenswrapper[33867]: I0219 03:28:20.392052 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 03:28:20.549786 master-0 kubenswrapper[33867]: I0219 03:28:20.549677 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 03:28:20.582982 master-0 kubenswrapper[33867]: I0219 03:28:20.582872 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-kpjkc" Feb 19 03:28:20.686967 master-0 kubenswrapper[33867]: I0219 03:28:20.686830 33867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 19 03:28:20.698874 master-0 kubenswrapper[33867]: I0219 03:28:20.698819 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 19 03:28:20.717756 master-0 kubenswrapper[33867]: I0219 03:28:20.717699 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 19 03:28:20.718082 master-0 kubenswrapper[33867]: I0219 03:28:20.718051 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 03:28:20.718309 master-0 kubenswrapper[33867]: I0219 03:28:20.718264 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 19 03:28:21.011249 master-0 kubenswrapper[33867]: I0219 03:28:21.011150 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 03:28:21.016946 master-0 kubenswrapper[33867]: I0219 03:28:21.016893 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 19 03:28:21.023549 master-0 kubenswrapper[33867]: I0219 03:28:21.023517 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 03:28:21.071419 master-0 kubenswrapper[33867]: I0219 03:28:21.071328 33867 patch_prober.go:28] interesting pod/console-84d59b44c5-nczqx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" start-of-body= Feb 19 03:28:21.071419 master-0 kubenswrapper[33867]: I0219 03:28:21.071399 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84d59b44c5-nczqx" podUID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.111:8443/health\": dial tcp 10.128.0.111:8443: connect: connection refused" Feb 19 03:28:21.126019 master-0 kubenswrapper[33867]: I0219 03:28:21.125972 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 03:28:21.129677 master-0 kubenswrapper[33867]: I0219 03:28:21.129661 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 03:28:21.159105 master-0 kubenswrapper[33867]: I0219 03:28:21.159050 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 19 03:28:21.187591 master-0 kubenswrapper[33867]: I0219 03:28:21.187526 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 19 03:28:21.204824 master-0 kubenswrapper[33867]: I0219 03:28:21.204754 33867 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 03:28:21.270316 master-0 kubenswrapper[33867]: I0219 03:28:21.270139 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 03:28:21.277204 master-0 kubenswrapper[33867]: I0219 03:28:21.277148 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 19 
03:28:21.355292 master-0 kubenswrapper[33867]: I0219 03:28:21.355185 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 19 03:28:21.358128 master-0 kubenswrapper[33867]: I0219 03:28:21.358051 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 19 03:28:21.365946 master-0 kubenswrapper[33867]: I0219 03:28:21.365879 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 03:28:21.379372 master-0 kubenswrapper[33867]: I0219 03:28:21.379286 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 19 03:28:21.574292 master-0 kubenswrapper[33867]: I0219 03:28:21.572166 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 03:28:21.574887 master-0 kubenswrapper[33867]: I0219 03:28:21.574522 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 19 03:28:21.618687 master-0 kubenswrapper[33867]: I0219 03:28:21.618635 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 03:28:21.640919 master-0 kubenswrapper[33867]: I0219 03:28:21.640869 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 03:28:21.662446 master-0 kubenswrapper[33867]: I0219 03:28:21.662381 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 03:28:21.775389 master-0 kubenswrapper[33867]: I0219 03:28:21.774283 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 19 03:28:21.775389 master-0 kubenswrapper[33867]: I0219 03:28:21.774521 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 19 03:28:21.890404 master-0 kubenswrapper[33867]: I0219 03:28:21.890326 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 03:28:21.935278 master-0 kubenswrapper[33867]: I0219 03:28:21.935189 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 19 03:28:21.956237 master-0 kubenswrapper[33867]: I0219 03:28:21.956187 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 19 03:28:21.960972 master-0 kubenswrapper[33867]: I0219 03:28:21.960915 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 03:28:21.962055 master-0 kubenswrapper[33867]: I0219 03:28:21.962024 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 03:28:22.005135 master-0 kubenswrapper[33867]: I0219 03:28:22.005055 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 03:28:22.055963 master-0 kubenswrapper[33867]: I0219 03:28:22.055880 33867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 03:28:22.105950 master-0 kubenswrapper[33867]: I0219 03:28:22.105879 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-5msgd" Feb 19 03:28:22.107334 master-0 kubenswrapper[33867]: I0219 03:28:22.107251 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 03:28:22.112304 master-0 kubenswrapper[33867]: I0219 03:28:22.112217 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 19 03:28:22.112756 master-0 kubenswrapper[33867]: I0219 03:28:22.112719 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 03:28:22.219046 master-0 kubenswrapper[33867]: I0219 03:28:22.218907 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 03:28:22.290984 master-0 kubenswrapper[33867]: I0219 03:28:22.290921 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 03:28:22.373301 master-0 kubenswrapper[33867]: I0219 03:28:22.373189 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 03:28:22.385810 master-0 kubenswrapper[33867]: I0219 03:28:22.385754 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-25h6f" Feb 19 03:28:22.434775 master-0 kubenswrapper[33867]: I0219 03:28:22.434704 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 19 03:28:22.435687 master-0 kubenswrapper[33867]: I0219 03:28:22.435640 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-26rv4" Feb 19 03:28:22.436182 master-0 kubenswrapper[33867]: I0219 03:28:22.436149 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 03:28:22.441654 master-0 kubenswrapper[33867]: I0219 03:28:22.441590 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-g8fsd" Feb 19 03:28:22.587346 master-0 kubenswrapper[33867]: I0219 03:28:22.586760 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 19 03:28:22.633906 master-0 kubenswrapper[33867]: I0219 03:28:22.633860 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 03:28:22.634138 master-0 kubenswrapper[33867]: I0219 03:28:22.634123 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 03:28:22.644358 master-0 kubenswrapper[33867]: I0219 03:28:22.644284 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 19 03:28:22.770066 master-0 kubenswrapper[33867]: I0219 03:28:22.769991 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 03:28:22.780767 master-0 kubenswrapper[33867]: I0219 03:28:22.780711 33867 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 03:28:22.814379 master-0 kubenswrapper[33867]: I0219 03:28:22.814310 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-7wq8f" Feb 19 03:28:22.867760 master-0 kubenswrapper[33867]: I0219 03:28:22.867640 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-mfb9m" Feb 19 03:28:22.969822 master-0 kubenswrapper[33867]: I0219 03:28:22.969755 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 19 03:28:23.009579 master-0 kubenswrapper[33867]: I0219 03:28:23.009529 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 03:28:23.014331 master-0 kubenswrapper[33867]: I0219 03:28:23.014273 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 03:28:23.046721 master-0 kubenswrapper[33867]: I0219 03:28:23.046642 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 03:28:23.081930 master-0 kubenswrapper[33867]: I0219 03:28:23.081874 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2h6in0gl25gpf" Feb 19 03:28:23.095695 master-0 kubenswrapper[33867]: I0219 03:28:23.095644 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 03:28:23.102436 master-0 kubenswrapper[33867]: I0219 03:28:23.102356 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 03:28:23.125807 master-0 kubenswrapper[33867]: I0219 03:28:23.125759 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 03:28:23.237172 master-0 kubenswrapper[33867]: I0219 03:28:23.237083 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 03:28:23.254014 master-0 kubenswrapper[33867]: I0219 03:28:23.253938 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 03:28:23.265065 master-0 kubenswrapper[33867]: I0219 03:28:23.264995 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 19 03:28:23.276775 master-0 kubenswrapper[33867]: I0219 03:28:23.276698 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 19 03:28:23.296970 master-0 kubenswrapper[33867]: I0219 03:28:23.296888 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 03:28:23.297699 master-0 kubenswrapper[33867]: I0219 03:28:23.297645 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 03:28:23.311175 master-0 kubenswrapper[33867]: I0219 03:28:23.311114 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 03:28:23.397295 master-0 kubenswrapper[33867]: I0219 03:28:23.397084 33867 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 03:28:23.413291 master-0 kubenswrapper[33867]: I0219 03:28:23.413200 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 03:28:23.432999 master-0 kubenswrapper[33867]: I0219 03:28:23.432921 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 19 03:28:23.599920 master-0 kubenswrapper[33867]: I0219 03:28:23.599842 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 19 03:28:23.614037 master-0 kubenswrapper[33867]: I0219 03:28:23.613808 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 19 03:28:23.615236 master-0 kubenswrapper[33867]: I0219 03:28:23.614991 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-5w4jw" Feb 19 03:28:23.635540 master-0 kubenswrapper[33867]: I0219 03:28:23.635480 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 19 03:28:23.645788 master-0 kubenswrapper[33867]: I0219 03:28:23.645754 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 03:28:23.703149 master-0 kubenswrapper[33867]: I0219 03:28:23.702904 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 03:28:23.718662 master-0 kubenswrapper[33867]: I0219 03:28:23.718623 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 19 03:28:23.729726 master-0 kubenswrapper[33867]: I0219 03:28:23.729678 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 03:28:23.753751 master-0 kubenswrapper[33867]: I0219 03:28:23.753728 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 03:28:23.816999 master-0 kubenswrapper[33867]: I0219 03:28:23.816962 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 19 03:28:23.834678 master-0 kubenswrapper[33867]: I0219 03:28:23.834625 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 03:28:23.846160 master-0 kubenswrapper[33867]: I0219 03:28:23.846102 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4ccfk8e5ng1ig" Feb 19 03:28:23.847400 master-0 kubenswrapper[33867]: I0219 03:28:23.847355 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 03:28:23.992398 master-0 kubenswrapper[33867]: I0219 03:28:23.992213 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 03:28:24.028279 master-0 kubenswrapper[33867]: I0219 03:28:24.028214 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mrjgz" 
Feb 19 03:28:24.031698 master-0 kubenswrapper[33867]: I0219 03:28:24.031682 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 03:28:24.076788 master-0 kubenswrapper[33867]: I0219 03:28:24.076723 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 03:28:24.103060 master-0 kubenswrapper[33867]: I0219 03:28:24.102998 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 19 03:28:24.135681 master-0 kubenswrapper[33867]: I0219 03:28:24.135622 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 03:28:24.197375 master-0 kubenswrapper[33867]: I0219 03:28:24.197304 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 03:28:24.259902 master-0 kubenswrapper[33867]: I0219 03:28:24.259700 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 19 03:28:24.294963 master-0 kubenswrapper[33867]: I0219 03:28:24.294899 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 03:28:24.316274 master-0 kubenswrapper[33867]: I0219 03:28:24.316041 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 03:28:24.319791 master-0 kubenswrapper[33867]: I0219 03:28:24.319582 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 03:28:24.341485 master-0 kubenswrapper[33867]: I0219 03:28:24.341287 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 19 03:28:24.360713 master-0 kubenswrapper[33867]: I0219 03:28:24.360509 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 03:28:24.375424 master-0 kubenswrapper[33867]: I0219 03:28:24.375362 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 03:28:24.398559 master-0 kubenswrapper[33867]: I0219 03:28:24.398502 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 03:28:24.503223 master-0 kubenswrapper[33867]: I0219 03:28:24.503172 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 03:28:24.533893 master-0 kubenswrapper[33867]: I0219 03:28:24.533753 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 03:28:24.670783 master-0 kubenswrapper[33867]: I0219 03:28:24.670722 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-1e3s0akbul7uf" Feb 19 03:28:24.672584 master-0 kubenswrapper[33867]: I0219 03:28:24.672546 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 03:28:24.677159 master-0 kubenswrapper[33867]: I0219 03:28:24.677113 33867 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 19 03:28:24.719615 master-0 kubenswrapper[33867]: I0219 03:28:24.719529 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 03:28:24.726554 master-0 kubenswrapper[33867]: I0219 03:28:24.726480 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 03:28:24.748102 master-0 kubenswrapper[33867]: I0219 03:28:24.748029 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 19 03:28:24.753431 master-0 kubenswrapper[33867]: I0219 03:28:24.753377 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 03:28:24.807147 master-0 kubenswrapper[33867]: I0219 03:28:24.806973 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 03:28:24.821854 master-0 kubenswrapper[33867]: I0219 03:28:24.821790 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:28:24.844667 master-0 kubenswrapper[33867]: I0219 03:28:24.844602 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-cddzx" Feb 19 03:28:24.847860 master-0 kubenswrapper[33867]: I0219 03:28:24.847792 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 03:28:24.904493 master-0 kubenswrapper[33867]: I0219 03:28:24.904269 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 03:28:24.922749 master-0 kubenswrapper[33867]: I0219 03:28:24.922655 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 03:28:24.962044 master-0 kubenswrapper[33867]: I0219 03:28:24.961998 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 03:28:25.029853 master-0 kubenswrapper[33867]: I0219 03:28:25.029805 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 03:28:25.037151 master-0 kubenswrapper[33867]: I0219 03:28:25.037052 33867 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 19 03:28:25.037151 master-0 kubenswrapper[33867]: I0219 03:28:25.037133 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 19 03:28:25.037471 master-0 kubenswrapper[33867]: I0219 03:28:25.037191 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:28:25.037969 master-0 kubenswrapper[33867]: I0219 03:28:25.037937 33867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"09f0b5969371f66538342dedda74c378b1cb44c85d4e06d88d9e1246f1a72062"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 19 03:28:25.038102 master-0 kubenswrapper[33867]: I0219 03:28:25.038063 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" containerID="cri-o://09f0b5969371f66538342dedda74c378b1cb44c85d4e06d88d9e1246f1a72062" gracePeriod=30 Feb 19 03:28:25.054771 master-0 kubenswrapper[33867]: I0219 03:28:25.054693 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 19 03:28:25.259145 master-0 kubenswrapper[33867]: I0219 03:28:25.259039 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 03:28:25.269275 master-0 kubenswrapper[33867]: I0219 03:28:25.269183 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 03:28:25.273417 master-0 kubenswrapper[33867]: I0219 03:28:25.273360 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 03:28:25.342644 master-0 kubenswrapper[33867]: I0219 03:28:25.342549 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 03:28:25.350864 master-0 kubenswrapper[33867]: I0219 03:28:25.350783 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 03:28:25.358566 master-0 kubenswrapper[33867]: I0219 03:28:25.358504 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 03:28:25.400216 master-0 kubenswrapper[33867]: I0219 03:28:25.400158 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 03:28:25.431212 master-0 kubenswrapper[33867]: I0219 03:28:25.431082 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 03:28:25.454658 master-0 kubenswrapper[33867]: I0219 03:28:25.454590 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 03:28:25.488601 master-0 kubenswrapper[33867]: I0219 03:28:25.488527 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 19 03:28:25.489566 master-0 kubenswrapper[33867]: I0219 03:28:25.489536 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 19 03:28:25.532943 master-0 kubenswrapper[33867]: I0219 03:28:25.532767 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 
03:28:25.561212 master-0 kubenswrapper[33867]: I0219 03:28:25.561125 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 03:28:25.575052 master-0 kubenswrapper[33867]: I0219 03:28:25.575007 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 03:28:25.625349 master-0 kubenswrapper[33867]: I0219 03:28:25.625228 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 19 03:28:25.665138 master-0 kubenswrapper[33867]: I0219 03:28:25.665070 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 03:28:25.713609 master-0 kubenswrapper[33867]: I0219 03:28:25.713549 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 03:28:25.750774 master-0 kubenswrapper[33867]: I0219 03:28:25.750536 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 19 03:28:25.760127 master-0 kubenswrapper[33867]: I0219 03:28:25.758385 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 19 03:28:25.781096 master-0 kubenswrapper[33867]: I0219 03:28:25.781026 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 03:28:25.807134 master-0 kubenswrapper[33867]: I0219 03:28:25.806992 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 19 03:28:25.905559 master-0 kubenswrapper[33867]: I0219 03:28:25.905495 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 03:28:26.000022 master-0 kubenswrapper[33867]: I0219 03:28:25.999929 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 19 03:28:26.005865 master-0 kubenswrapper[33867]: I0219 03:28:26.005821 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-6bg2z" Feb 19 03:28:26.027906 master-0 kubenswrapper[33867]: I0219 03:28:26.027838 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 03:28:26.027906 master-0 kubenswrapper[33867]: I0219 03:28:26.027889 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 19 03:28:26.028308 master-0 kubenswrapper[33867]: I0219 03:28:26.027891 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-b5db9" Feb 19 03:28:26.072485 master-0 kubenswrapper[33867]: I0219 03:28:26.072359 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 03:28:26.109194 master-0 kubenswrapper[33867]: I0219 03:28:26.109115 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 03:28:26.115568 master-0 kubenswrapper[33867]: I0219 03:28:26.115532 33867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"node-exporter-dockercfg-jmtfb" Feb 19 03:28:26.159474 master-0 kubenswrapper[33867]: I0219 03:28:26.159421 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 03:28:26.291814 master-0 kubenswrapper[33867]: I0219 03:28:26.291764 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 03:28:26.304310 master-0 kubenswrapper[33867]: I0219 03:28:26.304245 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 03:28:26.421720 master-0 kubenswrapper[33867]: I0219 03:28:26.421622 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:28:26.422078 master-0 kubenswrapper[33867]: I0219 03:28:26.421923 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 03:28:26.449923 master-0 kubenswrapper[33867]: I0219 03:28:26.449854 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 03:28:26.570918 master-0 kubenswrapper[33867]: I0219 03:28:26.570824 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 19 03:28:26.618940 master-0 kubenswrapper[33867]: I0219 03:28:26.616448 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 19 03:28:26.657189 master-0 kubenswrapper[33867]: I0219 03:28:26.657117 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 03:28:26.709870 master-0 kubenswrapper[33867]: I0219 03:28:26.709677 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 03:28:26.746900 master-0 kubenswrapper[33867]: I0219 03:28:26.746836 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 03:28:26.757082 master-0 kubenswrapper[33867]: I0219 03:28:26.756996 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 03:28:26.762539 master-0 kubenswrapper[33867]: I0219 03:28:26.762486 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 03:28:26.826663 master-0 kubenswrapper[33867]: I0219 03:28:26.826610 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 03:28:26.857682 master-0 kubenswrapper[33867]: I0219 03:28:26.857622 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 19 03:28:26.912161 master-0 kubenswrapper[33867]: I0219 03:28:26.912080 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 19 03:28:26.925595 master-0 kubenswrapper[33867]: I0219 03:28:26.925536 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 03:28:26.997652 
master-0 kubenswrapper[33867]: I0219 03:28:26.997507 33867 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 03:28:27.014827 master-0 kubenswrapper[33867]: I0219 03:28:27.014748 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 03:28:27.039109 master-0 kubenswrapper[33867]: I0219 03:28:27.039032 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 03:28:27.049978 master-0 kubenswrapper[33867]: I0219 03:28:27.049878 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 19 03:28:27.069287 master-0 kubenswrapper[33867]: I0219 03:28:27.069189 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 19 03:28:27.097028 master-0 kubenswrapper[33867]: I0219 03:28:27.096963 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 03:28:27.157209 master-0 kubenswrapper[33867]: I0219 03:28:27.157134 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 03:28:27.157769 master-0 kubenswrapper[33867]: I0219 03:28:27.157744 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 19 03:28:27.167417 master-0 kubenswrapper[33867]: I0219 03:28:27.167356 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 03:28:27.215723 master-0 kubenswrapper[33867]: I0219 03:28:27.215655 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 19 03:28:27.274848 master-0 kubenswrapper[33867]: I0219 03:28:27.274715 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 19 03:28:27.321780 master-0 kubenswrapper[33867]: I0219 03:28:27.321686 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 03:28:27.358410 master-0 kubenswrapper[33867]: I0219 03:28:27.358364 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 03:28:27.382106 master-0 kubenswrapper[33867]: I0219 03:28:27.381996 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-g7dwh" Feb 19 03:28:27.396738 master-0 kubenswrapper[33867]: I0219 03:28:27.396658 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 19 03:28:27.421713 master-0 kubenswrapper[33867]: I0219 03:28:27.421619 33867 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 03:28:27.430161 master-0 kubenswrapper[33867]: I0219 03:28:27.430109 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 03:28:27.511291 master-0 kubenswrapper[33867]: I0219 03:28:27.511206 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 
19 03:28:27.528397 master-0 kubenswrapper[33867]: I0219 03:28:27.528178 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 03:28:27.592837 master-0 kubenswrapper[33867]: I0219 03:28:27.592773 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 03:28:27.742501 master-0 kubenswrapper[33867]: I0219 03:28:27.742427 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 03:28:27.774096 master-0 kubenswrapper[33867]: I0219 03:28:27.774022 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 03:28:27.793831 master-0 kubenswrapper[33867]: I0219 03:28:27.793661 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 19 03:28:27.880360 master-0 kubenswrapper[33867]: I0219 03:28:27.880312 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 03:28:27.892681 master-0 kubenswrapper[33867]: I0219 03:28:27.892631 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 03:28:27.992410 master-0 kubenswrapper[33867]: I0219 03:28:27.992369 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 03:28:28.099512 master-0 kubenswrapper[33867]: I0219 03:28:28.096938 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 03:28:28.117816 master-0 kubenswrapper[33867]: I0219 03:28:28.117762 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 03:28:28.185525 master-0 kubenswrapper[33867]: I0219 03:28:28.185419 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 03:28:28.263096 master-0 kubenswrapper[33867]: I0219 03:28:28.263024 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 03:28:28.353209 master-0 kubenswrapper[33867]: I0219 03:28:28.353028 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 03:28:28.441638 master-0 kubenswrapper[33867]: I0219 03:28:28.441538 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 03:28:28.505928 master-0 kubenswrapper[33867]: I0219 03:28:28.505855 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 19 03:28:28.508043 master-0 kubenswrapper[33867]: I0219 03:28:28.508007 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 03:28:28.543476 master-0 kubenswrapper[33867]: I0219 03:28:28.543390 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 03:28:28.574222 master-0 kubenswrapper[33867]: I0219 03:28:28.574152 33867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"telemetry-config" Feb 19 03:28:28.577251 master-0 kubenswrapper[33867]: I0219 03:28:28.577209 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 03:28:28.639833 master-0 kubenswrapper[33867]: I0219 03:28:28.639772 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 03:28:28.693549 master-0 kubenswrapper[33867]: I0219 03:28:28.693439 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:28:28.694996 master-0 kubenswrapper[33867]: I0219 03:28:28.694962 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 03:28:28.714537 master-0 kubenswrapper[33867]: I0219 03:28:28.714501 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 03:28:28.729387 master-0 kubenswrapper[33867]: I0219 03:28:28.729338 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:28:28.939336 master-0 kubenswrapper[33867]: I0219 03:28:28.939150 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 03:28:28.983907 master-0 kubenswrapper[33867]: I0219 03:28:28.983823 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 03:28:29.004297 master-0 kubenswrapper[33867]: I0219 03:28:29.004165 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 19 03:28:29.055774 master-0 kubenswrapper[33867]: I0219 03:28:29.055690 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 19 03:28:29.134895 master-0 kubenswrapper[33867]: I0219 03:28:29.134833 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 19 03:28:29.153781 master-0 kubenswrapper[33867]: I0219 03:28:29.153737 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 03:28:29.195453 master-0 kubenswrapper[33867]: I0219 03:28:29.195319 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 03:28:29.202917 master-0 kubenswrapper[33867]: I0219 03:28:29.202870 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 19 03:28:29.239641 master-0 kubenswrapper[33867]: I0219 03:28:29.239522 33867 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 03:28:29.247628 master-0 kubenswrapper[33867]: I0219 03:28:29.247557 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-586d7bfb96-dg45z","openshift-console/console-6b9ffbb744-xzn8r","openshift-kube-apiserver/kube-apiserver-master-0","openshift-console/console-677f65b5df-p8qrj"] Feb 19 03:28:29.247784 master-0 kubenswrapper[33867]: I0219 03:28:29.247665 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 19 03:28:29.254841 
master-0 kubenswrapper[33867]: I0219 03:28:29.254674 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 19 03:28:29.264774 master-0 kubenswrapper[33867]: I0219 03:28:29.264714 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 03:28:29.277949 master-0 kubenswrapper[33867]: I0219 03:28:29.277875 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=22.277857024 podStartE2EDuration="22.277857024s" podCreationTimestamp="2026-02-19 03:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:28:29.275146467 +0000 UTC m=+314.571817078" watchObservedRunningTime="2026-02-19 03:28:29.277857024 +0000 UTC m=+314.574527635" Feb 19 03:28:29.299213 master-0 kubenswrapper[33867]: I0219 03:28:29.299165 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 03:28:29.430598 master-0 kubenswrapper[33867]: I0219 03:28:29.430477 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 19 03:28:29.431095 master-0 kubenswrapper[33867]: I0219 03:28:29.431045 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:28:29.436405 master-0 kubenswrapper[33867]: I0219 03:28:29.436349 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:28:29.497874 master-0 kubenswrapper[33867]: I0219 03:28:29.497713 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 19 03:28:29.552926 master-0 kubenswrapper[33867]: I0219 03:28:29.552859 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 19 03:28:29.569566 master-0 kubenswrapper[33867]: I0219 03:28:29.569494 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 03:28:29.638863 master-0 kubenswrapper[33867]: I0219 03:28:29.638807 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 03:28:29.679802 master-0 kubenswrapper[33867]: I0219 03:28:29.679737 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 03:28:29.681290 master-0 kubenswrapper[33867]: I0219 03:28:29.681208 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 03:28:29.718830 master-0 kubenswrapper[33867]: I0219 03:28:29.718762 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 19 03:28:29.856145 master-0 kubenswrapper[33867]: I0219 03:28:29.855977 33867 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 03:28:29.903301 master-0 kubenswrapper[33867]: I0219 03:28:29.895036 33867 reflector.go:368] Caches populated for *v1.Secret from 
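
The `SyncLoop (probe)` entries around this point (prometheus-k8s-0 and the console pods) trace the kubelet's probe state machine: a startup probe first reports status="unhealthy", the readiness result stays empty (status="") while the startup probe has not yet succeeded, then startup flips to "started" and readiness to "ready". In pod-spec terms that corresponds to a container defining both probes. The snippet below is only an illustration using the k8s.io/api/core/v1 types; the endpoint, port and thresholds are assumed for the example and are not read from this cluster's manifests.

```go
// Illustrative startup + readiness probe pair; values are assumptions, not cluster config.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	healthz := corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path:   "/healthz",                // assumed endpoint
			Port:   intstr.FromInt(8443),      // assumed port
			Scheme: corev1.URISchemeHTTPS,
		},
	}
	container := corev1.Container{
		Name: "example",
		// Until this probe succeeds, the kubelet logs probe="startup" status="unhealthy"
		// and holds the readiness result empty (probe="readiness" status="").
		StartupProbe: &corev1.Probe{
			ProbeHandler:     healthz,
			FailureThreshold: 30,
			PeriodSeconds:    5,
		},
		// Runs only after startup reports "started"; success shows up as status="ready".
		ReadinessProbe: &corev1.Probe{
			ProbeHandler:  healthz,
			PeriodSeconds: 10,
		},
	}
	fmt.Printf("startup failureThreshold=%d\n", container.StartupProbe.FailureThreshold)
}
```
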
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 03:28:29.962770 master-0 kubenswrapper[33867]: I0219 03:28:29.962684 33867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 19 03:28:29.963019 master-0 kubenswrapper[33867]: I0219 03:28:29.962925 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="62f9e181fcd823e864851fdb74fd8d37" containerName="startup-monitor" containerID="cri-o://a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18" gracePeriod=5 Feb 19 03:28:29.992051 master-0 kubenswrapper[33867]: I0219 03:28:29.991980 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 19 03:28:30.095789 master-0 kubenswrapper[33867]: I0219 03:28:30.095733 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-kjppx" Feb 19 03:28:30.123950 master-0 kubenswrapper[33867]: I0219 03:28:30.123831 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 19 03:28:30.136442 master-0 kubenswrapper[33867]: I0219 03:28:30.136369 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 03:28:30.141715 master-0 kubenswrapper[33867]: I0219 03:28:30.141683 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 03:28:30.175845 master-0 kubenswrapper[33867]: I0219 03:28:30.175783 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 03:28:30.236961 master-0 kubenswrapper[33867]: I0219 03:28:30.236892 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 03:28:30.287428 master-0 kubenswrapper[33867]: I0219 03:28:30.287355 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 19 03:28:30.308401 master-0 kubenswrapper[33867]: I0219 03:28:30.308318 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-qlddr" Feb 19 03:28:30.352774 master-0 kubenswrapper[33867]: I0219 03:28:30.352697 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 03:28:30.376111 master-0 kubenswrapper[33867]: I0219 03:28:30.376020 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 03:28:30.405495 master-0 kubenswrapper[33867]: I0219 03:28:30.405398 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-496bn" Feb 19 03:28:30.434931 master-0 kubenswrapper[33867]: I0219 03:28:30.434819 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 03:28:30.518547 master-0 kubenswrapper[33867]: I0219 03:28:30.518449 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 03:28:30.643936 master-0 kubenswrapper[33867]: I0219 03:28:30.643740 33867 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 03:28:30.670562 master-0 kubenswrapper[33867]: I0219 03:28:30.670460 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 19 03:28:30.682528 master-0 kubenswrapper[33867]: I0219 03:28:30.682450 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 03:28:30.733989 master-0 kubenswrapper[33867]: I0219 03:28:30.733615 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 19 03:28:30.767631 master-0 kubenswrapper[33867]: I0219 03:28:30.767546 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 19 03:28:30.787301 master-0 kubenswrapper[33867]: I0219 03:28:30.787237 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 03:28:30.883892 master-0 kubenswrapper[33867]: I0219 03:28:30.883082 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 03:28:30.926225 master-0 kubenswrapper[33867]: I0219 03:28:30.926114 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 03:28:30.963809 master-0 kubenswrapper[33867]: I0219 03:28:30.963758 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" path="/var/lib/kubelet/pods/224edf60-62d9-4e76-b1d7-6e6b92e8ad00/volumes" Feb 19 03:28:30.964505 master-0 kubenswrapper[33867]: I0219 03:28:30.964472 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a34af636-294e-431e-b676-6d059a537a5b" path="/var/lib/kubelet/pods/a34af636-294e-431e-b676-6d059a537a5b/volumes" Feb 19 03:28:30.965033 master-0 kubenswrapper[33867]: I0219 03:28:30.965006 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" path="/var/lib/kubelet/pods/e376877b-f5c6-4a73-a959-cde9c466252a/volumes" Feb 19 03:28:31.083443 master-0 kubenswrapper[33867]: I0219 03:28:31.083333 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:28:31.093173 master-0 kubenswrapper[33867]: I0219 03:28:31.093110 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:28:31.113152 master-0 kubenswrapper[33867]: I0219 03:28:31.112914 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 03:28:31.138864 master-0 kubenswrapper[33867]: I0219 03:28:31.138767 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 19 03:28:31.215621 master-0 kubenswrapper[33867]: I0219 03:28:31.215423 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 03:28:31.490145 master-0 kubenswrapper[33867]: I0219 03:28:31.489944 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 03:28:31.730371 master-0 kubenswrapper[33867]: I0219 
03:28:31.730247 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 03:28:31.861874 master-0 kubenswrapper[33867]: I0219 03:28:31.861760 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 03:28:31.984541 master-0 kubenswrapper[33867]: I0219 03:28:31.984469 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 03:28:32.111543 master-0 kubenswrapper[33867]: I0219 03:28:32.111445 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 19 03:28:32.163542 master-0 kubenswrapper[33867]: I0219 03:28:32.163465 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 03:28:32.300940 master-0 kubenswrapper[33867]: I0219 03:28:32.300838 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 03:28:32.373543 master-0 kubenswrapper[33867]: I0219 03:28:32.373475 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 19 03:28:32.533430 master-0 kubenswrapper[33867]: I0219 03:28:32.533294 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 03:28:32.635326 master-0 kubenswrapper[33867]: I0219 03:28:32.635267 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-p55fn" Feb 19 03:28:32.639620 master-0 kubenswrapper[33867]: I0219 03:28:32.639567 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 19 03:28:32.670814 master-0 kubenswrapper[33867]: I0219 03:28:32.670746 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 19 03:28:32.952960 master-0 kubenswrapper[33867]: I0219 03:28:32.952852 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 19 03:28:33.069148 master-0 kubenswrapper[33867]: I0219 03:28:33.069090 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 03:28:33.222833 master-0 kubenswrapper[33867]: I0219 03:28:33.222685 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 19 03:28:34.914208 master-0 kubenswrapper[33867]: I0219 03:28:34.914127 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 19 03:28:35.559692 master-0 kubenswrapper[33867]: I0219 03:28:35.559640 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_62f9e181fcd823e864851fdb74fd8d37/startup-monitor/0.log" Feb 19 03:28:35.560008 master-0 kubenswrapper[33867]: I0219 03:28:35.559732 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:28:35.605674 master-0 kubenswrapper[33867]: I0219 03:28:35.605633 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-manifests\") pod \"62f9e181fcd823e864851fdb74fd8d37\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " Feb 19 03:28:35.605856 master-0 kubenswrapper[33867]: I0219 03:28:35.605803 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-manifests" (OuterVolumeSpecName: "manifests") pod "62f9e181fcd823e864851fdb74fd8d37" (UID: "62f9e181fcd823e864851fdb74fd8d37"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:28:35.606052 master-0 kubenswrapper[33867]: I0219 03:28:35.606028 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-pod-resource-dir\") pod \"62f9e181fcd823e864851fdb74fd8d37\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " Feb 19 03:28:35.606192 master-0 kubenswrapper[33867]: I0219 03:28:35.606169 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-resource-dir\") pod \"62f9e181fcd823e864851fdb74fd8d37\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " Feb 19 03:28:35.606364 master-0 kubenswrapper[33867]: I0219 03:28:35.606308 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "62f9e181fcd823e864851fdb74fd8d37" (UID: "62f9e181fcd823e864851fdb74fd8d37"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:28:35.606520 master-0 kubenswrapper[33867]: I0219 03:28:35.606498 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-lock\") pod \"62f9e181fcd823e864851fdb74fd8d37\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " Feb 19 03:28:35.606665 master-0 kubenswrapper[33867]: I0219 03:28:35.606645 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-log\") pod \"62f9e181fcd823e864851fdb74fd8d37\" (UID: \"62f9e181fcd823e864851fdb74fd8d37\") " Feb 19 03:28:35.606848 master-0 kubenswrapper[33867]: I0219 03:28:35.606567 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-lock" (OuterVolumeSpecName: "var-lock") pod "62f9e181fcd823e864851fdb74fd8d37" (UID: "62f9e181fcd823e864851fdb74fd8d37"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:28:35.606914 master-0 kubenswrapper[33867]: I0219 03:28:35.606726 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-log" (OuterVolumeSpecName: "var-log") pod "62f9e181fcd823e864851fdb74fd8d37" (UID: "62f9e181fcd823e864851fdb74fd8d37"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:28:35.607290 master-0 kubenswrapper[33867]: I0219 03:28:35.607239 33867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:28:35.607391 master-0 kubenswrapper[33867]: I0219 03:28:35.607377 33867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:28:35.607484 master-0 kubenswrapper[33867]: I0219 03:28:35.607467 33867 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-var-log\") on node \"master-0\" DevicePath \"\"" Feb 19 03:28:35.607565 master-0 kubenswrapper[33867]: I0219 03:28:35.607552 33867 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-manifests\") on node \"master-0\" DevicePath \"\"" Feb 19 03:28:35.612315 master-0 kubenswrapper[33867]: I0219 03:28:35.612241 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "62f9e181fcd823e864851fdb74fd8d37" (UID: "62f9e181fcd823e864851fdb74fd8d37"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:28:35.701862 master-0 kubenswrapper[33867]: I0219 03:28:35.701777 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_62f9e181fcd823e864851fdb74fd8d37/startup-monitor/0.log" Feb 19 03:28:35.702244 master-0 kubenswrapper[33867]: I0219 03:28:35.701868 33867 generic.go:334] "Generic (PLEG): container finished" podID="62f9e181fcd823e864851fdb74fd8d37" containerID="a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18" exitCode=137 Feb 19 03:28:35.702513 master-0 kubenswrapper[33867]: I0219 03:28:35.702441 33867 scope.go:117] "RemoveContainer" containerID="a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18" Feb 19 03:28:35.702664 master-0 kubenswrapper[33867]: I0219 03:28:35.702486 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 19 03:28:35.710414 master-0 kubenswrapper[33867]: I0219 03:28:35.710348 33867 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/62f9e181fcd823e864851fdb74fd8d37-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:28:35.741767 master-0 kubenswrapper[33867]: I0219 03:28:35.741703 33867 scope.go:117] "RemoveContainer" containerID="a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18" Feb 19 03:28:35.742372 master-0 kubenswrapper[33867]: E0219 03:28:35.742319 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18\": container with ID starting with a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18 not found: ID does not exist" containerID="a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18" Feb 19 03:28:35.742494 master-0 kubenswrapper[33867]: I0219 03:28:35.742361 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18"} err="failed to get container status \"a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18\": rpc error: code = NotFound desc = could not find container \"a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18\": container with ID starting with a9a934098da0eb4a16a0a388cc3962cb21af9bcc95de4f7218b0b46359fd5d18 not found: ID does not exist" Feb 19 03:28:36.966642 master-0 kubenswrapper[33867]: I0219 03:28:36.966512 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f9e181fcd823e864851fdb74fd8d37" path="/var/lib/kubelet/pods/62f9e181fcd823e864851fdb74fd8d37/volumes" Feb 19 03:28:45.470065 master-0 kubenswrapper[33867]: I0219 03:28:45.469979 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 03:28:45.545789 master-0 kubenswrapper[33867]: I0219 03:28:45.545705 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-xq85v" Feb 19 03:28:50.051808 master-0 kubenswrapper[33867]: I0219 03:28:50.051731 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 03:28:50.595607 master-0 kubenswrapper[33867]: I0219 03:28:50.595532 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 03:28:54.300353 master-0 kubenswrapper[33867]: I0219 03:28:54.300289 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 19 03:28:55.867602 master-0 kubenswrapper[33867]: I0219 03:28:55.867475 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_54d93c932fb6b580283b25f4adc52bd3/kube-controller-manager/1.log" Feb 19 03:28:55.869605 master-0 kubenswrapper[33867]: I0219 03:28:55.869550 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_54d93c932fb6b580283b25f4adc52bd3/kube-controller-manager/0.log" Feb 19 03:28:55.869672 master-0 kubenswrapper[33867]: I0219 03:28:55.869644 33867 generic.go:334] "Generic 
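
The startup-monitor teardown above is a normal static-pod removal sequence: the kubelet kills the container with gracePeriod=5, the container is later reported finished with exitCode=137, and the follow-up "could not find container ... NotFound" from RemoveContainer only means the container had already been deleted by the time its status was re-queried. Exit code 137 is the conventional 128 + signal-number encoding for SIGKILL, which the runtime sends once the grace period expires; a trivial sketch of that arithmetic:

```go
// Why exitCode=137 above: SIGKILL after the grace period maps to 128 + signal number.
package main

import "fmt"

func main() {
	const sigkill = 9
	fmt.Println(128 + sigkill) // 137, matching "container finished ... exitCode=137"
}
```
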
(PLEG): container finished" podID="54d93c932fb6b580283b25f4adc52bd3" containerID="09f0b5969371f66538342dedda74c378b1cb44c85d4e06d88d9e1246f1a72062" exitCode=137 Feb 19 03:28:55.869775 master-0 kubenswrapper[33867]: I0219 03:28:55.869704 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"54d93c932fb6b580283b25f4adc52bd3","Type":"ContainerDied","Data":"09f0b5969371f66538342dedda74c378b1cb44c85d4e06d88d9e1246f1a72062"} Feb 19 03:28:55.869831 master-0 kubenswrapper[33867]: I0219 03:28:55.869807 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"54d93c932fb6b580283b25f4adc52bd3","Type":"ContainerStarted","Data":"2bfdd08c2f9d5dd55aca73518d58b45204430b97a64cd8f23d4d0084858c4cc5"} Feb 19 03:28:55.869906 master-0 kubenswrapper[33867]: I0219 03:28:55.869888 33867 scope.go:117] "RemoveContainer" containerID="b72fc1a1be5f58b5d59ac3d6f6c214e3a5a59e2746f4da0b54694b182f52c426" Feb 19 03:28:56.835567 master-0 kubenswrapper[33867]: I0219 03:28:56.835495 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 03:28:56.885001 master-0 kubenswrapper[33867]: I0219 03:28:56.884933 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_54d93c932fb6b580283b25f4adc52bd3/kube-controller-manager/1.log" Feb 19 03:29:05.036570 master-0 kubenswrapper[33867]: I0219 03:29:05.036485 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:29:05.036570 master-0 kubenswrapper[33867]: I0219 03:29:05.036564 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:29:05.041323 master-0 kubenswrapper[33867]: I0219 03:29:05.041242 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:29:05.961331 master-0 kubenswrapper[33867]: I0219 03:29:05.961287 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:29:06.003445 master-0 kubenswrapper[33867]: I0219 03:29:06.003356 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 03:29:08.725385 master-0 kubenswrapper[33867]: I0219 03:29:08.725323 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 03:29:12.496826 master-0 kubenswrapper[33867]: I0219 03:29:12.496747 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-tvshx"] Feb 19 03:29:12.497486 master-0 kubenswrapper[33867]: E0219 03:29:12.497138 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" containerName="console" Feb 19 03:29:12.497486 master-0 kubenswrapper[33867]: I0219 03:29:12.497154 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" containerName="console" Feb 19 03:29:12.497486 master-0 kubenswrapper[33867]: E0219 03:29:12.497205 33867 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="62f9e181fcd823e864851fdb74fd8d37" containerName="startup-monitor" Feb 19 03:29:12.497486 master-0 kubenswrapper[33867]: I0219 03:29:12.497216 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f9e181fcd823e864851fdb74fd8d37" containerName="startup-monitor" Feb 19 03:29:12.497486 master-0 kubenswrapper[33867]: E0219 03:29:12.497229 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" containerName="installer" Feb 19 03:29:12.497486 master-0 kubenswrapper[33867]: I0219 03:29:12.497238 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" containerName="installer" Feb 19 03:29:12.497486 master-0 kubenswrapper[33867]: E0219 03:29:12.497406 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34af636-294e-431e-b676-6d059a537a5b" containerName="console" Feb 19 03:29:12.497486 master-0 kubenswrapper[33867]: I0219 03:29:12.497417 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34af636-294e-431e-b676-6d059a537a5b" containerName="console" Feb 19 03:29:12.497486 master-0 kubenswrapper[33867]: E0219 03:29:12.497446 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" containerName="console" Feb 19 03:29:12.497486 master-0 kubenswrapper[33867]: I0219 03:29:12.497454 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" containerName="console" Feb 19 03:29:12.497786 master-0 kubenswrapper[33867]: I0219 03:29:12.497635 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f9e181fcd823e864851fdb74fd8d37" containerName="startup-monitor" Feb 19 03:29:12.497786 master-0 kubenswrapper[33867]: I0219 03:29:12.497650 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34af636-294e-431e-b676-6d059a537a5b" containerName="console" Feb 19 03:29:12.497786 master-0 kubenswrapper[33867]: I0219 03:29:12.497666 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="224edf60-62d9-4e76-b1d7-6e6b92e8ad00" containerName="console" Feb 19 03:29:12.497786 master-0 kubenswrapper[33867]: I0219 03:29:12.497696 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7adce7b-f079-455e-8377-84c40cfc2557" containerName="installer" Feb 19 03:29:12.497786 master-0 kubenswrapper[33867]: I0219 03:29:12.497711 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e376877b-f5c6-4a73-a959-cde9c466252a" containerName="console" Feb 19 03:29:12.498320 master-0 kubenswrapper[33867]: I0219 03:29:12.498285 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-tvshx" Feb 19 03:29:12.502168 master-0 kubenswrapper[33867]: I0219 03:29:12.501674 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 03:29:12.502168 master-0 kubenswrapper[33867]: I0219 03:29:12.501933 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 03:29:12.522092 master-0 kubenswrapper[33867]: I0219 03:29:12.521975 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-tvshx"] Feb 19 03:29:12.534079 master-0 kubenswrapper[33867]: I0219 03:29:12.531063 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-84d59b44c5-nczqx"] Feb 19 03:29:12.593498 master-0 kubenswrapper[33867]: I0219 03:29:12.593416 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69658754cd-pqnxr"] Feb 19 03:29:12.594591 master-0 kubenswrapper[33867]: I0219 03:29:12.594565 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.613939 master-0 kubenswrapper[33867]: I0219 03:29:12.613862 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69658754cd-pqnxr"] Feb 19 03:29:12.682180 master-0 kubenswrapper[33867]: I0219 03:29:12.682115 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b01172bb-2c9c-44ee-a089-2ceacc38ab9f-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-tvshx\" (UID: \"b01172bb-2c9c-44ee-a089-2ceacc38ab9f\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tvshx" Feb 19 03:29:12.682427 master-0 kubenswrapper[33867]: I0219 03:29:12.682202 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b01172bb-2c9c-44ee-a089-2ceacc38ab9f-nginx-conf\") pod \"networking-console-plugin-79f587d78f-tvshx\" (UID: \"b01172bb-2c9c-44ee-a089-2ceacc38ab9f\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tvshx" Feb 19 03:29:12.784382 master-0 kubenswrapper[33867]: I0219 03:29:12.784183 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-oauth-config\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.784382 master-0 kubenswrapper[33867]: I0219 03:29:12.784267 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86kqc\" (UniqueName: \"kubernetes.io/projected/565704da-61cc-4b91-87ab-4d4f50255540-kube-api-access-86kqc\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.784382 master-0 kubenswrapper[33867]: I0219 03:29:12.784306 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-oauth-serving-cert\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.784382 master-0 kubenswrapper[33867]: I0219 03:29:12.784333 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-service-ca\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.784382 master-0 kubenswrapper[33867]: I0219 03:29:12.784371 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-console-config\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.785047 master-0 kubenswrapper[33867]: I0219 03:29:12.784485 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-trusted-ca-bundle\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.785047 master-0 kubenswrapper[33867]: I0219 03:29:12.784538 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-serving-cert\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.785047 master-0 kubenswrapper[33867]: I0219 03:29:12.784574 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b01172bb-2c9c-44ee-a089-2ceacc38ab9f-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-tvshx\" (UID: \"b01172bb-2c9c-44ee-a089-2ceacc38ab9f\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tvshx" Feb 19 03:29:12.785047 master-0 kubenswrapper[33867]: I0219 03:29:12.784628 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b01172bb-2c9c-44ee-a089-2ceacc38ab9f-nginx-conf\") pod \"networking-console-plugin-79f587d78f-tvshx\" (UID: \"b01172bb-2c9c-44ee-a089-2ceacc38ab9f\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tvshx" Feb 19 03:29:12.786308 master-0 kubenswrapper[33867]: I0219 03:29:12.785646 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b01172bb-2c9c-44ee-a089-2ceacc38ab9f-nginx-conf\") pod \"networking-console-plugin-79f587d78f-tvshx\" (UID: \"b01172bb-2c9c-44ee-a089-2ceacc38ab9f\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tvshx" Feb 19 03:29:12.787626 master-0 kubenswrapper[33867]: I0219 03:29:12.787608 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b01172bb-2c9c-44ee-a089-2ceacc38ab9f-networking-console-plugin-cert\") pod 
\"networking-console-plugin-79f587d78f-tvshx\" (UID: \"b01172bb-2c9c-44ee-a089-2ceacc38ab9f\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tvshx" Feb 19 03:29:12.822774 master-0 kubenswrapper[33867]: I0219 03:29:12.822715 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-tvshx" Feb 19 03:29:12.885838 master-0 kubenswrapper[33867]: I0219 03:29:12.885781 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-oauth-config\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.885838 master-0 kubenswrapper[33867]: I0219 03:29:12.885835 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86kqc\" (UniqueName: \"kubernetes.io/projected/565704da-61cc-4b91-87ab-4d4f50255540-kube-api-access-86kqc\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.886163 master-0 kubenswrapper[33867]: I0219 03:29:12.886007 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-oauth-serving-cert\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.886163 master-0 kubenswrapper[33867]: I0219 03:29:12.886068 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-service-ca\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.886163 master-0 kubenswrapper[33867]: I0219 03:29:12.886128 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-console-config\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.886387 master-0 kubenswrapper[33867]: I0219 03:29:12.886247 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-trusted-ca-bundle\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.886387 master-0 kubenswrapper[33867]: I0219 03:29:12.886322 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-serving-cert\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.887054 master-0 kubenswrapper[33867]: I0219 03:29:12.886818 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-oauth-serving-cert\") pod 
\"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.887054 master-0 kubenswrapper[33867]: I0219 03:29:12.886873 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-console-config\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.887252 master-0 kubenswrapper[33867]: I0219 03:29:12.887102 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-service-ca\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.888917 master-0 kubenswrapper[33867]: I0219 03:29:12.888348 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-trusted-ca-bundle\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.889245 master-0 kubenswrapper[33867]: I0219 03:29:12.889185 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-oauth-config\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.890179 master-0 kubenswrapper[33867]: I0219 03:29:12.889823 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-serving-cert\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.910749 master-0 kubenswrapper[33867]: I0219 03:29:12.906842 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86kqc\" (UniqueName: \"kubernetes.io/projected/565704da-61cc-4b91-87ab-4d4f50255540-kube-api-access-86kqc\") pod \"console-69658754cd-pqnxr\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:12.933518 master-0 kubenswrapper[33867]: I0219 03:29:12.933449 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:13.251466 master-0 kubenswrapper[33867]: I0219 03:29:13.251396 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-tvshx"] Feb 19 03:29:13.339895 master-0 kubenswrapper[33867]: I0219 03:29:13.339827 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69658754cd-pqnxr"] Feb 19 03:29:13.341858 master-0 kubenswrapper[33867]: W0219 03:29:13.341795 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod565704da_61cc_4b91_87ab_4d4f50255540.slice/crio-522f5c4fd94734507d429295076d3ea6a64b995eb0d18d61a67eb7d301f2576a WatchSource:0}: Error finding container 522f5c4fd94734507d429295076d3ea6a64b995eb0d18d61a67eb7d301f2576a: Status 404 returned error can't find the container with id 522f5c4fd94734507d429295076d3ea6a64b995eb0d18d61a67eb7d301f2576a Feb 19 03:29:13.430788 master-0 kubenswrapper[33867]: I0219 03:29:13.430749 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 03:29:14.016870 master-0 kubenswrapper[33867]: I0219 03:29:14.016797 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69658754cd-pqnxr" event={"ID":"565704da-61cc-4b91-87ab-4d4f50255540","Type":"ContainerStarted","Data":"4ac5445e35b4ffd076492d1e6de5b8b93cdc579db6ef79a43ecd83818ad61639"} Feb 19 03:29:14.016870 master-0 kubenswrapper[33867]: I0219 03:29:14.016869 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69658754cd-pqnxr" event={"ID":"565704da-61cc-4b91-87ab-4d4f50255540","Type":"ContainerStarted","Data":"522f5c4fd94734507d429295076d3ea6a64b995eb0d18d61a67eb7d301f2576a"} Feb 19 03:29:14.020282 master-0 kubenswrapper[33867]: I0219 03:29:14.020215 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-tvshx" event={"ID":"b01172bb-2c9c-44ee-a089-2ceacc38ab9f","Type":"ContainerStarted","Data":"8827e66a097fff7a14745e9dca0d7650b0372d6ebcb30125598b8d572b2df368"} Feb 19 03:29:14.045127 master-0 kubenswrapper[33867]: I0219 03:29:14.045004 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69658754cd-pqnxr" podStartSLOduration=2.044959784 podStartE2EDuration="2.044959784s" podCreationTimestamp="2026-02-19 03:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:29:14.039408247 +0000 UTC m=+359.336078878" watchObservedRunningTime="2026-02-19 03:29:14.044959784 +0000 UTC m=+359.341630405" Feb 19 03:29:15.028953 master-0 kubenswrapper[33867]: I0219 03:29:15.028908 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-tvshx" event={"ID":"b01172bb-2c9c-44ee-a089-2ceacc38ab9f","Type":"ContainerStarted","Data":"834e60cbd4f1077d161c66c365d36ed7d7739a08537f8b90796c8d702af70303"} Feb 19 03:29:15.043883 master-0 kubenswrapper[33867]: I0219 03:29:15.043807 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-79f587d78f-tvshx" podStartSLOduration=1.8110426149999999 podStartE2EDuration="3.043785689s" podCreationTimestamp="2026-02-19 03:29:12 +0000 UTC" 
firstStartedPulling="2026-02-19 03:29:13.262141782 +0000 UTC m=+358.558812393" lastFinishedPulling="2026-02-19 03:29:14.494884856 +0000 UTC m=+359.791555467" observedRunningTime="2026-02-19 03:29:15.043169111 +0000 UTC m=+360.339839732" watchObservedRunningTime="2026-02-19 03:29:15.043785689 +0000 UTC m=+360.340456300" Feb 19 03:29:22.934667 master-0 kubenswrapper[33867]: I0219 03:29:22.934596 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:22.934667 master-0 kubenswrapper[33867]: I0219 03:29:22.934677 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:22.941554 master-0 kubenswrapper[33867]: I0219 03:29:22.941497 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:23.092833 master-0 kubenswrapper[33867]: I0219 03:29:23.092758 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:29:23.237992 master-0 kubenswrapper[33867]: I0219 03:29:23.237729 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64f8f69b7-bnncp"] Feb 19 03:29:37.569443 master-0 kubenswrapper[33867]: I0219 03:29:37.569336 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-84d59b44c5-nczqx" podUID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" containerName="console" containerID="cri-o://525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3" gracePeriod=15 Feb 19 03:29:37.669514 master-0 kubenswrapper[33867]: E0219 03:29:37.669407 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6b0e9bf_7094_43f4_9904_aa27aa9d7b9a.slice/crio-conmon-525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:29:37.669514 master-0 kubenswrapper[33867]: E0219 03:29:37.669479 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6b0e9bf_7094_43f4_9904_aa27aa9d7b9a.slice/crio-525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6b0e9bf_7094_43f4_9904_aa27aa9d7b9a.slice/crio-conmon-525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:29:38.167445 master-0 kubenswrapper[33867]: I0219 03:29:38.166710 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84d59b44c5-nczqx_f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a/console/0.log" Feb 19 03:29:38.167445 master-0 kubenswrapper[33867]: I0219 03:29:38.166816 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:29:38.215899 master-0 kubenswrapper[33867]: I0219 03:29:38.215839 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84d59b44c5-nczqx_f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a/console/0.log" Feb 19 03:29:38.216152 master-0 kubenswrapper[33867]: I0219 03:29:38.215908 33867 generic.go:334] "Generic (PLEG): container finished" podID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" containerID="525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3" exitCode=2 Feb 19 03:29:38.216152 master-0 kubenswrapper[33867]: I0219 03:29:38.215951 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84d59b44c5-nczqx" event={"ID":"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a","Type":"ContainerDied","Data":"525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3"} Feb 19 03:29:38.216152 master-0 kubenswrapper[33867]: I0219 03:29:38.215984 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84d59b44c5-nczqx" event={"ID":"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a","Type":"ContainerDied","Data":"f2194d72f0729162ac8f722d88a431e0bc7bdf989537e0d69b1698aca0af4aef"} Feb 19 03:29:38.216152 master-0 kubenswrapper[33867]: I0219 03:29:38.216004 33867 scope.go:117] "RemoveContainer" containerID="525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3" Feb 19 03:29:38.216740 master-0 kubenswrapper[33867]: I0219 03:29:38.216182 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84d59b44c5-nczqx" Feb 19 03:29:38.231150 master-0 kubenswrapper[33867]: I0219 03:29:38.231103 33867 scope.go:117] "RemoveContainer" containerID="525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3" Feb 19 03:29:38.231601 master-0 kubenswrapper[33867]: E0219 03:29:38.231543 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3\": container with ID starting with 525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3 not found: ID does not exist" containerID="525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3" Feb 19 03:29:38.231685 master-0 kubenswrapper[33867]: I0219 03:29:38.231604 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3"} err="failed to get container status \"525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3\": rpc error: code = NotFound desc = could not find container \"525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3\": container with ID starting with 525e605b6ed61a84e59304d9abe52a6dd3e96fe0a5a678bd87aa7d8e2abb48d3 not found: ID does not exist" Feb 19 03:29:38.344861 master-0 kubenswrapper[33867]: I0219 03:29:38.344747 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-oauth-serving-cert\") pod \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " Feb 19 03:29:38.345097 master-0 kubenswrapper[33867]: I0219 03:29:38.344950 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-serving-cert\") pod \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " Feb 19 03:29:38.345097 master-0 kubenswrapper[33867]: I0219 03:29:38.345020 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-oauth-config\") pod \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " Feb 19 03:29:38.345192 master-0 kubenswrapper[33867]: I0219 03:29:38.345129 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-722wv\" (UniqueName: \"kubernetes.io/projected/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-kube-api-access-722wv\") pod \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " Feb 19 03:29:38.345192 master-0 kubenswrapper[33867]: I0219 03:29:38.345175 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-config\") pod \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " Feb 19 03:29:38.345452 master-0 kubenswrapper[33867]: I0219 03:29:38.345410 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-trusted-ca-bundle\") pod \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " Feb 19 03:29:38.345452 master-0 kubenswrapper[33867]: I0219 03:29:38.345449 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-service-ca\") pod \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\" (UID: \"f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a\") " Feb 19 03:29:38.345643 master-0 kubenswrapper[33867]: I0219 03:29:38.345570 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" (UID: "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:29:38.346628 master-0 kubenswrapper[33867]: I0219 03:29:38.346400 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-config" (OuterVolumeSpecName: "console-config") pod "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" (UID: "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:29:38.346628 master-0 kubenswrapper[33867]: I0219 03:29:38.346422 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-service-ca" (OuterVolumeSpecName: "service-ca") pod "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" (UID: "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:29:38.346628 master-0 kubenswrapper[33867]: I0219 03:29:38.346578 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" (UID: "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:29:38.347405 master-0 kubenswrapper[33867]: I0219 03:29:38.347344 33867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:38.347405 master-0 kubenswrapper[33867]: I0219 03:29:38.347394 33867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:38.347405 master-0 kubenswrapper[33867]: I0219 03:29:38.347411 33867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:38.347563 master-0 kubenswrapper[33867]: I0219 03:29:38.347424 33867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:38.348878 master-0 kubenswrapper[33867]: I0219 03:29:38.348779 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" (UID: "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:29:38.352219 master-0 kubenswrapper[33867]: I0219 03:29:38.352077 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-kube-api-access-722wv" (OuterVolumeSpecName: "kube-api-access-722wv") pod "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" (UID: "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a"). InnerVolumeSpecName "kube-api-access-722wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:29:38.353064 master-0 kubenswrapper[33867]: I0219 03:29:38.352924 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" (UID: "f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:29:38.449187 master-0 kubenswrapper[33867]: I0219 03:29:38.448871 33867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:38.449187 master-0 kubenswrapper[33867]: I0219 03:29:38.448955 33867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:38.449187 master-0 kubenswrapper[33867]: I0219 03:29:38.448987 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-722wv\" (UniqueName: \"kubernetes.io/projected/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a-kube-api-access-722wv\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:38.558427 master-0 kubenswrapper[33867]: I0219 03:29:38.557446 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-84d59b44c5-nczqx"] Feb 19 03:29:38.563044 master-0 kubenswrapper[33867]: I0219 03:29:38.563006 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-84d59b44c5-nczqx"] Feb 19 03:29:38.965289 master-0 kubenswrapper[33867]: I0219 03:29:38.965211 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" path="/var/lib/kubelet/pods/f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a/volumes" Feb 19 03:29:48.283558 master-0 kubenswrapper[33867]: I0219 03:29:48.283405 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-64f8f69b7-bnncp" podUID="88c5b877-feea-49a3-b528-c24d46500a36" containerName="console" containerID="cri-o://30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023" gracePeriod=15 Feb 19 03:29:48.730812 master-0 kubenswrapper[33867]: I0219 03:29:48.729725 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64f8f69b7-bnncp_88c5b877-feea-49a3-b528-c24d46500a36/console/0.log" Feb 19 03:29:48.730812 master-0 kubenswrapper[33867]: I0219 03:29:48.729797 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:29:48.745272 master-0 kubenswrapper[33867]: I0219 03:29:48.744958 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-oauth-serving-cert\") pod \"88c5b877-feea-49a3-b528-c24d46500a36\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " Feb 19 03:29:48.745272 master-0 kubenswrapper[33867]: I0219 03:29:48.745167 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-console-config\") pod \"88c5b877-feea-49a3-b528-c24d46500a36\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " Feb 19 03:29:48.745272 master-0 kubenswrapper[33867]: I0219 03:29:48.745195 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-serving-cert\") pod \"88c5b877-feea-49a3-b528-c24d46500a36\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " Feb 19 03:29:48.745272 master-0 kubenswrapper[33867]: I0219 03:29:48.745236 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-oauth-config\") pod \"88c5b877-feea-49a3-b528-c24d46500a36\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " Feb 19 03:29:48.745700 master-0 kubenswrapper[33867]: I0219 03:29:48.745319 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-trusted-ca-bundle\") pod \"88c5b877-feea-49a3-b528-c24d46500a36\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " Feb 19 03:29:48.745700 master-0 kubenswrapper[33867]: I0219 03:29:48.745384 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rxdv\" (UniqueName: \"kubernetes.io/projected/88c5b877-feea-49a3-b528-c24d46500a36-kube-api-access-4rxdv\") pod \"88c5b877-feea-49a3-b528-c24d46500a36\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " Feb 19 03:29:48.745700 master-0 kubenswrapper[33867]: I0219 03:29:48.745545 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-service-ca\") pod \"88c5b877-feea-49a3-b528-c24d46500a36\" (UID: \"88c5b877-feea-49a3-b528-c24d46500a36\") " Feb 19 03:29:48.745896 master-0 kubenswrapper[33867]: I0219 03:29:48.745796 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-console-config" (OuterVolumeSpecName: "console-config") pod "88c5b877-feea-49a3-b528-c24d46500a36" (UID: "88c5b877-feea-49a3-b528-c24d46500a36"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:29:48.745896 master-0 kubenswrapper[33867]: I0219 03:29:48.745828 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "88c5b877-feea-49a3-b528-c24d46500a36" (UID: "88c5b877-feea-49a3-b528-c24d46500a36"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:29:48.747362 master-0 kubenswrapper[33867]: I0219 03:29:48.746419 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-service-ca" (OuterVolumeSpecName: "service-ca") pod "88c5b877-feea-49a3-b528-c24d46500a36" (UID: "88c5b877-feea-49a3-b528-c24d46500a36"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:29:48.747362 master-0 kubenswrapper[33867]: I0219 03:29:48.746568 33867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-console-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:48.747362 master-0 kubenswrapper[33867]: I0219 03:29:48.746591 33867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:48.747362 master-0 kubenswrapper[33867]: I0219 03:29:48.746603 33867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:48.747362 master-0 kubenswrapper[33867]: I0219 03:29:48.746835 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "88c5b877-feea-49a3-b528-c24d46500a36" (UID: "88c5b877-feea-49a3-b528-c24d46500a36"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:29:48.751587 master-0 kubenswrapper[33867]: I0219 03:29:48.748818 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "88c5b877-feea-49a3-b528-c24d46500a36" (UID: "88c5b877-feea-49a3-b528-c24d46500a36"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:29:48.756767 master-0 kubenswrapper[33867]: I0219 03:29:48.752595 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "88c5b877-feea-49a3-b528-c24d46500a36" (UID: "88c5b877-feea-49a3-b528-c24d46500a36"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:29:48.756767 master-0 kubenswrapper[33867]: I0219 03:29:48.753603 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c5b877-feea-49a3-b528-c24d46500a36-kube-api-access-4rxdv" (OuterVolumeSpecName: "kube-api-access-4rxdv") pod "88c5b877-feea-49a3-b528-c24d46500a36" (UID: "88c5b877-feea-49a3-b528-c24d46500a36"). InnerVolumeSpecName "kube-api-access-4rxdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:29:48.848873 master-0 kubenswrapper[33867]: I0219 03:29:48.848785 33867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:48.848873 master-0 kubenswrapper[33867]: I0219 03:29:48.848838 33867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/88c5b877-feea-49a3-b528-c24d46500a36-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:48.848873 master-0 kubenswrapper[33867]: I0219 03:29:48.848855 33867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88c5b877-feea-49a3-b528-c24d46500a36-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:48.848873 master-0 kubenswrapper[33867]: I0219 03:29:48.848871 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rxdv\" (UniqueName: \"kubernetes.io/projected/88c5b877-feea-49a3-b528-c24d46500a36-kube-api-access-4rxdv\") on node \"master-0\" DevicePath \"\"" Feb 19 03:29:49.331306 master-0 kubenswrapper[33867]: I0219 03:29:49.330777 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64f8f69b7-bnncp_88c5b877-feea-49a3-b528-c24d46500a36/console/0.log" Feb 19 03:29:49.331306 master-0 kubenswrapper[33867]: I0219 03:29:49.330865 33867 generic.go:334] "Generic (PLEG): container finished" podID="88c5b877-feea-49a3-b528-c24d46500a36" containerID="30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023" exitCode=2 Feb 19 03:29:49.331306 master-0 kubenswrapper[33867]: I0219 03:29:49.330911 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64f8f69b7-bnncp" event={"ID":"88c5b877-feea-49a3-b528-c24d46500a36","Type":"ContainerDied","Data":"30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023"} Feb 19 03:29:49.331306 master-0 kubenswrapper[33867]: I0219 03:29:49.330974 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64f8f69b7-bnncp" event={"ID":"88c5b877-feea-49a3-b528-c24d46500a36","Type":"ContainerDied","Data":"94aefd7b1ea0ac892a63a0725c225f1002534797f4efac47f9a65eb4865b86f8"} Feb 19 03:29:49.331306 master-0 kubenswrapper[33867]: I0219 03:29:49.330979 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64f8f69b7-bnncp" Feb 19 03:29:49.331306 master-0 kubenswrapper[33867]: I0219 03:29:49.331038 33867 scope.go:117] "RemoveContainer" containerID="30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023" Feb 19 03:29:49.355631 master-0 kubenswrapper[33867]: I0219 03:29:49.354326 33867 scope.go:117] "RemoveContainer" containerID="30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023" Feb 19 03:29:49.356185 master-0 kubenswrapper[33867]: E0219 03:29:49.356054 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023\": container with ID starting with 30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023 not found: ID does not exist" containerID="30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023" Feb 19 03:29:49.356185 master-0 kubenswrapper[33867]: I0219 03:29:49.356115 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023"} err="failed to get container status \"30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023\": rpc error: code = NotFound desc = could not find container \"30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023\": container with ID starting with 30069f86dc94aa90a2bcc573bc491991052a5fa1c58b61c375f81e9b5ab5b023 not found: ID does not exist" Feb 19 03:29:49.367196 master-0 kubenswrapper[33867]: I0219 03:29:49.367132 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64f8f69b7-bnncp"] Feb 19 03:29:49.372485 master-0 kubenswrapper[33867]: I0219 03:29:49.372405 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-64f8f69b7-bnncp"] Feb 19 03:29:50.965348 master-0 kubenswrapper[33867]: I0219 03:29:50.965271 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c5b877-feea-49a3-b528-c24d46500a36" path="/var/lib/kubelet/pods/88c5b877-feea-49a3-b528-c24d46500a36/volumes" Feb 19 03:30:00.175207 master-0 kubenswrapper[33867]: I0219 03:30:00.175104 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9"] Feb 19 03:30:00.175910 master-0 kubenswrapper[33867]: E0219 03:30:00.175566 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c5b877-feea-49a3-b528-c24d46500a36" containerName="console" Feb 19 03:30:00.175910 master-0 kubenswrapper[33867]: I0219 03:30:00.175588 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c5b877-feea-49a3-b528-c24d46500a36" containerName="console" Feb 19 03:30:00.175910 master-0 kubenswrapper[33867]: E0219 03:30:00.175616 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" containerName="console" Feb 19 03:30:00.175910 master-0 kubenswrapper[33867]: I0219 03:30:00.175626 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" containerName="console" Feb 19 03:30:00.175910 master-0 kubenswrapper[33867]: I0219 03:30:00.175802 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b0e9bf-7094-43f4-9904-aa27aa9d7b9a" containerName="console" Feb 19 03:30:00.175910 master-0 kubenswrapper[33867]: I0219 03:30:00.175878 33867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="88c5b877-feea-49a3-b528-c24d46500a36" containerName="console" Feb 19 03:30:00.176710 master-0 kubenswrapper[33867]: I0219 03:30:00.176633 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:00.180881 master-0 kubenswrapper[33867]: I0219 03:30:00.180826 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-7hhvr" Feb 19 03:30:00.181136 master-0 kubenswrapper[33867]: I0219 03:30:00.181111 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 03:30:00.205196 master-0 kubenswrapper[33867]: I0219 03:30:00.205100 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9"] Feb 19 03:30:00.303907 master-0 kubenswrapper[33867]: I0219 03:30:00.303841 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f94280-79b4-4b3b-8e98-ebf5876d035f-secret-volume\") pod \"collect-profiles-29524530-klfz9\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:00.304131 master-0 kubenswrapper[33867]: I0219 03:30:00.304013 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f94280-79b4-4b3b-8e98-ebf5876d035f-config-volume\") pod \"collect-profiles-29524530-klfz9\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:00.304549 master-0 kubenswrapper[33867]: I0219 03:30:00.304477 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfnnc\" (UniqueName: \"kubernetes.io/projected/a0f94280-79b4-4b3b-8e98-ebf5876d035f-kube-api-access-bfnnc\") pod \"collect-profiles-29524530-klfz9\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:00.406927 master-0 kubenswrapper[33867]: I0219 03:30:00.406824 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfnnc\" (UniqueName: \"kubernetes.io/projected/a0f94280-79b4-4b3b-8e98-ebf5876d035f-kube-api-access-bfnnc\") pod \"collect-profiles-29524530-klfz9\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:00.407447 master-0 kubenswrapper[33867]: I0219 03:30:00.407215 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f94280-79b4-4b3b-8e98-ebf5876d035f-secret-volume\") pod \"collect-profiles-29524530-klfz9\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:00.407447 master-0 kubenswrapper[33867]: I0219 03:30:00.407387 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f94280-79b4-4b3b-8e98-ebf5876d035f-config-volume\") pod \"collect-profiles-29524530-klfz9\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:00.408623 master-0 kubenswrapper[33867]: I0219 03:30:00.408570 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f94280-79b4-4b3b-8e98-ebf5876d035f-config-volume\") pod \"collect-profiles-29524530-klfz9\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:00.414944 master-0 kubenswrapper[33867]: I0219 03:30:00.414890 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f94280-79b4-4b3b-8e98-ebf5876d035f-secret-volume\") pod \"collect-profiles-29524530-klfz9\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:00.424080 master-0 kubenswrapper[33867]: I0219 03:30:00.424011 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfnnc\" (UniqueName: \"kubernetes.io/projected/a0f94280-79b4-4b3b-8e98-ebf5876d035f-kube-api-access-bfnnc\") pod \"collect-profiles-29524530-klfz9\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:00.507018 master-0 kubenswrapper[33867]: I0219 03:30:00.506839 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:00.965024 master-0 kubenswrapper[33867]: I0219 03:30:00.964966 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9"] Feb 19 03:30:00.968449 master-0 kubenswrapper[33867]: W0219 03:30:00.968409 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0f94280_79b4_4b3b_8e98_ebf5876d035f.slice/crio-618384d53e83ca1ab29af0ba36ac1c0bdd065804b352d48b48bec8b1b4415d7b WatchSource:0}: Error finding container 618384d53e83ca1ab29af0ba36ac1c0bdd065804b352d48b48bec8b1b4415d7b: Status 404 returned error can't find the container with id 618384d53e83ca1ab29af0ba36ac1c0bdd065804b352d48b48bec8b1b4415d7b Feb 19 03:30:01.445284 master-0 kubenswrapper[33867]: I0219 03:30:01.445187 33867 generic.go:334] "Generic (PLEG): container finished" podID="a0f94280-79b4-4b3b-8e98-ebf5876d035f" containerID="e4b2edc1538005a14ad9e99e5714bc168a698695048f4ebed34230f2b6786f4f" exitCode=0 Feb 19 03:30:01.445918 master-0 kubenswrapper[33867]: I0219 03:30:01.445296 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" event={"ID":"a0f94280-79b4-4b3b-8e98-ebf5876d035f","Type":"ContainerDied","Data":"e4b2edc1538005a14ad9e99e5714bc168a698695048f4ebed34230f2b6786f4f"} Feb 19 03:30:01.445918 master-0 kubenswrapper[33867]: I0219 03:30:01.445341 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" event={"ID":"a0f94280-79b4-4b3b-8e98-ebf5876d035f","Type":"ContainerStarted","Data":"618384d53e83ca1ab29af0ba36ac1c0bdd065804b352d48b48bec8b1b4415d7b"} Feb 19 03:30:02.789948 master-0 kubenswrapper[33867]: I0219 03:30:02.789878 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:02.852331 master-0 kubenswrapper[33867]: I0219 03:30:02.850174 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f94280-79b4-4b3b-8e98-ebf5876d035f-secret-volume\") pod \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " Feb 19 03:30:02.852331 master-0 kubenswrapper[33867]: I0219 03:30:02.850543 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfnnc\" (UniqueName: \"kubernetes.io/projected/a0f94280-79b4-4b3b-8e98-ebf5876d035f-kube-api-access-bfnnc\") pod \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " Feb 19 03:30:02.852331 master-0 kubenswrapper[33867]: I0219 03:30:02.850712 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f94280-79b4-4b3b-8e98-ebf5876d035f-config-volume\") pod \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\" (UID: \"a0f94280-79b4-4b3b-8e98-ebf5876d035f\") " Feb 19 03:30:02.852331 master-0 kubenswrapper[33867]: I0219 03:30:02.851633 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f94280-79b4-4b3b-8e98-ebf5876d035f-config-volume" (OuterVolumeSpecName: "config-volume") pod "a0f94280-79b4-4b3b-8e98-ebf5876d035f" (UID: "a0f94280-79b4-4b3b-8e98-ebf5876d035f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:30:02.853327 master-0 kubenswrapper[33867]: I0219 03:30:02.853287 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0f94280-79b4-4b3b-8e98-ebf5876d035f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a0f94280-79b4-4b3b-8e98-ebf5876d035f" (UID: "a0f94280-79b4-4b3b-8e98-ebf5876d035f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:30:02.853664 master-0 kubenswrapper[33867]: I0219 03:30:02.853600 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0f94280-79b4-4b3b-8e98-ebf5876d035f-kube-api-access-bfnnc" (OuterVolumeSpecName: "kube-api-access-bfnnc") pod "a0f94280-79b4-4b3b-8e98-ebf5876d035f" (UID: "a0f94280-79b4-4b3b-8e98-ebf5876d035f"). InnerVolumeSpecName "kube-api-access-bfnnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:30:02.952516 master-0 kubenswrapper[33867]: I0219 03:30:02.952439 33867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f94280-79b4-4b3b-8e98-ebf5876d035f-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 19 03:30:02.952516 master-0 kubenswrapper[33867]: I0219 03:30:02.952512 33867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f94280-79b4-4b3b-8e98-ebf5876d035f-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 19 03:30:02.952516 master-0 kubenswrapper[33867]: I0219 03:30:02.952527 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfnnc\" (UniqueName: \"kubernetes.io/projected/a0f94280-79b4-4b3b-8e98-ebf5876d035f-kube-api-access-bfnnc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:30:03.463313 master-0 kubenswrapper[33867]: I0219 03:30:03.463207 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" event={"ID":"a0f94280-79b4-4b3b-8e98-ebf5876d035f","Type":"ContainerDied","Data":"618384d53e83ca1ab29af0ba36ac1c0bdd065804b352d48b48bec8b1b4415d7b"} Feb 19 03:30:03.463313 master-0 kubenswrapper[33867]: I0219 03:30:03.463319 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9" Feb 19 03:30:03.463827 master-0 kubenswrapper[33867]: I0219 03:30:03.463332 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="618384d53e83ca1ab29af0ba36ac1c0bdd065804b352d48b48bec8b1b4415d7b" Feb 19 03:30:38.947010 master-0 kubenswrapper[33867]: I0219 03:30:38.946933 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-vvmrg"] Feb 19 03:30:38.947934 master-0 kubenswrapper[33867]: E0219 03:30:38.947531 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f94280-79b4-4b3b-8e98-ebf5876d035f" containerName="collect-profiles" Feb 19 03:30:38.947934 master-0 kubenswrapper[33867]: I0219 03:30:38.947551 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f94280-79b4-4b3b-8e98-ebf5876d035f" containerName="collect-profiles" Feb 19 03:30:38.947934 master-0 kubenswrapper[33867]: I0219 03:30:38.947742 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f94280-79b4-4b3b-8e98-ebf5876d035f" containerName="collect-profiles" Feb 19 03:30:38.950917 master-0 kubenswrapper[33867]: I0219 03:30:38.948367 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:38.951619 master-0 kubenswrapper[33867]: I0219 03:30:38.951586 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Feb 19 03:30:38.951826 master-0 kubenswrapper[33867]: I0219 03:30:38.951750 33867 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Feb 19 03:30:38.951897 master-0 kubenswrapper[33867]: I0219 03:30:38.951844 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Feb 19 03:30:38.955680 master-0 kubenswrapper[33867]: I0219 03:30:38.955622 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Feb 19 03:30:38.973558 master-0 kubenswrapper[33867]: I0219 03:30:38.973490 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-vvmrg"] Feb 19 03:30:39.023196 master-0 kubenswrapper[33867]: I0219 03:30:39.023091 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/bb9123e1-da52-4f76-96e7-d5a2712ed958-os-client-config\") pod \"sushy-emulator-58f4c9b998-vvmrg\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:39.023612 master-0 kubenswrapper[33867]: I0219 03:30:39.023374 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk4f6\" (UniqueName: \"kubernetes.io/projected/bb9123e1-da52-4f76-96e7-d5a2712ed958-kube-api-access-pk4f6\") pod \"sushy-emulator-58f4c9b998-vvmrg\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:39.023691 master-0 kubenswrapper[33867]: I0219 03:30:39.023660 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/bb9123e1-da52-4f76-96e7-d5a2712ed958-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-vvmrg\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:39.126279 master-0 kubenswrapper[33867]: I0219 03:30:39.126191 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk4f6\" (UniqueName: \"kubernetes.io/projected/bb9123e1-da52-4f76-96e7-d5a2712ed958-kube-api-access-pk4f6\") pod \"sushy-emulator-58f4c9b998-vvmrg\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:39.126279 master-0 kubenswrapper[33867]: I0219 03:30:39.126282 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/bb9123e1-da52-4f76-96e7-d5a2712ed958-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-vvmrg\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:39.126624 master-0 kubenswrapper[33867]: I0219 03:30:39.126362 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/bb9123e1-da52-4f76-96e7-d5a2712ed958-os-client-config\") pod \"sushy-emulator-58f4c9b998-vvmrg\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " 
pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:39.127963 master-0 kubenswrapper[33867]: I0219 03:30:39.127790 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/bb9123e1-da52-4f76-96e7-d5a2712ed958-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-vvmrg\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:39.131822 master-0 kubenswrapper[33867]: I0219 03:30:39.131761 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/bb9123e1-da52-4f76-96e7-d5a2712ed958-os-client-config\") pod \"sushy-emulator-58f4c9b998-vvmrg\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:39.147989 master-0 kubenswrapper[33867]: I0219 03:30:39.147935 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk4f6\" (UniqueName: \"kubernetes.io/projected/bb9123e1-da52-4f76-96e7-d5a2712ed958-kube-api-access-pk4f6\") pod \"sushy-emulator-58f4c9b998-vvmrg\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:39.286921 master-0 kubenswrapper[33867]: I0219 03:30:39.286793 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:39.706294 master-0 kubenswrapper[33867]: I0219 03:30:39.706179 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-vvmrg"] Feb 19 03:30:39.708075 master-0 kubenswrapper[33867]: W0219 03:30:39.708021 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb9123e1_da52_4f76_96e7_d5a2712ed958.slice/crio-ec75c7e08b062744e342832e73a5450840f792f54f57db5874eb0e8882851b28 WatchSource:0}: Error finding container ec75c7e08b062744e342832e73a5450840f792f54f57db5874eb0e8882851b28: Status 404 returned error can't find the container with id ec75c7e08b062744e342832e73a5450840f792f54f57db5874eb0e8882851b28 Feb 19 03:30:39.711242 master-0 kubenswrapper[33867]: I0219 03:30:39.711201 33867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 03:30:39.778149 master-0 kubenswrapper[33867]: I0219 03:30:39.778092 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" event={"ID":"bb9123e1-da52-4f76-96e7-d5a2712ed958","Type":"ContainerStarted","Data":"ec75c7e08b062744e342832e73a5450840f792f54f57db5874eb0e8882851b28"} Feb 19 03:30:48.869203 master-0 kubenswrapper[33867]: I0219 03:30:48.866416 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" event={"ID":"bb9123e1-da52-4f76-96e7-d5a2712ed958","Type":"ContainerStarted","Data":"c9cc25a0d7ddccc531061c4d90bb3f93027fe259f1703cefb032858b876e74ff"} Feb 19 03:30:48.893981 master-0 kubenswrapper[33867]: I0219 03:30:48.893700 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" podStartSLOduration=1.9512642040000001 podStartE2EDuration="10.893670655s" podCreationTimestamp="2026-02-19 03:30:38 +0000 UTC" firstStartedPulling="2026-02-19 03:30:39.711106359 +0000 UTC m=+445.007776970" lastFinishedPulling="2026-02-19 
03:30:48.65351281 +0000 UTC m=+453.950183421" observedRunningTime="2026-02-19 03:30:48.887951213 +0000 UTC m=+454.184621824" watchObservedRunningTime="2026-02-19 03:30:48.893670655 +0000 UTC m=+454.190341266" Feb 19 03:30:49.287721 master-0 kubenswrapper[33867]: I0219 03:30:49.287588 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:49.287721 master-0 kubenswrapper[33867]: I0219 03:30:49.287727 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:49.292343 master-0 kubenswrapper[33867]: I0219 03:30:49.292242 33867 prober.go:107] "Probe failed" probeType="Startup" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" podUID="bb9123e1-da52-4f76-96e7-d5a2712ed958" containerName="sushy-emulator" probeResult="failure" output="Get \"http://10.128.0.118:8000/redfish/v1\": dial tcp 10.128.0.118:8000: connect: connection refused" Feb 19 03:30:59.296285 master-0 kubenswrapper[33867]: I0219 03:30:59.296206 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:30:59.299667 master-0 kubenswrapper[33867]: I0219 03:30:59.299635 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:31:18.102413 master-0 kubenswrapper[33867]: I0219 03:31:18.102241 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-7f9d8556b9-mbclm"] Feb 19 03:31:18.104551 master-0 kubenswrapper[33867]: I0219 03:31:18.104512 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" Feb 19 03:31:18.119029 master-0 kubenswrapper[33867]: I0219 03:31:18.118925 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-7f9d8556b9-mbclm"] Feb 19 03:31:18.123048 master-0 kubenswrapper[33867]: I0219 03:31:18.122999 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8bf7d020-f656-4a54-afee-ebe1f6451379-os-client-config\") pod \"nova-console-poller-7f9d8556b9-mbclm\" (UID: \"8bf7d020-f656-4a54-afee-ebe1f6451379\") " pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" Feb 19 03:31:18.123399 master-0 kubenswrapper[33867]: I0219 03:31:18.123368 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4v77\" (UniqueName: \"kubernetes.io/projected/8bf7d020-f656-4a54-afee-ebe1f6451379-kube-api-access-l4v77\") pod \"nova-console-poller-7f9d8556b9-mbclm\" (UID: \"8bf7d020-f656-4a54-afee-ebe1f6451379\") " pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" Feb 19 03:31:18.224468 master-0 kubenswrapper[33867]: I0219 03:31:18.224273 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4v77\" (UniqueName: \"kubernetes.io/projected/8bf7d020-f656-4a54-afee-ebe1f6451379-kube-api-access-l4v77\") pod \"nova-console-poller-7f9d8556b9-mbclm\" (UID: \"8bf7d020-f656-4a54-afee-ebe1f6451379\") " pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" Feb 19 03:31:18.224468 master-0 kubenswrapper[33867]: I0219 03:31:18.224401 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: 
\"kubernetes.io/secret/8bf7d020-f656-4a54-afee-ebe1f6451379-os-client-config\") pod \"nova-console-poller-7f9d8556b9-mbclm\" (UID: \"8bf7d020-f656-4a54-afee-ebe1f6451379\") " pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" Feb 19 03:31:18.231589 master-0 kubenswrapper[33867]: I0219 03:31:18.231110 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8bf7d020-f656-4a54-afee-ebe1f6451379-os-client-config\") pod \"nova-console-poller-7f9d8556b9-mbclm\" (UID: \"8bf7d020-f656-4a54-afee-ebe1f6451379\") " pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" Feb 19 03:31:18.242400 master-0 kubenswrapper[33867]: I0219 03:31:18.241548 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4v77\" (UniqueName: \"kubernetes.io/projected/8bf7d020-f656-4a54-afee-ebe1f6451379-kube-api-access-l4v77\") pod \"nova-console-poller-7f9d8556b9-mbclm\" (UID: \"8bf7d020-f656-4a54-afee-ebe1f6451379\") " pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" Feb 19 03:31:18.469769 master-0 kubenswrapper[33867]: I0219 03:31:18.469622 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" Feb 19 03:31:18.898478 master-0 kubenswrapper[33867]: I0219 03:31:18.898407 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-7f9d8556b9-mbclm"] Feb 19 03:31:18.902821 master-0 kubenswrapper[33867]: W0219 03:31:18.902788 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bf7d020_f656_4a54_afee_ebe1f6451379.slice/crio-77c0b04f24d2b70d7db0b49cacab60fae2176c47b025e51437d6cc993e7e23b8 WatchSource:0}: Error finding container 77c0b04f24d2b70d7db0b49cacab60fae2176c47b025e51437d6cc993e7e23b8: Status 404 returned error can't find the container with id 77c0b04f24d2b70d7db0b49cacab60fae2176c47b025e51437d6cc993e7e23b8 Feb 19 03:31:19.122436 master-0 kubenswrapper[33867]: I0219 03:31:19.122279 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" event={"ID":"8bf7d020-f656-4a54-afee-ebe1f6451379","Type":"ContainerStarted","Data":"77c0b04f24d2b70d7db0b49cacab60fae2176c47b025e51437d6cc993e7e23b8"} Feb 19 03:31:24.192706 master-0 kubenswrapper[33867]: I0219 03:31:24.192566 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" event={"ID":"8bf7d020-f656-4a54-afee-ebe1f6451379","Type":"ContainerStarted","Data":"e16c6b8afc12a279f93024e029eeaae4d43b160feed0f6e7a718b90004050a0e"} Feb 19 03:31:25.206435 master-0 kubenswrapper[33867]: I0219 03:31:25.206361 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" event={"ID":"8bf7d020-f656-4a54-afee-ebe1f6451379","Type":"ContainerStarted","Data":"0dde5f5df0c2171f004994b497313cd734e113020f59296ec28b2d24f86a77d0"} Feb 19 03:31:25.227039 master-0 kubenswrapper[33867]: I0219 03:31:25.226954 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-poller-7f9d8556b9-mbclm" podStartSLOduration=1.906885076 podStartE2EDuration="7.226935424s" podCreationTimestamp="2026-02-19 03:31:18 +0000 UTC" firstStartedPulling="2026-02-19 03:31:18.905181721 +0000 UTC m=+484.201852332" lastFinishedPulling="2026-02-19 03:31:24.225232069 +0000 UTC m=+489.521902680" 
observedRunningTime="2026-02-19 03:31:25.226530902 +0000 UTC m=+490.523201523" watchObservedRunningTime="2026-02-19 03:31:25.226935424 +0000 UTC m=+490.523606035" Feb 19 03:31:44.191771 master-0 kubenswrapper[33867]: I0219 03:31:44.191671 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 19 03:31:44.193581 master-0 kubenswrapper[33867]: I0219 03:31:44.193505 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:31:44.197772 master-0 kubenswrapper[33867]: I0219 03:31:44.197704 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 03:31:44.197772 master-0 kubenswrapper[33867]: I0219 03:31:44.197704 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dcb4l" Feb 19 03:31:44.201577 master-0 kubenswrapper[33867]: I0219 03:31:44.201485 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 19 03:31:44.214525 master-0 kubenswrapper[33867]: I0219 03:31:44.213530 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:31:44.214525 master-0 kubenswrapper[33867]: I0219 03:31:44.213719 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:31:44.214525 master-0 kubenswrapper[33867]: I0219 03:31:44.213749 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-var-lock\") pod \"installer-4-master-0\" (UID: \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:31:44.318663 master-0 kubenswrapper[33867]: I0219 03:31:44.316461 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:31:44.318663 master-0 kubenswrapper[33867]: I0219 03:31:44.316534 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-var-lock\") pod \"installer-4-master-0\" (UID: \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:31:44.318663 master-0 kubenswrapper[33867]: I0219 03:31:44.316673 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-var-lock\") pod \"installer-4-master-0\" (UID: 
\"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:31:44.318663 master-0 kubenswrapper[33867]: I0219 03:31:44.316727 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:31:44.318663 master-0 kubenswrapper[33867]: I0219 03:31:44.316913 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:31:44.335036 master-0 kubenswrapper[33867]: I0219 03:31:44.334961 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:31:44.520753 master-0 kubenswrapper[33867]: I0219 03:31:44.520574 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:31:44.982877 master-0 kubenswrapper[33867]: I0219 03:31:44.982820 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 19 03:31:44.986439 master-0 kubenswrapper[33867]: W0219 03:31:44.986371 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3a995b93_5ab4_469e_b86c_6b9be83fe5d5.slice/crio-1c42f4b56340d0d48f8cf2f0de401ea1eefcb625dcf79a537b81aa2de23f9674 WatchSource:0}: Error finding container 1c42f4b56340d0d48f8cf2f0de401ea1eefcb625dcf79a537b81aa2de23f9674: Status 404 returned error can't find the container with id 1c42f4b56340d0d48f8cf2f0de401ea1eefcb625dcf79a537b81aa2de23f9674 Feb 19 03:31:45.384411 master-0 kubenswrapper[33867]: I0219 03:31:45.384334 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3a995b93-5ab4-469e-b86c-6b9be83fe5d5","Type":"ContainerStarted","Data":"1c42f4b56340d0d48f8cf2f0de401ea1eefcb625dcf79a537b81aa2de23f9674"} Feb 19 03:31:46.395348 master-0 kubenswrapper[33867]: I0219 03:31:46.395268 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3a995b93-5ab4-469e-b86c-6b9be83fe5d5","Type":"ContainerStarted","Data":"5a1c84f2d4af06628053995a68399da23535a0d6b08484d67314be8b8ebc1fac"} Feb 19 03:31:46.420166 master-0 kubenswrapper[33867]: I0219 03:31:46.420075 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.420049407 podStartE2EDuration="2.420049407s" podCreationTimestamp="2026-02-19 03:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:31:46.412461472 +0000 UTC m=+511.709132083" watchObservedRunningTime="2026-02-19 03:31:46.420049407 +0000 UTC m=+511.716720028" Feb 19 03:31:50.418548 
master-0 kubenswrapper[33867]: I0219 03:31:50.418457 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-95dbc66df-td4h6"] Feb 19 03:31:50.420620 master-0 kubenswrapper[33867]: I0219 03:31:50.420564 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" Feb 19 03:31:50.430605 master-0 kubenswrapper[33867]: I0219 03:31:50.430527 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f83d6352-a8d8-4f38-898b-37646b7f5251-os-client-config\") pod \"nova-console-recorder-95dbc66df-td4h6\" (UID: \"f83d6352-a8d8-4f38-898b-37646b7f5251\") " pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" Feb 19 03:31:50.474400 master-0 kubenswrapper[33867]: I0219 03:31:50.474318 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-95dbc66df-td4h6"] Feb 19 03:31:50.533610 master-0 kubenswrapper[33867]: I0219 03:31:50.533516 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/f83d6352-a8d8-4f38-898b-37646b7f5251-nova-console-recordings-pv\") pod \"nova-console-recorder-95dbc66df-td4h6\" (UID: \"f83d6352-a8d8-4f38-898b-37646b7f5251\") " pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" Feb 19 03:31:50.533881 master-0 kubenswrapper[33867]: I0219 03:31:50.533701 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f83d6352-a8d8-4f38-898b-37646b7f5251-os-client-config\") pod \"nova-console-recorder-95dbc66df-td4h6\" (UID: \"f83d6352-a8d8-4f38-898b-37646b7f5251\") " pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" Feb 19 03:31:50.533881 master-0 kubenswrapper[33867]: I0219 03:31:50.533763 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggmzr\" (UniqueName: \"kubernetes.io/projected/f83d6352-a8d8-4f38-898b-37646b7f5251-kube-api-access-ggmzr\") pod \"nova-console-recorder-95dbc66df-td4h6\" (UID: \"f83d6352-a8d8-4f38-898b-37646b7f5251\") " pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" Feb 19 03:31:50.537912 master-0 kubenswrapper[33867]: I0219 03:31:50.537828 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f83d6352-a8d8-4f38-898b-37646b7f5251-os-client-config\") pod \"nova-console-recorder-95dbc66df-td4h6\" (UID: \"f83d6352-a8d8-4f38-898b-37646b7f5251\") " pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" Feb 19 03:31:50.635843 master-0 kubenswrapper[33867]: I0219 03:31:50.635767 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/f83d6352-a8d8-4f38-898b-37646b7f5251-nova-console-recordings-pv\") pod \"nova-console-recorder-95dbc66df-td4h6\" (UID: \"f83d6352-a8d8-4f38-898b-37646b7f5251\") " pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" Feb 19 03:31:50.636101 master-0 kubenswrapper[33867]: I0219 03:31:50.635919 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggmzr\" (UniqueName: \"kubernetes.io/projected/f83d6352-a8d8-4f38-898b-37646b7f5251-kube-api-access-ggmzr\") pod 
\"nova-console-recorder-95dbc66df-td4h6\" (UID: \"f83d6352-a8d8-4f38-898b-37646b7f5251\") " pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" Feb 19 03:31:50.654250 master-0 kubenswrapper[33867]: I0219 03:31:50.654178 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggmzr\" (UniqueName: \"kubernetes.io/projected/f83d6352-a8d8-4f38-898b-37646b7f5251-kube-api-access-ggmzr\") pod \"nova-console-recorder-95dbc66df-td4h6\" (UID: \"f83d6352-a8d8-4f38-898b-37646b7f5251\") " pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" Feb 19 03:31:51.381322 master-0 kubenswrapper[33867]: I0219 03:31:51.381222 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/f83d6352-a8d8-4f38-898b-37646b7f5251-nova-console-recordings-pv\") pod \"nova-console-recorder-95dbc66df-td4h6\" (UID: \"f83d6352-a8d8-4f38-898b-37646b7f5251\") " pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" Feb 19 03:31:51.642008 master-0 kubenswrapper[33867]: I0219 03:31:51.641913 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" Feb 19 03:31:52.190107 master-0 kubenswrapper[33867]: W0219 03:31:52.190049 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf83d6352_a8d8_4f38_898b_37646b7f5251.slice/crio-a006c6b2823f6c7921d7885655c9149485c4d774e1b8a81979994882ba2d273d WatchSource:0}: Error finding container a006c6b2823f6c7921d7885655c9149485c4d774e1b8a81979994882ba2d273d: Status 404 returned error can't find the container with id a006c6b2823f6c7921d7885655c9149485c4d774e1b8a81979994882ba2d273d Feb 19 03:31:52.191434 master-0 kubenswrapper[33867]: I0219 03:31:52.191377 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-95dbc66df-td4h6"] Feb 19 03:31:52.458505 master-0 kubenswrapper[33867]: I0219 03:31:52.458288 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" event={"ID":"f83d6352-a8d8-4f38-898b-37646b7f5251","Type":"ContainerStarted","Data":"a006c6b2823f6c7921d7885655c9149485c4d774e1b8a81979994882ba2d273d"} Feb 19 03:32:04.567047 master-0 kubenswrapper[33867]: I0219 03:32:04.566960 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" event={"ID":"f83d6352-a8d8-4f38-898b-37646b7f5251","Type":"ContainerStarted","Data":"367ab6c5db1e879d05d3796223ff86a265e15479adc6e2908b750ea4cc49955e"} Feb 19 03:32:05.577303 master-0 kubenswrapper[33867]: I0219 03:32:05.577196 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" event={"ID":"f83d6352-a8d8-4f38-898b-37646b7f5251","Type":"ContainerStarted","Data":"8601ab03451779fe1249751d0451199c3244c409de9cabff59b9e874033b7d9c"} Feb 19 03:32:05.646350 master-0 kubenswrapper[33867]: I0219 03:32:05.646208 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-95dbc66df-td4h6" podStartSLOduration=3.265471618 podStartE2EDuration="15.646176344s" podCreationTimestamp="2026-02-19 03:31:50 +0000 UTC" firstStartedPulling="2026-02-19 03:31:52.193842983 +0000 UTC m=+517.490513594" lastFinishedPulling="2026-02-19 03:32:04.574547709 +0000 UTC m=+529.871218320" observedRunningTime="2026-02-19 03:32:05.61955639 +0000 UTC 
m=+530.916227051" watchObservedRunningTime="2026-02-19 03:32:05.646176344 +0000 UTC m=+530.942846955" Feb 19 03:32:18.296349 master-0 kubenswrapper[33867]: I0219 03:32:18.296218 33867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:32:18.297670 master-0 kubenswrapper[33867]: I0219 03:32:18.296621 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="cluster-policy-controller" containerID="cri-o://15f86a7f9af00cfb660aed0d9de16b4b7b16e42980616991e94ef7198de70052" gracePeriod=30 Feb 19 03:32:18.297670 master-0 kubenswrapper[33867]: I0219 03:32:18.296773 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://41e516f80fdbca0ad0fec8609d99373a6c87f6ed69e42cdc14dde997afd65da8" gracePeriod=30 Feb 19 03:32:18.297670 master-0 kubenswrapper[33867]: I0219 03:32:18.296845 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://2a6378171c9d7e861384ea33ac96d97796e6dcd51640a45f1e13d7b30275860c" gracePeriod=30 Feb 19 03:32:18.297670 master-0 kubenswrapper[33867]: I0219 03:32:18.297161 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" containerID="cri-o://2bfdd08c2f9d5dd55aca73518d58b45204430b97a64cd8f23d4d0084858c4cc5" gracePeriod=30 Feb 19 03:32:18.298920 master-0 kubenswrapper[33867]: I0219 03:32:18.298807 33867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:32:18.299689 master-0 kubenswrapper[33867]: E0219 03:32:18.299650 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" Feb 19 03:32:18.299689 master-0 kubenswrapper[33867]: I0219 03:32:18.299677 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" Feb 19 03:32:18.299888 master-0 kubenswrapper[33867]: E0219 03:32:18.299716 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager-cert-syncer" Feb 19 03:32:18.299888 master-0 kubenswrapper[33867]: I0219 03:32:18.299727 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager-cert-syncer" Feb 19 03:32:18.299888 master-0 kubenswrapper[33867]: E0219 03:32:18.299747 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="cluster-policy-controller" Feb 19 03:32:18.299888 master-0 kubenswrapper[33867]: I0219 03:32:18.299755 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="cluster-policy-controller" Feb 19 03:32:18.299888 master-0 
kubenswrapper[33867]: E0219 03:32:18.299776 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager-recovery-controller" Feb 19 03:32:18.299888 master-0 kubenswrapper[33867]: I0219 03:32:18.299787 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager-recovery-controller" Feb 19 03:32:18.299888 master-0 kubenswrapper[33867]: E0219 03:32:18.299827 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" Feb 19 03:32:18.299888 master-0 kubenswrapper[33867]: I0219 03:32:18.299836 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" Feb 19 03:32:18.300447 master-0 kubenswrapper[33867]: I0219 03:32:18.300056 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="cluster-policy-controller" Feb 19 03:32:18.300447 master-0 kubenswrapper[33867]: I0219 03:32:18.300070 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" Feb 19 03:32:18.300447 master-0 kubenswrapper[33867]: I0219 03:32:18.300085 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" Feb 19 03:32:18.300447 master-0 kubenswrapper[33867]: I0219 03:32:18.300105 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager-recovery-controller" Feb 19 03:32:18.300447 master-0 kubenswrapper[33867]: I0219 03:32:18.300144 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager-cert-syncer" Feb 19 03:32:18.300447 master-0 kubenswrapper[33867]: E0219 03:32:18.300396 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" Feb 19 03:32:18.300447 master-0 kubenswrapper[33867]: I0219 03:32:18.300411 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" Feb 19 03:32:18.300871 master-0 kubenswrapper[33867]: I0219 03:32:18.300653 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d93c932fb6b580283b25f4adc52bd3" containerName="kube-controller-manager" Feb 19 03:32:18.442997 master-0 kubenswrapper[33867]: I0219 03:32:18.442911 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c1dc97ac7ee5c51b2cd35ec472c28b60-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c1dc97ac7ee5c51b2cd35ec472c28b60\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:18.443158 master-0 kubenswrapper[33867]: I0219 03:32:18.443041 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c1dc97ac7ee5c51b2cd35ec472c28b60-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c1dc97ac7ee5c51b2cd35ec472c28b60\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 
03:32:18.505516 master-0 kubenswrapper[33867]: I0219 03:32:18.505451 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_54d93c932fb6b580283b25f4adc52bd3/kube-controller-manager/1.log" Feb 19 03:32:18.507117 master-0 kubenswrapper[33867]: I0219 03:32:18.507050 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_54d93c932fb6b580283b25f4adc52bd3/kube-controller-manager-cert-syncer/0.log" Feb 19 03:32:18.507808 master-0 kubenswrapper[33867]: I0219 03:32:18.507755 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:18.513156 master-0 kubenswrapper[33867]: I0219 03:32:18.513060 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="54d93c932fb6b580283b25f4adc52bd3" podUID="c1dc97ac7ee5c51b2cd35ec472c28b60" Feb 19 03:32:18.544750 master-0 kubenswrapper[33867]: I0219 03:32:18.544646 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c1dc97ac7ee5c51b2cd35ec472c28b60-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c1dc97ac7ee5c51b2cd35ec472c28b60\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:18.545188 master-0 kubenswrapper[33867]: I0219 03:32:18.544808 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c1dc97ac7ee5c51b2cd35ec472c28b60-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c1dc97ac7ee5c51b2cd35ec472c28b60\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:18.545188 master-0 kubenswrapper[33867]: I0219 03:32:18.544996 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c1dc97ac7ee5c51b2cd35ec472c28b60-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c1dc97ac7ee5c51b2cd35ec472c28b60\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:18.545188 master-0 kubenswrapper[33867]: I0219 03:32:18.545040 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c1dc97ac7ee5c51b2cd35ec472c28b60-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c1dc97ac7ee5c51b2cd35ec472c28b60\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:18.646050 master-0 kubenswrapper[33867]: I0219 03:32:18.645945 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-resource-dir\") pod \"54d93c932fb6b580283b25f4adc52bd3\" (UID: \"54d93c932fb6b580283b25f4adc52bd3\") " Feb 19 03:32:18.646432 master-0 kubenswrapper[33867]: I0219 03:32:18.646100 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-cert-dir\") pod \"54d93c932fb6b580283b25f4adc52bd3\" (UID: \"54d93c932fb6b580283b25f4adc52bd3\") " Feb 19 03:32:18.646432 master-0 kubenswrapper[33867]: I0219 03:32:18.646123 33867 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "54d93c932fb6b580283b25f4adc52bd3" (UID: "54d93c932fb6b580283b25f4adc52bd3"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:32:18.646432 master-0 kubenswrapper[33867]: I0219 03:32:18.646321 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "54d93c932fb6b580283b25f4adc52bd3" (UID: "54d93c932fb6b580283b25f4adc52bd3"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:32:18.646776 master-0 kubenswrapper[33867]: I0219 03:32:18.646747 33867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:32:18.646776 master-0 kubenswrapper[33867]: I0219 03:32:18.646770 33867 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/54d93c932fb6b580283b25f4adc52bd3-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:32:18.687722 master-0 kubenswrapper[33867]: I0219 03:32:18.687626 33867 generic.go:334] "Generic (PLEG): container finished" podID="3a995b93-5ab4-469e-b86c-6b9be83fe5d5" containerID="5a1c84f2d4af06628053995a68399da23535a0d6b08484d67314be8b8ebc1fac" exitCode=0 Feb 19 03:32:18.688143 master-0 kubenswrapper[33867]: I0219 03:32:18.687718 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3a995b93-5ab4-469e-b86c-6b9be83fe5d5","Type":"ContainerDied","Data":"5a1c84f2d4af06628053995a68399da23535a0d6b08484d67314be8b8ebc1fac"} Feb 19 03:32:18.689869 master-0 kubenswrapper[33867]: I0219 03:32:18.689840 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_54d93c932fb6b580283b25f4adc52bd3/kube-controller-manager/1.log" Feb 19 03:32:18.691767 master-0 kubenswrapper[33867]: I0219 03:32:18.691624 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_54d93c932fb6b580283b25f4adc52bd3/kube-controller-manager-cert-syncer/0.log" Feb 19 03:32:18.701900 master-0 kubenswrapper[33867]: I0219 03:32:18.692535 33867 generic.go:334] "Generic (PLEG): container finished" podID="54d93c932fb6b580283b25f4adc52bd3" containerID="2bfdd08c2f9d5dd55aca73518d58b45204430b97a64cd8f23d4d0084858c4cc5" exitCode=0 Feb 19 03:32:18.701900 master-0 kubenswrapper[33867]: I0219 03:32:18.692616 33867 generic.go:334] "Generic (PLEG): container finished" podID="54d93c932fb6b580283b25f4adc52bd3" containerID="2a6378171c9d7e861384ea33ac96d97796e6dcd51640a45f1e13d7b30275860c" exitCode=0 Feb 19 03:32:18.701900 master-0 kubenswrapper[33867]: I0219 03:32:18.692626 33867 generic.go:334] "Generic (PLEG): container finished" podID="54d93c932fb6b580283b25f4adc52bd3" containerID="41e516f80fdbca0ad0fec8609d99373a6c87f6ed69e42cdc14dde997afd65da8" exitCode=2 Feb 19 03:32:18.701900 master-0 kubenswrapper[33867]: I0219 03:32:18.692641 33867 generic.go:334] "Generic (PLEG): container finished" podID="54d93c932fb6b580283b25f4adc52bd3" containerID="15f86a7f9af00cfb660aed0d9de16b4b7b16e42980616991e94ef7198de70052" 
exitCode=0 Feb 19 03:32:18.701900 master-0 kubenswrapper[33867]: I0219 03:32:18.692729 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bff094ebb7f12391127b312dadf80a6b3c7978c494062056f5c36d42b113185" Feb 19 03:32:18.701900 master-0 kubenswrapper[33867]: I0219 03:32:18.692752 33867 scope.go:117] "RemoveContainer" containerID="09f0b5969371f66538342dedda74c378b1cb44c85d4e06d88d9e1246f1a72062" Feb 19 03:32:18.701900 master-0 kubenswrapper[33867]: I0219 03:32:18.692854 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:18.712941 master-0 kubenswrapper[33867]: I0219 03:32:18.712811 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="54d93c932fb6b580283b25f4adc52bd3" podUID="c1dc97ac7ee5c51b2cd35ec472c28b60" Feb 19 03:32:18.727898 master-0 kubenswrapper[33867]: I0219 03:32:18.727804 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="54d93c932fb6b580283b25f4adc52bd3" podUID="c1dc97ac7ee5c51b2cd35ec472c28b60" Feb 19 03:32:18.978467 master-0 kubenswrapper[33867]: I0219 03:32:18.978365 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d93c932fb6b580283b25f4adc52bd3" path="/var/lib/kubelet/pods/54d93c932fb6b580283b25f4adc52bd3/volumes" Feb 19 03:32:19.704529 master-0 kubenswrapper[33867]: I0219 03:32:19.704467 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_54d93c932fb6b580283b25f4adc52bd3/kube-controller-manager-cert-syncer/0.log" Feb 19 03:32:20.016980 master-0 kubenswrapper[33867]: I0219 03:32:20.016875 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:32:20.173678 master-0 kubenswrapper[33867]: I0219 03:32:20.173591 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-var-lock\") pod \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\" (UID: \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " Feb 19 03:32:20.174048 master-0 kubenswrapper[33867]: I0219 03:32:20.173754 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kubelet-dir\") pod \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\" (UID: \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " Feb 19 03:32:20.174048 master-0 kubenswrapper[33867]: I0219 03:32:20.173831 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kube-api-access\") pod \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\" (UID: \"3a995b93-5ab4-469e-b86c-6b9be83fe5d5\") " Feb 19 03:32:20.174048 master-0 kubenswrapper[33867]: I0219 03:32:20.173750 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-var-lock" (OuterVolumeSpecName: "var-lock") pod "3a995b93-5ab4-469e-b86c-6b9be83fe5d5" (UID: "3a995b93-5ab4-469e-b86c-6b9be83fe5d5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:32:20.174048 master-0 kubenswrapper[33867]: I0219 03:32:20.173786 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3a995b93-5ab4-469e-b86c-6b9be83fe5d5" (UID: "3a995b93-5ab4-469e-b86c-6b9be83fe5d5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:32:20.174279 master-0 kubenswrapper[33867]: I0219 03:32:20.174236 33867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:32:20.174279 master-0 kubenswrapper[33867]: I0219 03:32:20.174274 33867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 19 03:32:20.178222 master-0 kubenswrapper[33867]: I0219 03:32:20.178135 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3a995b93-5ab4-469e-b86c-6b9be83fe5d5" (UID: "3a995b93-5ab4-469e-b86c-6b9be83fe5d5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:32:20.276118 master-0 kubenswrapper[33867]: I0219 03:32:20.275934 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a995b93-5ab4-469e-b86c-6b9be83fe5d5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 19 03:32:20.726449 master-0 kubenswrapper[33867]: I0219 03:32:20.726337 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3a995b93-5ab4-469e-b86c-6b9be83fe5d5","Type":"ContainerDied","Data":"1c42f4b56340d0d48f8cf2f0de401ea1eefcb625dcf79a537b81aa2de23f9674"} Feb 19 03:32:20.726449 master-0 kubenswrapper[33867]: I0219 03:32:20.726413 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c42f4b56340d0d48f8cf2f0de401ea1eefcb625dcf79a537b81aa2de23f9674" Feb 19 03:32:20.726449 master-0 kubenswrapper[33867]: I0219 03:32:20.726427 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 19 03:32:31.955533 master-0 kubenswrapper[33867]: I0219 03:32:31.955418 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:31.972115 master-0 kubenswrapper[33867]: I0219 03:32:31.972068 33867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a8bc6de6-4f6d-4053-ab88-65d0fe32ee9a" Feb 19 03:32:31.972313 master-0 kubenswrapper[33867]: I0219 03:32:31.972299 33867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a8bc6de6-4f6d-4053-ab88-65d0fe32ee9a" Feb 19 03:32:31.988813 master-0 kubenswrapper[33867]: I0219 03:32:31.988718 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:32:31.989469 master-0 kubenswrapper[33867]: I0219 03:32:31.989431 33867 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:32.001946 master-0 kubenswrapper[33867]: I0219 03:32:32.001868 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:32:32.004388 master-0 kubenswrapper[33867]: I0219 03:32:32.004360 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:32.014147 master-0 kubenswrapper[33867]: I0219 03:32:32.014077 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 19 03:32:32.031571 master-0 kubenswrapper[33867]: W0219 03:32:32.031513 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1dc97ac7ee5c51b2cd35ec472c28b60.slice/crio-cf4ea21f906b98d5252c78a693976f0a1a231da8ff6103ff056e3583fa6bbac2 WatchSource:0}: Error finding container cf4ea21f906b98d5252c78a693976f0a1a231da8ff6103ff056e3583fa6bbac2: Status 404 returned error can't find the container with id cf4ea21f906b98d5252c78a693976f0a1a231da8ff6103ff056e3583fa6bbac2 Feb 19 03:32:32.841717 master-0 kubenswrapper[33867]: I0219 03:32:32.841590 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c1dc97ac7ee5c51b2cd35ec472c28b60","Type":"ContainerStarted","Data":"1dc84dcc84d2ac1c414b0c61463e98257ba8ff2caefc952e7bb567edc046982b"} Feb 19 03:32:32.841717 master-0 kubenswrapper[33867]: I0219 03:32:32.841665 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c1dc97ac7ee5c51b2cd35ec472c28b60","Type":"ContainerStarted","Data":"fe3e92768a268319573a5d7eac787f86186b38e30dbe6eb70fd366400c4e23c9"} Feb 19 03:32:32.841717 master-0 kubenswrapper[33867]: I0219 03:32:32.841687 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c1dc97ac7ee5c51b2cd35ec472c28b60","Type":"ContainerStarted","Data":"67ceb0d4577d61c1cc3c2d1b70f06602fc880ddd63c34525cd168edb2cda5bc6"} Feb 19 03:32:32.841717 master-0 kubenswrapper[33867]: I0219 03:32:32.841702 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"c1dc97ac7ee5c51b2cd35ec472c28b60","Type":"ContainerStarted","Data":"cf4ea21f906b98d5252c78a693976f0a1a231da8ff6103ff056e3583fa6bbac2"} Feb 19 03:32:33.860918 master-0 kubenswrapper[33867]: I0219 03:32:33.860810 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c1dc97ac7ee5c51b2cd35ec472c28b60","Type":"ContainerStarted","Data":"e5d00ca37f75f20dcdeee512df9411c66ccb05f9525712f5e33ff796bca874f0"} Feb 19 03:32:33.887096 master-0 kubenswrapper[33867]: I0219 03:32:33.886989 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.886961126 podStartE2EDuration="2.886961126s" podCreationTimestamp="2026-02-19 03:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:32:33.88394538 +0000 UTC m=+559.180616031" watchObservedRunningTime="2026-02-19 03:32:33.886961126 +0000 UTC m=+559.183631737" Feb 19 03:32:42.005414 master-0 kubenswrapper[33867]: I0219 03:32:42.005007 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:42.005414 master-0 kubenswrapper[33867]: I0219 03:32:42.005075 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:42.005414 master-0 kubenswrapper[33867]: I0219 03:32:42.005305 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:42.005414 master-0 kubenswrapper[33867]: I0219 03:32:42.005316 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:42.007554 master-0 kubenswrapper[33867]: I0219 03:32:42.007495 33867 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 19 03:32:42.007727 master-0 kubenswrapper[33867]: I0219 03:32:42.007691 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c1dc97ac7ee5c51b2cd35ec472c28b60" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 19 03:32:42.011673 master-0 kubenswrapper[33867]: I0219 03:32:42.011592 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:42.950910 master-0 kubenswrapper[33867]: I0219 03:32:42.947074 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:32:52.005422 master-0 kubenswrapper[33867]: I0219 03:32:52.005313 33867 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 19 03:32:52.005422 master-0 kubenswrapper[33867]: I0219 03:32:52.005403 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c1dc97ac7ee5c51b2cd35ec472c28b60" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 19 03:33:02.006509 master-0 kubenswrapper[33867]: I0219 03:33:02.006409 33867 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 19 03:33:02.007133 master-0 kubenswrapper[33867]: I0219 03:33:02.006534 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c1dc97ac7ee5c51b2cd35ec472c28b60" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 19 03:33:02.007133 master-0 kubenswrapper[33867]: I0219 03:33:02.006619 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:33:02.007732 master-0 kubenswrapper[33867]: I0219 03:33:02.007684 33867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"67ceb0d4577d61c1cc3c2d1b70f06602fc880ddd63c34525cd168edb2cda5bc6"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 19 03:33:02.007865 master-0 kubenswrapper[33867]: I0219 03:33:02.007830 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c1dc97ac7ee5c51b2cd35ec472c28b60" containerName="kube-controller-manager" containerID="cri-o://67ceb0d4577d61c1cc3c2d1b70f06602fc880ddd63c34525cd168edb2cda5bc6" gracePeriod=30 Feb 19 03:33:16.876927 master-0 kubenswrapper[33867]: I0219 03:33:16.876828 33867 scope.go:117] "RemoveContainer" containerID="2a6378171c9d7e861384ea33ac96d97796e6dcd51640a45f1e13d7b30275860c" Feb 19 03:33:16.895643 master-0 kubenswrapper[33867]: I0219 03:33:16.895593 33867 scope.go:117] "RemoveContainer" containerID="15f86a7f9af00cfb660aed0d9de16b4b7b16e42980616991e94ef7198de70052" Feb 19 03:33:16.917453 master-0 kubenswrapper[33867]: I0219 03:33:16.917390 33867 scope.go:117] "RemoveContainer" containerID="41e516f80fdbca0ad0fec8609d99373a6c87f6ed69e42cdc14dde997afd65da8" Feb 19 03:33:32.388074 master-0 kubenswrapper[33867]: I0219 03:33:32.387994 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c1dc97ac7ee5c51b2cd35ec472c28b60/kube-controller-manager/0.log" Feb 19 03:33:32.389039 master-0 kubenswrapper[33867]: I0219 03:33:32.388086 33867 generic.go:334] "Generic (PLEG): container finished" podID="c1dc97ac7ee5c51b2cd35ec472c28b60" containerID="67ceb0d4577d61c1cc3c2d1b70f06602fc880ddd63c34525cd168edb2cda5bc6" exitCode=137 
Feb 19 03:33:32.389039 master-0 kubenswrapper[33867]: I0219 03:33:32.388150 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c1dc97ac7ee5c51b2cd35ec472c28b60","Type":"ContainerDied","Data":"67ceb0d4577d61c1cc3c2d1b70f06602fc880ddd63c34525cd168edb2cda5bc6"} Feb 19 03:33:33.403139 master-0 kubenswrapper[33867]: I0219 03:33:33.403030 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c1dc97ac7ee5c51b2cd35ec472c28b60/kube-controller-manager/0.log" Feb 19 03:33:33.404200 master-0 kubenswrapper[33867]: I0219 03:33:33.403144 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c1dc97ac7ee5c51b2cd35ec472c28b60","Type":"ContainerStarted","Data":"e467033748e8af5c65e105636c2668093fde97e4b71b6ea6a437c44f1f442e0a"} Feb 19 03:33:42.005046 master-0 kubenswrapper[33867]: I0219 03:33:42.004955 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:33:42.005921 master-0 kubenswrapper[33867]: I0219 03:33:42.005458 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:33:42.010984 master-0 kubenswrapper[33867]: I0219 03:33:42.010944 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:33:42.500702 master-0 kubenswrapper[33867]: I0219 03:33:42.500638 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 19 03:33:49.299863 master-0 kubenswrapper[33867]: I0219 03:33:49.299785 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns"] Feb 19 03:33:49.300504 master-0 kubenswrapper[33867]: E0219 03:33:49.300067 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a995b93-5ab4-469e-b86c-6b9be83fe5d5" containerName="installer" Feb 19 03:33:49.300504 master-0 kubenswrapper[33867]: I0219 03:33:49.300079 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a995b93-5ab4-469e-b86c-6b9be83fe5d5" containerName="installer" Feb 19 03:33:49.300504 master-0 kubenswrapper[33867]: I0219 03:33:49.300242 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a995b93-5ab4-469e-b86c-6b9be83fe5d5" containerName="installer" Feb 19 03:33:49.301265 master-0 kubenswrapper[33867]: I0219 03:33:49.301225 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:49.304179 master-0 kubenswrapper[33867]: I0219 03:33:49.304115 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vbf8p" Feb 19 03:33:49.321373 master-0 kubenswrapper[33867]: I0219 03:33:49.321199 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns"] Feb 19 03:33:49.475939 master-0 kubenswrapper[33867]: I0219 03:33:49.475835 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:49.476202 master-0 kubenswrapper[33867]: I0219 03:33:49.475958 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:49.476202 master-0 kubenswrapper[33867]: I0219 03:33:49.476048 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx45d\" (UniqueName: \"kubernetes.io/projected/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-kube-api-access-lx45d\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:49.577480 master-0 kubenswrapper[33867]: I0219 03:33:49.577324 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx45d\" (UniqueName: \"kubernetes.io/projected/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-kube-api-access-lx45d\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:49.577480 master-0 kubenswrapper[33867]: I0219 03:33:49.577469 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:49.577748 master-0 kubenswrapper[33867]: I0219 03:33:49.577526 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:49.578132 master-0 kubenswrapper[33867]: I0219 03:33:49.578097 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:49.578511 master-0 kubenswrapper[33867]: I0219 03:33:49.578423 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:49.593243 master-0 kubenswrapper[33867]: I0219 03:33:49.593142 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx45d\" (UniqueName: \"kubernetes.io/projected/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-kube-api-access-lx45d\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:49.619205 master-0 kubenswrapper[33867]: I0219 03:33:49.619134 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:50.060660 master-0 kubenswrapper[33867]: I0219 03:33:50.060605 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns"] Feb 19 03:33:50.069925 master-0 kubenswrapper[33867]: W0219 03:33:50.069815 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f3b3e2b_e0a2_4632_89d0_04a1cc4aa4a8.slice/crio-9b4f70479592e20da92780cf90624e77a1e94f2075ede1eaf9fcfb64bda218f0 WatchSource:0}: Error finding container 9b4f70479592e20da92780cf90624e77a1e94f2075ede1eaf9fcfb64bda218f0: Status 404 returned error can't find the container with id 9b4f70479592e20da92780cf90624e77a1e94f2075ede1eaf9fcfb64bda218f0 Feb 19 03:33:50.582010 master-0 kubenswrapper[33867]: I0219 03:33:50.581890 33867 generic.go:334] "Generic (PLEG): container finished" podID="4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" containerID="13551ce69d2ffb87393407e31923089e091c652bd2455b6c9a1a6c9f77b61399" exitCode=0 Feb 19 03:33:50.582010 master-0 kubenswrapper[33867]: I0219 03:33:50.581957 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" event={"ID":"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8","Type":"ContainerDied","Data":"13551ce69d2ffb87393407e31923089e091c652bd2455b6c9a1a6c9f77b61399"} Feb 19 03:33:50.582586 master-0 kubenswrapper[33867]: I0219 03:33:50.582027 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" event={"ID":"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8","Type":"ContainerStarted","Data":"9b4f70479592e20da92780cf90624e77a1e94f2075ede1eaf9fcfb64bda218f0"} Feb 19 03:33:52.602818 master-0 kubenswrapper[33867]: I0219 03:33:52.602689 33867 generic.go:334] "Generic (PLEG): container finished" podID="4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" 
containerID="3198e2346001918153a5f3c9a2290bd46e96c0bddfe575b4662f0c58e9a00c90" exitCode=0 Feb 19 03:33:52.602818 master-0 kubenswrapper[33867]: I0219 03:33:52.602758 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" event={"ID":"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8","Type":"ContainerDied","Data":"3198e2346001918153a5f3c9a2290bd46e96c0bddfe575b4662f0c58e9a00c90"} Feb 19 03:33:53.612530 master-0 kubenswrapper[33867]: I0219 03:33:53.612444 33867 generic.go:334] "Generic (PLEG): container finished" podID="4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" containerID="f2aa3ad32bb50e5bfea5e35d62e5ada1ac0272d327dcf0c9f6edfd41583f2b9c" exitCode=0 Feb 19 03:33:53.612530 master-0 kubenswrapper[33867]: I0219 03:33:53.612516 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" event={"ID":"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8","Type":"ContainerDied","Data":"f2aa3ad32bb50e5bfea5e35d62e5ada1ac0272d327dcf0c9f6edfd41583f2b9c"} Feb 19 03:33:55.016352 master-0 kubenswrapper[33867]: I0219 03:33:55.016308 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:33:55.076351 master-0 kubenswrapper[33867]: I0219 03:33:55.076206 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx45d\" (UniqueName: \"kubernetes.io/projected/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-kube-api-access-lx45d\") pod \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " Feb 19 03:33:55.076622 master-0 kubenswrapper[33867]: I0219 03:33:55.076385 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-util\") pod \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " Feb 19 03:33:55.076673 master-0 kubenswrapper[33867]: I0219 03:33:55.076633 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-bundle\") pod \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\" (UID: \"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8\") " Feb 19 03:33:55.077519 master-0 kubenswrapper[33867]: I0219 03:33:55.077433 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-bundle" (OuterVolumeSpecName: "bundle") pod "4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" (UID: "4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:33:55.079760 master-0 kubenswrapper[33867]: I0219 03:33:55.079693 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-kube-api-access-lx45d" (OuterVolumeSpecName: "kube-api-access-lx45d") pod "4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" (UID: "4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8"). InnerVolumeSpecName "kube-api-access-lx45d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:33:55.094721 master-0 kubenswrapper[33867]: I0219 03:33:55.094628 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-util" (OuterVolumeSpecName: "util") pod "4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" (UID: "4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:33:55.178689 master-0 kubenswrapper[33867]: I0219 03:33:55.178615 33867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-util\") on node \"master-0\" DevicePath \"\"" Feb 19 03:33:55.178689 master-0 kubenswrapper[33867]: I0219 03:33:55.178676 33867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:33:55.178689 master-0 kubenswrapper[33867]: I0219 03:33:55.178697 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx45d\" (UniqueName: \"kubernetes.io/projected/4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8-kube-api-access-lx45d\") on node \"master-0\" DevicePath \"\"" Feb 19 03:33:55.650246 master-0 kubenswrapper[33867]: I0219 03:33:55.650183 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" event={"ID":"4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8","Type":"ContainerDied","Data":"9b4f70479592e20da92780cf90624e77a1e94f2075ede1eaf9fcfb64bda218f0"} Feb 19 03:33:55.650246 master-0 kubenswrapper[33867]: I0219 03:33:55.650241 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b4f70479592e20da92780cf90624e77a1e94f2075ede1eaf9fcfb64bda218f0" Feb 19 03:33:55.650635 master-0 kubenswrapper[33867]: I0219 03:33:55.650386 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns" Feb 19 03:34:02.498435 master-0 kubenswrapper[33867]: I0219 03:34:02.498338 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2"] Feb 19 03:34:02.499169 master-0 kubenswrapper[33867]: E0219 03:34:02.498904 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" containerName="extract" Feb 19 03:34:02.499169 master-0 kubenswrapper[33867]: I0219 03:34:02.498925 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" containerName="extract" Feb 19 03:34:02.499169 master-0 kubenswrapper[33867]: E0219 03:34:02.498943 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" containerName="pull" Feb 19 03:34:02.499169 master-0 kubenswrapper[33867]: I0219 03:34:02.498950 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" containerName="pull" Feb 19 03:34:02.499169 master-0 kubenswrapper[33867]: E0219 03:34:02.499000 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" containerName="util" Feb 19 03:34:02.499169 master-0 kubenswrapper[33867]: I0219 03:34:02.499008 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" containerName="util" Feb 19 03:34:02.499375 master-0 kubenswrapper[33867]: I0219 03:34:02.499189 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3b3e2b-e0a2-4632-89d0-04a1cc4aa4a8" containerName="extract" Feb 19 03:34:02.500230 master-0 kubenswrapper[33867]: I0219 03:34:02.500200 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.507350 master-0 kubenswrapper[33867]: I0219 03:34:02.507275 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Feb 19 03:34:02.507881 master-0 kubenswrapper[33867]: I0219 03:34:02.507855 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Feb 19 03:34:02.508062 master-0 kubenswrapper[33867]: I0219 03:34:02.508041 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Feb 19 03:34:02.508288 master-0 kubenswrapper[33867]: I0219 03:34:02.508268 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Feb 19 03:34:02.516555 master-0 kubenswrapper[33867]: I0219 03:34:02.516480 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Feb 19 03:34:02.518358 master-0 kubenswrapper[33867]: I0219 03:34:02.518301 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2"] Feb 19 03:34:02.625643 master-0 kubenswrapper[33867]: I0219 03:34:02.625565 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-socket-dir\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.625643 master-0 kubenswrapper[33867]: I0219 03:34:02.625622 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-metrics-cert\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.625945 master-0 kubenswrapper[33867]: I0219 03:34:02.625688 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-webhook-cert\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.625945 master-0 kubenswrapper[33867]: I0219 03:34:02.625718 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plkmg\" (UniqueName: \"kubernetes.io/projected/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-kube-api-access-plkmg\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.625945 master-0 kubenswrapper[33867]: I0219 03:34:02.625738 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-apiservice-cert\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.727636 master-0 kubenswrapper[33867]: I0219 03:34:02.727523 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-webhook-cert\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.728061 master-0 kubenswrapper[33867]: I0219 03:34:02.727878 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plkmg\" (UniqueName: \"kubernetes.io/projected/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-kube-api-access-plkmg\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.728061 master-0 kubenswrapper[33867]: I0219 03:34:02.727996 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-apiservice-cert\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.728322 master-0 kubenswrapper[33867]: I0219 03:34:02.728290 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-socket-dir\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.728392 master-0 kubenswrapper[33867]: I0219 03:34:02.728360 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-metrics-cert\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.729217 master-0 kubenswrapper[33867]: I0219 03:34:02.729135 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-socket-dir\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.732436 master-0 kubenswrapper[33867]: I0219 03:34:02.732275 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-metrics-cert\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.732855 master-0 kubenswrapper[33867]: I0219 03:34:02.732655 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-apiservice-cert\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.734797 master-0 kubenswrapper[33867]: I0219 03:34:02.734744 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-webhook-cert\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.749031 master-0 
kubenswrapper[33867]: I0219 03:34:02.748873 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plkmg\" (UniqueName: \"kubernetes.io/projected/7f4933e9-5e0e-44ec-b05f-0009050dc8c8-kube-api-access-plkmg\") pod \"lvms-operator-7bbcc8b5bf-xwbz2\" (UID: \"7f4933e9-5e0e-44ec-b05f-0009050dc8c8\") " pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:02.816691 master-0 kubenswrapper[33867]: I0219 03:34:02.816601 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:03.270416 master-0 kubenswrapper[33867]: I0219 03:34:03.270333 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2"] Feb 19 03:34:03.717448 master-0 kubenswrapper[33867]: I0219 03:34:03.717306 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" event={"ID":"7f4933e9-5e0e-44ec-b05f-0009050dc8c8","Type":"ContainerStarted","Data":"26ff936740732f1f1b4af6e89ba46b2597f306245108af7c82507c3daeb93fb5"} Feb 19 03:34:08.834190 master-0 kubenswrapper[33867]: I0219 03:34:08.834112 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" event={"ID":"7f4933e9-5e0e-44ec-b05f-0009050dc8c8","Type":"ContainerStarted","Data":"51bf9478e8ceab4f43fb9d49798131762b76e74f119f9ed17308b64913348444"} Feb 19 03:34:08.835055 master-0 kubenswrapper[33867]: I0219 03:34:08.834356 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:08.867543 master-0 kubenswrapper[33867]: I0219 03:34:08.867426 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" podStartSLOduration=2.254832424 podStartE2EDuration="6.867397383s" podCreationTimestamp="2026-02-19 03:34:02 +0000 UTC" firstStartedPulling="2026-02-19 03:34:03.27913014 +0000 UTC m=+648.575800751" lastFinishedPulling="2026-02-19 03:34:07.891695099 +0000 UTC m=+653.188365710" observedRunningTime="2026-02-19 03:34:08.863163013 +0000 UTC m=+654.159833634" watchObservedRunningTime="2026-02-19 03:34:08.867397383 +0000 UTC m=+654.164067994" Feb 19 03:34:09.845929 master-0 kubenswrapper[33867]: I0219 03:34:09.845857 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2" Feb 19 03:34:14.522651 master-0 kubenswrapper[33867]: I0219 03:34:14.522572 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7"] Feb 19 03:34:14.524351 master-0 kubenswrapper[33867]: I0219 03:34:14.524318 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:14.534105 master-0 kubenswrapper[33867]: I0219 03:34:14.534011 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vbf8p" Feb 19 03:34:14.548306 master-0 kubenswrapper[33867]: I0219 03:34:14.548213 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7"] Feb 19 03:34:14.675051 master-0 kubenswrapper[33867]: I0219 03:34:14.674951 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqd64\" (UniqueName: \"kubernetes.io/projected/5a8fac9e-b364-4c30-80b4-c3f208d864f3-kube-api-access-zqd64\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:14.675611 master-0 kubenswrapper[33867]: I0219 03:34:14.675372 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:14.675611 master-0 kubenswrapper[33867]: I0219 03:34:14.675599 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:14.777484 master-0 kubenswrapper[33867]: I0219 03:34:14.777324 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:14.777484 master-0 kubenswrapper[33867]: I0219 03:34:14.777402 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:14.777484 master-0 kubenswrapper[33867]: I0219 03:34:14.777486 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqd64\" (UniqueName: \"kubernetes.io/projected/5a8fac9e-b364-4c30-80b4-c3f208d864f3-kube-api-access-zqd64\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:14.778188 master-0 kubenswrapper[33867]: I0219 03:34:14.778139 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:14.778284 master-0 kubenswrapper[33867]: I0219 03:34:14.778199 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:14.795440 master-0 kubenswrapper[33867]: I0219 03:34:14.795136 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqd64\" (UniqueName: \"kubernetes.io/projected/5a8fac9e-b364-4c30-80b4-c3f208d864f3-kube-api-access-zqd64\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:14.855693 master-0 kubenswrapper[33867]: I0219 03:34:14.855615 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:15.126424 master-0 kubenswrapper[33867]: I0219 03:34:15.126351 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf"] Feb 19 03:34:15.128865 master-0 kubenswrapper[33867]: I0219 03:34:15.128804 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:15.137290 master-0 kubenswrapper[33867]: I0219 03:34:15.136991 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf"] Feb 19 03:34:15.285525 master-0 kubenswrapper[33867]: I0219 03:34:15.285445 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:15.285919 master-0 kubenswrapper[33867]: I0219 03:34:15.285616 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgvcn\" (UniqueName: \"kubernetes.io/projected/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-kube-api-access-lgvcn\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:15.285919 master-0 kubenswrapper[33867]: I0219 03:34:15.285666 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:15.314451 master-0 kubenswrapper[33867]: I0219 03:34:15.314397 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7"] Feb 19 03:34:15.387331 master-0 kubenswrapper[33867]: I0219 03:34:15.387143 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgvcn\" (UniqueName: \"kubernetes.io/projected/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-kube-api-access-lgvcn\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:15.387331 master-0 kubenswrapper[33867]: I0219 03:34:15.387219 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:15.387331 master-0 kubenswrapper[33867]: I0219 03:34:15.387287 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:15.388015 master-0 kubenswrapper[33867]: I0219 
03:34:15.387958 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:15.388372 master-0 kubenswrapper[33867]: I0219 03:34:15.388315 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:15.405046 master-0 kubenswrapper[33867]: I0219 03:34:15.404984 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgvcn\" (UniqueName: \"kubernetes.io/projected/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-kube-api-access-lgvcn\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:15.451681 master-0 kubenswrapper[33867]: I0219 03:34:15.451606 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:15.696176 master-0 kubenswrapper[33867]: I0219 03:34:15.696120 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf"] Feb 19 03:34:15.896362 master-0 kubenswrapper[33867]: I0219 03:34:15.896279 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" event={"ID":"6d37cefd-2dd5-4c14-a17e-4a8b34492d99","Type":"ContainerStarted","Data":"84231527b5350fb9b6ba193f281eea0767107e0ac30ece0624c42c64bc0c8de6"} Feb 19 03:34:15.896362 master-0 kubenswrapper[33867]: I0219 03:34:15.896348 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" event={"ID":"6d37cefd-2dd5-4c14-a17e-4a8b34492d99","Type":"ContainerStarted","Data":"e2d72d6c227124ae45e4b02f4a4aee1e3f3a89ccf6fbf2168ed1f177ff7ee9d5"} Feb 19 03:34:15.899190 master-0 kubenswrapper[33867]: I0219 03:34:15.899137 33867 generic.go:334] "Generic (PLEG): container finished" podID="5a8fac9e-b364-4c30-80b4-c3f208d864f3" containerID="493cce8029e8a881eaec24999c27c90b033353875ad59fc88f8a294e132ca157" exitCode=0 Feb 19 03:34:15.899297 master-0 kubenswrapper[33867]: I0219 03:34:15.899201 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" event={"ID":"5a8fac9e-b364-4c30-80b4-c3f208d864f3","Type":"ContainerDied","Data":"493cce8029e8a881eaec24999c27c90b033353875ad59fc88f8a294e132ca157"} Feb 19 03:34:15.899297 master-0 kubenswrapper[33867]: I0219 03:34:15.899237 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" 
event={"ID":"5a8fac9e-b364-4c30-80b4-c3f208d864f3","Type":"ContainerStarted","Data":"9179adb369414f00626790f2595aaba67a08a777e30c288d18df8b3542dec846"} Feb 19 03:34:16.119698 master-0 kubenswrapper[33867]: I0219 03:34:16.119378 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42"] Feb 19 03:34:16.121204 master-0 kubenswrapper[33867]: I0219 03:34:16.121164 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:16.134883 master-0 kubenswrapper[33867]: I0219 03:34:16.133793 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42"] Feb 19 03:34:16.206350 master-0 kubenswrapper[33867]: I0219 03:34:16.206230 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkkqs\" (UniqueName: \"kubernetes.io/projected/7238c70b-f388-408b-b136-a7f88a1402dd-kube-api-access-vkkqs\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:16.206671 master-0 kubenswrapper[33867]: I0219 03:34:16.206610 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:16.206975 master-0 kubenswrapper[33867]: I0219 03:34:16.206929 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:16.309770 master-0 kubenswrapper[33867]: I0219 03:34:16.309673 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:16.310166 master-0 kubenswrapper[33867]: I0219 03:34:16.309867 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkkqs\" (UniqueName: \"kubernetes.io/projected/7238c70b-f388-408b-b136-a7f88a1402dd-kube-api-access-vkkqs\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:16.310277 master-0 kubenswrapper[33867]: I0219 03:34:16.310188 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-bundle\") 
pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:16.310581 master-0 kubenswrapper[33867]: I0219 03:34:16.310503 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:16.310841 master-0 kubenswrapper[33867]: I0219 03:34:16.310800 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:16.330097 master-0 kubenswrapper[33867]: I0219 03:34:16.329990 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkkqs\" (UniqueName: \"kubernetes.io/projected/7238c70b-f388-408b-b136-a7f88a1402dd-kube-api-access-vkkqs\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:16.454630 master-0 kubenswrapper[33867]: I0219 03:34:16.454517 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:16.894201 master-0 kubenswrapper[33867]: I0219 03:34:16.894121 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42"] Feb 19 03:34:16.895882 master-0 kubenswrapper[33867]: W0219 03:34:16.895809 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7238c70b_f388_408b_b136_a7f88a1402dd.slice/crio-ed8f4f6e101733b90c0ebc011ef14ae7c6e9b2170462e9f72886a9213c182e36 WatchSource:0}: Error finding container ed8f4f6e101733b90c0ebc011ef14ae7c6e9b2170462e9f72886a9213c182e36: Status 404 returned error can't find the container with id ed8f4f6e101733b90c0ebc011ef14ae7c6e9b2170462e9f72886a9213c182e36 Feb 19 03:34:16.908926 master-0 kubenswrapper[33867]: I0219 03:34:16.908784 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" event={"ID":"7238c70b-f388-408b-b136-a7f88a1402dd","Type":"ContainerStarted","Data":"ed8f4f6e101733b90c0ebc011ef14ae7c6e9b2170462e9f72886a9213c182e36"} Feb 19 03:34:16.911044 master-0 kubenswrapper[33867]: I0219 03:34:16.910962 33867 generic.go:334] "Generic (PLEG): container finished" podID="6d37cefd-2dd5-4c14-a17e-4a8b34492d99" containerID="84231527b5350fb9b6ba193f281eea0767107e0ac30ece0624c42c64bc0c8de6" exitCode=0 Feb 19 03:34:16.911128 master-0 kubenswrapper[33867]: I0219 03:34:16.911040 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" 
event={"ID":"6d37cefd-2dd5-4c14-a17e-4a8b34492d99","Type":"ContainerDied","Data":"84231527b5350fb9b6ba193f281eea0767107e0ac30ece0624c42c64bc0c8de6"} Feb 19 03:34:17.922136 master-0 kubenswrapper[33867]: I0219 03:34:17.922059 33867 generic.go:334] "Generic (PLEG): container finished" podID="7238c70b-f388-408b-b136-a7f88a1402dd" containerID="768a54b97cc0838356d3399f9eec16e81429bb6aca130cd2bb370e0e389edafe" exitCode=0 Feb 19 03:34:17.922136 master-0 kubenswrapper[33867]: I0219 03:34:17.922114 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" event={"ID":"7238c70b-f388-408b-b136-a7f88a1402dd","Type":"ContainerDied","Data":"768a54b97cc0838356d3399f9eec16e81429bb6aca130cd2bb370e0e389edafe"} Feb 19 03:34:19.957879 master-0 kubenswrapper[33867]: I0219 03:34:19.957766 33867 generic.go:334] "Generic (PLEG): container finished" podID="5a8fac9e-b364-4c30-80b4-c3f208d864f3" containerID="6de0c751b42d01f586e56220bad3640ffa465fba71a00f133884864e0ad5d3b8" exitCode=0 Feb 19 03:34:19.957879 master-0 kubenswrapper[33867]: I0219 03:34:19.957823 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" event={"ID":"5a8fac9e-b364-4c30-80b4-c3f208d864f3","Type":"ContainerDied","Data":"6de0c751b42d01f586e56220bad3640ffa465fba71a00f133884864e0ad5d3b8"} Feb 19 03:34:19.963612 master-0 kubenswrapper[33867]: I0219 03:34:19.963537 33867 generic.go:334] "Generic (PLEG): container finished" podID="6d37cefd-2dd5-4c14-a17e-4a8b34492d99" containerID="62bed01ce106d09d8b3ee75eb12481592c3f6720e85a1e68fa2868ef4ca44441" exitCode=0 Feb 19 03:34:19.963703 master-0 kubenswrapper[33867]: I0219 03:34:19.963606 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" event={"ID":"6d37cefd-2dd5-4c14-a17e-4a8b34492d99","Type":"ContainerDied","Data":"62bed01ce106d09d8b3ee75eb12481592c3f6720e85a1e68fa2868ef4ca44441"} Feb 19 03:34:20.975417 master-0 kubenswrapper[33867]: I0219 03:34:20.975340 33867 generic.go:334] "Generic (PLEG): container finished" podID="6d37cefd-2dd5-4c14-a17e-4a8b34492d99" containerID="1df32ce6cf01572d48b75ab07475fd31cdef873ea243948d651670264d0cc014" exitCode=0 Feb 19 03:34:20.976089 master-0 kubenswrapper[33867]: I0219 03:34:20.975408 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" event={"ID":"6d37cefd-2dd5-4c14-a17e-4a8b34492d99","Type":"ContainerDied","Data":"1df32ce6cf01572d48b75ab07475fd31cdef873ea243948d651670264d0cc014"} Feb 19 03:34:20.979545 master-0 kubenswrapper[33867]: I0219 03:34:20.979493 33867 generic.go:334] "Generic (PLEG): container finished" podID="5a8fac9e-b364-4c30-80b4-c3f208d864f3" containerID="2e91f3058c04c1a494552adfe070e5d2b223e066a423ddb7a0124a924b1881c4" exitCode=0 Feb 19 03:34:20.979653 master-0 kubenswrapper[33867]: I0219 03:34:20.979596 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" event={"ID":"5a8fac9e-b364-4c30-80b4-c3f208d864f3","Type":"ContainerDied","Data":"2e91f3058c04c1a494552adfe070e5d2b223e066a423ddb7a0124a924b1881c4"} Feb 19 03:34:20.982681 master-0 kubenswrapper[33867]: I0219 03:34:20.982656 33867 generic.go:334] "Generic (PLEG): container finished" podID="7238c70b-f388-408b-b136-a7f88a1402dd" 
containerID="f5c709ea6c546e284bde6eeb4328c6622f3f3c0a0a6b7c67767b13a5beee6cc1" exitCode=0 Feb 19 03:34:20.982807 master-0 kubenswrapper[33867]: I0219 03:34:20.982680 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" event={"ID":"7238c70b-f388-408b-b136-a7f88a1402dd","Type":"ContainerDied","Data":"f5c709ea6c546e284bde6eeb4328c6622f3f3c0a0a6b7c67767b13a5beee6cc1"} Feb 19 03:34:22.012426 master-0 kubenswrapper[33867]: I0219 03:34:22.012226 33867 generic.go:334] "Generic (PLEG): container finished" podID="7238c70b-f388-408b-b136-a7f88a1402dd" containerID="b03c19ec00f49616aa6b8764c9aa3b92638ef861a8096248af51eddaef2bbaad" exitCode=0 Feb 19 03:34:22.012426 master-0 kubenswrapper[33867]: I0219 03:34:22.012393 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" event={"ID":"7238c70b-f388-408b-b136-a7f88a1402dd","Type":"ContainerDied","Data":"b03c19ec00f49616aa6b8764c9aa3b92638ef861a8096248af51eddaef2bbaad"} Feb 19 03:34:22.539948 master-0 kubenswrapper[33867]: I0219 03:34:22.539886 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:22.543647 master-0 kubenswrapper[33867]: I0219 03:34:22.543592 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:22.626762 master-0 kubenswrapper[33867]: I0219 03:34:22.626646 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-bundle\") pod \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " Feb 19 03:34:22.626762 master-0 kubenswrapper[33867]: I0219 03:34:22.626752 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-util\") pod \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " Feb 19 03:34:22.627330 master-0 kubenswrapper[33867]: I0219 03:34:22.626839 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgvcn\" (UniqueName: \"kubernetes.io/projected/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-kube-api-access-lgvcn\") pod \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " Feb 19 03:34:22.627330 master-0 kubenswrapper[33867]: I0219 03:34:22.627041 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-bundle\") pod \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " Feb 19 03:34:22.627330 master-0 kubenswrapper[33867]: I0219 03:34:22.627084 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-util\") pod \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\" (UID: \"6d37cefd-2dd5-4c14-a17e-4a8b34492d99\") " Feb 19 03:34:22.627330 master-0 kubenswrapper[33867]: I0219 03:34:22.627189 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zqd64\" (UniqueName: \"kubernetes.io/projected/5a8fac9e-b364-4c30-80b4-c3f208d864f3-kube-api-access-zqd64\") pod \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\" (UID: \"5a8fac9e-b364-4c30-80b4-c3f208d864f3\") " Feb 19 03:34:22.628222 master-0 kubenswrapper[33867]: I0219 03:34:22.628173 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-bundle" (OuterVolumeSpecName: "bundle") pod "6d37cefd-2dd5-4c14-a17e-4a8b34492d99" (UID: "6d37cefd-2dd5-4c14-a17e-4a8b34492d99"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:34:22.628370 master-0 kubenswrapper[33867]: I0219 03:34:22.628309 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-bundle" (OuterVolumeSpecName: "bundle") pod "5a8fac9e-b364-4c30-80b4-c3f208d864f3" (UID: "5a8fac9e-b364-4c30-80b4-c3f208d864f3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:34:22.631292 master-0 kubenswrapper[33867]: I0219 03:34:22.631203 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a8fac9e-b364-4c30-80b4-c3f208d864f3-kube-api-access-zqd64" (OuterVolumeSpecName: "kube-api-access-zqd64") pod "5a8fac9e-b364-4c30-80b4-c3f208d864f3" (UID: "5a8fac9e-b364-4c30-80b4-c3f208d864f3"). InnerVolumeSpecName "kube-api-access-zqd64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:34:22.632312 master-0 kubenswrapper[33867]: I0219 03:34:22.632249 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-kube-api-access-lgvcn" (OuterVolumeSpecName: "kube-api-access-lgvcn") pod "6d37cefd-2dd5-4c14-a17e-4a8b34492d99" (UID: "6d37cefd-2dd5-4c14-a17e-4a8b34492d99"). InnerVolumeSpecName "kube-api-access-lgvcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:34:22.637329 master-0 kubenswrapper[33867]: I0219 03:34:22.637280 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-util" (OuterVolumeSpecName: "util") pod "6d37cefd-2dd5-4c14-a17e-4a8b34492d99" (UID: "6d37cefd-2dd5-4c14-a17e-4a8b34492d99"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:34:22.637697 master-0 kubenswrapper[33867]: I0219 03:34:22.637640 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-util" (OuterVolumeSpecName: "util") pod "5a8fac9e-b364-4c30-80b4-c3f208d864f3" (UID: "5a8fac9e-b364-4c30-80b4-c3f208d864f3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:34:22.729198 master-0 kubenswrapper[33867]: I0219 03:34:22.728978 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqd64\" (UniqueName: \"kubernetes.io/projected/5a8fac9e-b364-4c30-80b4-c3f208d864f3-kube-api-access-zqd64\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:22.729198 master-0 kubenswrapper[33867]: I0219 03:34:22.729042 33867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:22.729198 master-0 kubenswrapper[33867]: I0219 03:34:22.729053 33867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a8fac9e-b364-4c30-80b4-c3f208d864f3-util\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:22.729198 master-0 kubenswrapper[33867]: I0219 03:34:22.729066 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgvcn\" (UniqueName: \"kubernetes.io/projected/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-kube-api-access-lgvcn\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:22.729198 master-0 kubenswrapper[33867]: I0219 03:34:22.729077 33867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:22.729198 master-0 kubenswrapper[33867]: I0219 03:34:22.729086 33867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d37cefd-2dd5-4c14-a17e-4a8b34492d99-util\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:23.023800 master-0 kubenswrapper[33867]: I0219 03:34:23.023709 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" event={"ID":"6d37cefd-2dd5-4c14-a17e-4a8b34492d99","Type":"ContainerDied","Data":"e2d72d6c227124ae45e4b02f4a4aee1e3f3a89ccf6fbf2168ed1f177ff7ee9d5"} Feb 19 03:34:23.023800 master-0 kubenswrapper[33867]: I0219 03:34:23.023782 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2d72d6c227124ae45e4b02f4a4aee1e3f3a89ccf6fbf2168ed1f177ff7ee9d5" Feb 19 03:34:23.023800 master-0 kubenswrapper[33867]: I0219 03:34:23.023743 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf" Feb 19 03:34:23.026486 master-0 kubenswrapper[33867]: I0219 03:34:23.026426 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" event={"ID":"5a8fac9e-b364-4c30-80b4-c3f208d864f3","Type":"ContainerDied","Data":"9179adb369414f00626790f2595aaba67a08a777e30c288d18df8b3542dec846"} Feb 19 03:34:23.026486 master-0 kubenswrapper[33867]: I0219 03:34:23.026472 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7" Feb 19 03:34:23.026603 master-0 kubenswrapper[33867]: I0219 03:34:23.026492 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9179adb369414f00626790f2595aaba67a08a777e30c288d18df8b3542dec846" Feb 19 03:34:23.376502 master-0 kubenswrapper[33867]: I0219 03:34:23.376414 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:23.443817 master-0 kubenswrapper[33867]: I0219 03:34:23.443731 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkkqs\" (UniqueName: \"kubernetes.io/projected/7238c70b-f388-408b-b136-a7f88a1402dd-kube-api-access-vkkqs\") pod \"7238c70b-f388-408b-b136-a7f88a1402dd\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " Feb 19 03:34:23.444091 master-0 kubenswrapper[33867]: I0219 03:34:23.443860 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-bundle\") pod \"7238c70b-f388-408b-b136-a7f88a1402dd\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " Feb 19 03:34:23.444538 master-0 kubenswrapper[33867]: I0219 03:34:23.444488 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-bundle" (OuterVolumeSpecName: "bundle") pod "7238c70b-f388-408b-b136-a7f88a1402dd" (UID: "7238c70b-f388-408b-b136-a7f88a1402dd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:34:23.445279 master-0 kubenswrapper[33867]: I0219 03:34:23.444739 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-util\") pod \"7238c70b-f388-408b-b136-a7f88a1402dd\" (UID: \"7238c70b-f388-408b-b136-a7f88a1402dd\") " Feb 19 03:34:23.445684 master-0 kubenswrapper[33867]: I0219 03:34:23.445644 33867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:23.448351 master-0 kubenswrapper[33867]: I0219 03:34:23.447615 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7238c70b-f388-408b-b136-a7f88a1402dd-kube-api-access-vkkqs" (OuterVolumeSpecName: "kube-api-access-vkkqs") pod "7238c70b-f388-408b-b136-a7f88a1402dd" (UID: "7238c70b-f388-408b-b136-a7f88a1402dd"). InnerVolumeSpecName "kube-api-access-vkkqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:34:23.454780 master-0 kubenswrapper[33867]: I0219 03:34:23.454724 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-util" (OuterVolumeSpecName: "util") pod "7238c70b-f388-408b-b136-a7f88a1402dd" (UID: "7238c70b-f388-408b-b136-a7f88a1402dd"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:34:23.547657 master-0 kubenswrapper[33867]: I0219 03:34:23.547560 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkkqs\" (UniqueName: \"kubernetes.io/projected/7238c70b-f388-408b-b136-a7f88a1402dd-kube-api-access-vkkqs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:23.547657 master-0 kubenswrapper[33867]: I0219 03:34:23.547627 33867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7238c70b-f388-408b-b136-a7f88a1402dd-util\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:24.037136 master-0 kubenswrapper[33867]: I0219 03:34:24.037053 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" event={"ID":"7238c70b-f388-408b-b136-a7f88a1402dd","Type":"ContainerDied","Data":"ed8f4f6e101733b90c0ebc011ef14ae7c6e9b2170462e9f72886a9213c182e36"} Feb 19 03:34:24.037136 master-0 kubenswrapper[33867]: I0219 03:34:24.037118 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed8f4f6e101733b90c0ebc011ef14ae7c6e9b2170462e9f72886a9213c182e36" Feb 19 03:34:24.037136 master-0 kubenswrapper[33867]: I0219 03:34:24.037126 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42" Feb 19 03:34:25.614623 master-0 kubenswrapper[33867]: I0219 03:34:25.614526 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr"] Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: E0219 03:34:25.615035 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d37cefd-2dd5-4c14-a17e-4a8b34492d99" containerName="extract" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: I0219 03:34:25.615056 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d37cefd-2dd5-4c14-a17e-4a8b34492d99" containerName="extract" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: E0219 03:34:25.615114 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a8fac9e-b364-4c30-80b4-c3f208d864f3" containerName="util" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: I0219 03:34:25.615125 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a8fac9e-b364-4c30-80b4-c3f208d864f3" containerName="util" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: E0219 03:34:25.615142 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d37cefd-2dd5-4c14-a17e-4a8b34492d99" containerName="util" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: I0219 03:34:25.615150 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d37cefd-2dd5-4c14-a17e-4a8b34492d99" containerName="util" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: E0219 03:34:25.615168 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7238c70b-f388-408b-b136-a7f88a1402dd" containerName="pull" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: I0219 03:34:25.615178 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7238c70b-f388-408b-b136-a7f88a1402dd" containerName="pull" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: E0219 03:34:25.615201 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a8fac9e-b364-4c30-80b4-c3f208d864f3" containerName="pull" Feb 19 03:34:25.615295 
master-0 kubenswrapper[33867]: I0219 03:34:25.615211 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a8fac9e-b364-4c30-80b4-c3f208d864f3" containerName="pull" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: E0219 03:34:25.615246 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7238c70b-f388-408b-b136-a7f88a1402dd" containerName="util" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: I0219 03:34:25.615253 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7238c70b-f388-408b-b136-a7f88a1402dd" containerName="util" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: E0219 03:34:25.615280 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d37cefd-2dd5-4c14-a17e-4a8b34492d99" containerName="pull" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: I0219 03:34:25.615289 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d37cefd-2dd5-4c14-a17e-4a8b34492d99" containerName="pull" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: E0219 03:34:25.615307 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a8fac9e-b364-4c30-80b4-c3f208d864f3" containerName="extract" Feb 19 03:34:25.615295 master-0 kubenswrapper[33867]: I0219 03:34:25.615316 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a8fac9e-b364-4c30-80b4-c3f208d864f3" containerName="extract" Feb 19 03:34:25.615913 master-0 kubenswrapper[33867]: E0219 03:34:25.615332 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7238c70b-f388-408b-b136-a7f88a1402dd" containerName="extract" Feb 19 03:34:25.615913 master-0 kubenswrapper[33867]: I0219 03:34:25.615341 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7238c70b-f388-408b-b136-a7f88a1402dd" containerName="extract" Feb 19 03:34:25.615913 master-0 kubenswrapper[33867]: I0219 03:34:25.615547 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d37cefd-2dd5-4c14-a17e-4a8b34492d99" containerName="extract" Feb 19 03:34:25.615913 master-0 kubenswrapper[33867]: I0219 03:34:25.615569 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7238c70b-f388-408b-b136-a7f88a1402dd" containerName="extract" Feb 19 03:34:25.615913 master-0 kubenswrapper[33867]: I0219 03:34:25.615595 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a8fac9e-b364-4c30-80b4-c3f208d864f3" containerName="extract" Feb 19 03:34:25.617239 master-0 kubenswrapper[33867]: I0219 03:34:25.617206 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:25.624194 master-0 kubenswrapper[33867]: I0219 03:34:25.624138 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vbf8p" Feb 19 03:34:25.640758 master-0 kubenswrapper[33867]: I0219 03:34:25.640682 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr"] Feb 19 03:34:25.687091 master-0 kubenswrapper[33867]: I0219 03:34:25.687042 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:25.687359 master-0 kubenswrapper[33867]: I0219 03:34:25.687341 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:25.687458 master-0 kubenswrapper[33867]: I0219 03:34:25.687444 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpmtb\" (UniqueName: \"kubernetes.io/projected/a4ef22d5-ed98-4266-a018-343b07d0ce8f-kube-api-access-bpmtb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:25.790200 master-0 kubenswrapper[33867]: I0219 03:34:25.790112 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:25.790200 master-0 kubenswrapper[33867]: I0219 03:34:25.790193 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:25.790666 master-0 kubenswrapper[33867]: I0219 03:34:25.790269 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpmtb\" (UniqueName: \"kubernetes.io/projected/a4ef22d5-ed98-4266-a018-343b07d0ce8f-kube-api-access-bpmtb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:25.791072 master-0 kubenswrapper[33867]: I0219 03:34:25.791020 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:25.791134 master-0 kubenswrapper[33867]: I0219 03:34:25.791081 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:25.809205 master-0 kubenswrapper[33867]: I0219 03:34:25.809138 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpmtb\" (UniqueName: \"kubernetes.io/projected/a4ef22d5-ed98-4266-a018-343b07d0ce8f-kube-api-access-bpmtb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:25.933797 master-0 kubenswrapper[33867]: I0219 03:34:25.933715 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:26.471291 master-0 kubenswrapper[33867]: I0219 03:34:26.468406 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr"] Feb 19 03:34:27.098565 master-0 kubenswrapper[33867]: I0219 03:34:27.098354 33867 generic.go:334] "Generic (PLEG): container finished" podID="a4ef22d5-ed98-4266-a018-343b07d0ce8f" containerID="945f9114e1b1532639dc9c053d809bec34dffdb432c11c542bed5eb8b2673511" exitCode=0 Feb 19 03:34:27.098565 master-0 kubenswrapper[33867]: I0219 03:34:27.098433 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" event={"ID":"a4ef22d5-ed98-4266-a018-343b07d0ce8f","Type":"ContainerDied","Data":"945f9114e1b1532639dc9c053d809bec34dffdb432c11c542bed5eb8b2673511"} Feb 19 03:34:27.098565 master-0 kubenswrapper[33867]: I0219 03:34:27.098479 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" event={"ID":"a4ef22d5-ed98-4266-a018-343b07d0ce8f","Type":"ContainerStarted","Data":"b29a915b5edd5444965204c65319ffa88c1fe02be2b299bd699a872f823ba5bf"} Feb 19 03:34:28.577328 master-0 kubenswrapper[33867]: I0219 03:34:28.577095 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9"] Feb 19 03:34:28.578494 master-0 kubenswrapper[33867]: I0219 03:34:28.578468 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9" Feb 19 03:34:28.580909 master-0 kubenswrapper[33867]: I0219 03:34:28.580862 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 19 03:34:28.581483 master-0 kubenswrapper[33867]: I0219 03:34:28.581443 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 19 03:34:28.608982 master-0 kubenswrapper[33867]: I0219 03:34:28.607950 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9"] Feb 19 03:34:28.659184 master-0 kubenswrapper[33867]: I0219 03:34:28.659101 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhd5l\" (UniqueName: \"kubernetes.io/projected/45007001-b442-40a7-9e66-7183cfa8a603-kube-api-access-hhd5l\") pod \"cert-manager-operator-controller-manager-66c8bdd694-49zg9\" (UID: \"45007001-b442-40a7-9e66-7183cfa8a603\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9" Feb 19 03:34:28.659184 master-0 kubenswrapper[33867]: I0219 03:34:28.659178 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45007001-b442-40a7-9e66-7183cfa8a603-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-49zg9\" (UID: \"45007001-b442-40a7-9e66-7183cfa8a603\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9" Feb 19 03:34:28.760928 master-0 kubenswrapper[33867]: I0219 03:34:28.760842 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhd5l\" (UniqueName: \"kubernetes.io/projected/45007001-b442-40a7-9e66-7183cfa8a603-kube-api-access-hhd5l\") pod \"cert-manager-operator-controller-manager-66c8bdd694-49zg9\" (UID: \"45007001-b442-40a7-9e66-7183cfa8a603\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9" Feb 19 03:34:28.760928 master-0 kubenswrapper[33867]: I0219 03:34:28.760914 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45007001-b442-40a7-9e66-7183cfa8a603-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-49zg9\" (UID: \"45007001-b442-40a7-9e66-7183cfa8a603\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9" Feb 19 03:34:28.761568 master-0 kubenswrapper[33867]: I0219 03:34:28.761542 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45007001-b442-40a7-9e66-7183cfa8a603-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-49zg9\" (UID: \"45007001-b442-40a7-9e66-7183cfa8a603\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9" Feb 19 03:34:28.781772 master-0 kubenswrapper[33867]: I0219 03:34:28.781696 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhd5l\" (UniqueName: \"kubernetes.io/projected/45007001-b442-40a7-9e66-7183cfa8a603-kube-api-access-hhd5l\") pod \"cert-manager-operator-controller-manager-66c8bdd694-49zg9\" (UID: \"45007001-b442-40a7-9e66-7183cfa8a603\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9" 
Feb 19 03:34:28.896753 master-0 kubenswrapper[33867]: I0219 03:34:28.896655 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9" Feb 19 03:34:29.139746 master-0 kubenswrapper[33867]: I0219 03:34:29.139662 33867 generic.go:334] "Generic (PLEG): container finished" podID="a4ef22d5-ed98-4266-a018-343b07d0ce8f" containerID="acdb3e4a141bd2979698ca5e4171dfa1e6ddb53406255028422f6e0920f0fbfd" exitCode=0 Feb 19 03:34:29.139746 master-0 kubenswrapper[33867]: I0219 03:34:29.139740 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" event={"ID":"a4ef22d5-ed98-4266-a018-343b07d0ce8f","Type":"ContainerDied","Data":"acdb3e4a141bd2979698ca5e4171dfa1e6ddb53406255028422f6e0920f0fbfd"} Feb 19 03:34:29.204534 master-0 kubenswrapper[33867]: W0219 03:34:29.204428 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45007001_b442_40a7_9e66_7183cfa8a603.slice/crio-1dade3ad3a1d693054f454480650c455625a837b36d834254a36a473ab9c705b WatchSource:0}: Error finding container 1dade3ad3a1d693054f454480650c455625a837b36d834254a36a473ab9c705b: Status 404 returned error can't find the container with id 1dade3ad3a1d693054f454480650c455625a837b36d834254a36a473ab9c705b Feb 19 03:34:29.217905 master-0 kubenswrapper[33867]: I0219 03:34:29.217833 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9"] Feb 19 03:34:30.153758 master-0 kubenswrapper[33867]: I0219 03:34:30.153624 33867 generic.go:334] "Generic (PLEG): container finished" podID="a4ef22d5-ed98-4266-a018-343b07d0ce8f" containerID="a5c6c532eb71a821a2c01906c711fdaff84c7f4d5ef71c4001532de693ce471e" exitCode=0 Feb 19 03:34:30.154574 master-0 kubenswrapper[33867]: I0219 03:34:30.153706 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" event={"ID":"a4ef22d5-ed98-4266-a018-343b07d0ce8f","Type":"ContainerDied","Data":"a5c6c532eb71a821a2c01906c711fdaff84c7f4d5ef71c4001532de693ce471e"} Feb 19 03:34:30.155464 master-0 kubenswrapper[33867]: I0219 03:34:30.155409 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9" event={"ID":"45007001-b442-40a7-9e66-7183cfa8a603","Type":"ContainerStarted","Data":"1dade3ad3a1d693054f454480650c455625a837b36d834254a36a473ab9c705b"} Feb 19 03:34:31.554569 master-0 kubenswrapper[33867]: I0219 03:34:31.554511 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:31.619900 master-0 kubenswrapper[33867]: I0219 03:34:31.619822 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-bundle\") pod \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " Feb 19 03:34:31.620187 master-0 kubenswrapper[33867]: I0219 03:34:31.620038 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpmtb\" (UniqueName: \"kubernetes.io/projected/a4ef22d5-ed98-4266-a018-343b07d0ce8f-kube-api-access-bpmtb\") pod \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " Feb 19 03:34:31.620187 master-0 kubenswrapper[33867]: I0219 03:34:31.620097 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-util\") pod \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\" (UID: \"a4ef22d5-ed98-4266-a018-343b07d0ce8f\") " Feb 19 03:34:31.622400 master-0 kubenswrapper[33867]: I0219 03:34:31.622359 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-bundle" (OuterVolumeSpecName: "bundle") pod "a4ef22d5-ed98-4266-a018-343b07d0ce8f" (UID: "a4ef22d5-ed98-4266-a018-343b07d0ce8f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:34:31.623541 master-0 kubenswrapper[33867]: I0219 03:34:31.623469 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ef22d5-ed98-4266-a018-343b07d0ce8f-kube-api-access-bpmtb" (OuterVolumeSpecName: "kube-api-access-bpmtb") pod "a4ef22d5-ed98-4266-a018-343b07d0ce8f" (UID: "a4ef22d5-ed98-4266-a018-343b07d0ce8f"). InnerVolumeSpecName "kube-api-access-bpmtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:34:31.635281 master-0 kubenswrapper[33867]: I0219 03:34:31.635118 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-util" (OuterVolumeSpecName: "util") pod "a4ef22d5-ed98-4266-a018-343b07d0ce8f" (UID: "a4ef22d5-ed98-4266-a018-343b07d0ce8f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:34:31.722850 master-0 kubenswrapper[33867]: I0219 03:34:31.722760 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpmtb\" (UniqueName: \"kubernetes.io/projected/a4ef22d5-ed98-4266-a018-343b07d0ce8f-kube-api-access-bpmtb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:31.723237 master-0 kubenswrapper[33867]: I0219 03:34:31.723101 33867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-util\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:31.723237 master-0 kubenswrapper[33867]: I0219 03:34:31.723118 33867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4ef22d5-ed98-4266-a018-343b07d0ce8f-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:34:32.182960 master-0 kubenswrapper[33867]: I0219 03:34:32.182890 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" event={"ID":"a4ef22d5-ed98-4266-a018-343b07d0ce8f","Type":"ContainerDied","Data":"b29a915b5edd5444965204c65319ffa88c1fe02be2b299bd699a872f823ba5bf"} Feb 19 03:34:32.182960 master-0 kubenswrapper[33867]: I0219 03:34:32.182943 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b29a915b5edd5444965204c65319ffa88c1fe02be2b299bd699a872f823ba5bf" Feb 19 03:34:32.183322 master-0 kubenswrapper[33867]: I0219 03:34:32.182990 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr" Feb 19 03:34:34.200025 master-0 kubenswrapper[33867]: I0219 03:34:34.199939 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9" event={"ID":"45007001-b442-40a7-9e66-7183cfa8a603","Type":"ContainerStarted","Data":"19172128dc112dd9b2fe95eff4effb3444b01cd11e383c50b3753aa484486c77"} Feb 19 03:34:34.230387 master-0 kubenswrapper[33867]: I0219 03:34:34.230277 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-49zg9" podStartSLOduration=2.307719809 podStartE2EDuration="6.230232898s" podCreationTimestamp="2026-02-19 03:34:28 +0000 UTC" firstStartedPulling="2026-02-19 03:34:29.21170012 +0000 UTC m=+674.508370731" lastFinishedPulling="2026-02-19 03:34:33.134213209 +0000 UTC m=+678.430883820" observedRunningTime="2026-02-19 03:34:34.224838225 +0000 UTC m=+679.521508836" watchObservedRunningTime="2026-02-19 03:34:34.230232898 +0000 UTC m=+679.526903509" Feb 19 03:34:38.020024 master-0 kubenswrapper[33867]: I0219 03:34:38.019931 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-mcjb2"] Feb 19 03:34:38.021005 master-0 kubenswrapper[33867]: E0219 03:34:38.020507 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ef22d5-ed98-4266-a018-343b07d0ce8f" containerName="extract" Feb 19 03:34:38.021005 master-0 kubenswrapper[33867]: I0219 03:34:38.020529 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ef22d5-ed98-4266-a018-343b07d0ce8f" containerName="extract" Feb 19 03:34:38.021005 master-0 kubenswrapper[33867]: E0219 03:34:38.020567 33867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a4ef22d5-ed98-4266-a018-343b07d0ce8f" containerName="pull" Feb 19 03:34:38.021005 master-0 kubenswrapper[33867]: I0219 03:34:38.020576 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ef22d5-ed98-4266-a018-343b07d0ce8f" containerName="pull" Feb 19 03:34:38.021005 master-0 kubenswrapper[33867]: E0219 03:34:38.020595 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ef22d5-ed98-4266-a018-343b07d0ce8f" containerName="util" Feb 19 03:34:38.021005 master-0 kubenswrapper[33867]: I0219 03:34:38.020606 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ef22d5-ed98-4266-a018-343b07d0ce8f" containerName="util" Feb 19 03:34:38.021005 master-0 kubenswrapper[33867]: I0219 03:34:38.020778 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ef22d5-ed98-4266-a018-343b07d0ce8f" containerName="extract" Feb 19 03:34:38.021919 master-0 kubenswrapper[33867]: I0219 03:34:38.021895 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" Feb 19 03:34:38.023964 master-0 kubenswrapper[33867]: I0219 03:34:38.023930 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 03:34:38.024336 master-0 kubenswrapper[33867]: I0219 03:34:38.024314 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 03:34:38.038756 master-0 kubenswrapper[33867]: I0219 03:34:38.038667 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-mcjb2"] Feb 19 03:34:38.049599 master-0 kubenswrapper[33867]: I0219 03:34:38.049507 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6xn7\" (UniqueName: \"kubernetes.io/projected/f5f24426-ab21-4736-9f97-71ec47becd17-kube-api-access-s6xn7\") pod \"cert-manager-webhook-6888856db4-mcjb2\" (UID: \"f5f24426-ab21-4736-9f97-71ec47becd17\") " pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" Feb 19 03:34:38.049912 master-0 kubenswrapper[33867]: I0219 03:34:38.049675 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5f24426-ab21-4736-9f97-71ec47becd17-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-mcjb2\" (UID: \"f5f24426-ab21-4736-9f97-71ec47becd17\") " pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" Feb 19 03:34:38.151541 master-0 kubenswrapper[33867]: I0219 03:34:38.151424 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6xn7\" (UniqueName: \"kubernetes.io/projected/f5f24426-ab21-4736-9f97-71ec47becd17-kube-api-access-s6xn7\") pod \"cert-manager-webhook-6888856db4-mcjb2\" (UID: \"f5f24426-ab21-4736-9f97-71ec47becd17\") " pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" Feb 19 03:34:38.151891 master-0 kubenswrapper[33867]: I0219 03:34:38.151583 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5f24426-ab21-4736-9f97-71ec47becd17-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-mcjb2\" (UID: \"f5f24426-ab21-4736-9f97-71ec47becd17\") " pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" Feb 19 03:34:38.172223 master-0 kubenswrapper[33867]: I0219 03:34:38.172144 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5f24426-ab21-4736-9f97-71ec47becd17-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-mcjb2\" (UID: \"f5f24426-ab21-4736-9f97-71ec47becd17\") " pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" Feb 19 03:34:38.172540 master-0 kubenswrapper[33867]: I0219 03:34:38.172280 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6xn7\" (UniqueName: \"kubernetes.io/projected/f5f24426-ab21-4736-9f97-71ec47becd17-kube-api-access-s6xn7\") pod \"cert-manager-webhook-6888856db4-mcjb2\" (UID: \"f5f24426-ab21-4736-9f97-71ec47becd17\") " pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" Feb 19 03:34:38.352400 master-0 kubenswrapper[33867]: I0219 03:34:38.352183 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" Feb 19 03:34:38.870387 master-0 kubenswrapper[33867]: I0219 03:34:38.870290 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-mcjb2"] Feb 19 03:34:38.874724 master-0 kubenswrapper[33867]: W0219 03:34:38.874636 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5f24426_ab21_4736_9f97_71ec47becd17.slice/crio-6d7e7e562132bf435291091d9f67f0270fe1d6a6a5778bd8235680b5a41f7951 WatchSource:0}: Error finding container 6d7e7e562132bf435291091d9f67f0270fe1d6a6a5778bd8235680b5a41f7951: Status 404 returned error can't find the container with id 6d7e7e562132bf435291091d9f67f0270fe1d6a6a5778bd8235680b5a41f7951 Feb 19 03:34:38.903727 master-0 kubenswrapper[33867]: I0219 03:34:38.903652 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-tsxfz"] Feb 19 03:34:38.905215 master-0 kubenswrapper[33867]: I0219 03:34:38.905128 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-tsxfz" Feb 19 03:34:38.915486 master-0 kubenswrapper[33867]: I0219 03:34:38.915424 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-tsxfz"] Feb 19 03:34:39.073902 master-0 kubenswrapper[33867]: I0219 03:34:39.073788 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50fab54f-3c0d-40ac-a0e3-c6a413e099de-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-tsxfz\" (UID: \"50fab54f-3c0d-40ac-a0e3-c6a413e099de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-tsxfz" Feb 19 03:34:39.074658 master-0 kubenswrapper[33867]: I0219 03:34:39.074184 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5hz2\" (UniqueName: \"kubernetes.io/projected/50fab54f-3c0d-40ac-a0e3-c6a413e099de-kube-api-access-h5hz2\") pod \"cert-manager-cainjector-5545bd876-tsxfz\" (UID: \"50fab54f-3c0d-40ac-a0e3-c6a413e099de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-tsxfz" Feb 19 03:34:39.177375 master-0 kubenswrapper[33867]: I0219 03:34:39.177303 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50fab54f-3c0d-40ac-a0e3-c6a413e099de-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-tsxfz\" (UID: \"50fab54f-3c0d-40ac-a0e3-c6a413e099de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-tsxfz" Feb 19 03:34:39.177751 master-0 kubenswrapper[33867]: I0219 03:34:39.177549 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5hz2\" (UniqueName: \"kubernetes.io/projected/50fab54f-3c0d-40ac-a0e3-c6a413e099de-kube-api-access-h5hz2\") pod \"cert-manager-cainjector-5545bd876-tsxfz\" (UID: \"50fab54f-3c0d-40ac-a0e3-c6a413e099de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-tsxfz" Feb 19 03:34:39.199612 master-0 kubenswrapper[33867]: I0219 03:34:39.199553 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5hz2\" (UniqueName: \"kubernetes.io/projected/50fab54f-3c0d-40ac-a0e3-c6a413e099de-kube-api-access-h5hz2\") pod \"cert-manager-cainjector-5545bd876-tsxfz\" (UID: \"50fab54f-3c0d-40ac-a0e3-c6a413e099de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-tsxfz" Feb 19 03:34:39.199945 master-0 kubenswrapper[33867]: I0219 03:34:39.199657 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50fab54f-3c0d-40ac-a0e3-c6a413e099de-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-tsxfz\" (UID: \"50fab54f-3c0d-40ac-a0e3-c6a413e099de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-tsxfz" Feb 19 03:34:39.232726 master-0 kubenswrapper[33867]: I0219 03:34:39.232648 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-tsxfz" Feb 19 03:34:39.251837 master-0 kubenswrapper[33867]: I0219 03:34:39.251767 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" event={"ID":"f5f24426-ab21-4736-9f97-71ec47becd17","Type":"ContainerStarted","Data":"6d7e7e562132bf435291091d9f67f0270fe1d6a6a5778bd8235680b5a41f7951"} Feb 19 03:34:39.724391 master-0 kubenswrapper[33867]: I0219 03:34:39.724026 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-tsxfz"] Feb 19 03:34:39.731701 master-0 kubenswrapper[33867]: W0219 03:34:39.731655 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50fab54f_3c0d_40ac_a0e3_c6a413e099de.slice/crio-ca8e6600459d9168f30121f739b401744dddfc13199d830ca51c099c2f629d74 WatchSource:0}: Error finding container ca8e6600459d9168f30121f739b401744dddfc13199d830ca51c099c2f629d74: Status 404 returned error can't find the container with id ca8e6600459d9168f30121f739b401744dddfc13199d830ca51c099c2f629d74 Feb 19 03:34:40.264163 master-0 kubenswrapper[33867]: I0219 03:34:40.264049 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-tsxfz" event={"ID":"50fab54f-3c0d-40ac-a0e3-c6a413e099de","Type":"ContainerStarted","Data":"ca8e6600459d9168f30121f739b401744dddfc13199d830ca51c099c2f629d74"} Feb 19 03:34:40.805116 master-0 kubenswrapper[33867]: I0219 03:34:40.805002 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-s4btw"] Feb 19 03:34:40.806366 master-0 kubenswrapper[33867]: I0219 03:34:40.806335 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-s4btw" Feb 19 03:34:40.813683 master-0 kubenswrapper[33867]: I0219 03:34:40.813619 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 03:34:40.813975 master-0 kubenswrapper[33867]: I0219 03:34:40.813802 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 03:34:40.818359 master-0 kubenswrapper[33867]: I0219 03:34:40.818232 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-s4btw"] Feb 19 03:34:40.916780 master-0 kubenswrapper[33867]: I0219 03:34:40.916707 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w46xf\" (UniqueName: \"kubernetes.io/projected/81d21677-453c-479c-a6c2-b7663fd32b72-kube-api-access-w46xf\") pod \"nmstate-operator-694c9596b7-s4btw\" (UID: \"81d21677-453c-479c-a6c2-b7663fd32b72\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-s4btw" Feb 19 03:34:41.020788 master-0 kubenswrapper[33867]: I0219 03:34:41.020720 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w46xf\" (UniqueName: \"kubernetes.io/projected/81d21677-453c-479c-a6c2-b7663fd32b72-kube-api-access-w46xf\") pod \"nmstate-operator-694c9596b7-s4btw\" (UID: \"81d21677-453c-479c-a6c2-b7663fd32b72\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-s4btw" Feb 19 03:34:41.048664 master-0 kubenswrapper[33867]: I0219 03:34:41.048189 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w46xf\" (UniqueName: \"kubernetes.io/projected/81d21677-453c-479c-a6c2-b7663fd32b72-kube-api-access-w46xf\") pod \"nmstate-operator-694c9596b7-s4btw\" (UID: \"81d21677-453c-479c-a6c2-b7663fd32b72\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-s4btw" Feb 19 03:34:41.191889 master-0 kubenswrapper[33867]: I0219 03:34:41.191814 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-s4btw" Feb 19 03:34:41.669277 master-0 kubenswrapper[33867]: I0219 03:34:41.669167 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-s4btw"] Feb 19 03:34:41.678322 master-0 kubenswrapper[33867]: W0219 03:34:41.678223 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81d21677_453c_479c_a6c2_b7663fd32b72.slice/crio-e88c2cc4e846c2fe422ab5edf2b15b2d766d2e764bf22bd16e021fdd1dd34bc6 WatchSource:0}: Error finding container e88c2cc4e846c2fe422ab5edf2b15b2d766d2e764bf22bd16e021fdd1dd34bc6: Status 404 returned error can't find the container with id e88c2cc4e846c2fe422ab5edf2b15b2d766d2e764bf22bd16e021fdd1dd34bc6 Feb 19 03:34:42.328937 master-0 kubenswrapper[33867]: I0219 03:34:42.328506 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-s4btw" event={"ID":"81d21677-453c-479c-a6c2-b7663fd32b72","Type":"ContainerStarted","Data":"e88c2cc4e846c2fe422ab5edf2b15b2d766d2e764bf22bd16e021fdd1dd34bc6"} Feb 19 03:34:47.043340 master-0 kubenswrapper[33867]: I0219 03:34:47.043195 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk"] Feb 19 03:34:47.045531 master-0 kubenswrapper[33867]: I0219 03:34:47.044666 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:47.052014 master-0 kubenswrapper[33867]: I0219 03:34:47.051919 33867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 03:34:47.056286 master-0 kubenswrapper[33867]: I0219 03:34:47.052438 33867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 03:34:47.056286 master-0 kubenswrapper[33867]: I0219 03:34:47.052700 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 03:34:47.056286 master-0 kubenswrapper[33867]: I0219 03:34:47.052871 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 03:34:47.118600 master-0 kubenswrapper[33867]: I0219 03:34:47.118413 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk"] Feb 19 03:34:47.230147 master-0 kubenswrapper[33867]: I0219 03:34:47.230042 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48dfa1c5-695c-45aa-aca5-f01672f08790-apiservice-cert\") pod \"metallb-operator-controller-manager-57d69997cd-bxnmk\" (UID: \"48dfa1c5-695c-45aa-aca5-f01672f08790\") " pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:47.230496 master-0 kubenswrapper[33867]: I0219 03:34:47.230336 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4dx5\" (UniqueName: \"kubernetes.io/projected/48dfa1c5-695c-45aa-aca5-f01672f08790-kube-api-access-w4dx5\") pod \"metallb-operator-controller-manager-57d69997cd-bxnmk\" (UID: \"48dfa1c5-695c-45aa-aca5-f01672f08790\") " 
pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:47.230496 master-0 kubenswrapper[33867]: I0219 03:34:47.230428 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48dfa1c5-695c-45aa-aca5-f01672f08790-webhook-cert\") pod \"metallb-operator-controller-manager-57d69997cd-bxnmk\" (UID: \"48dfa1c5-695c-45aa-aca5-f01672f08790\") " pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:47.332133 master-0 kubenswrapper[33867]: I0219 03:34:47.332054 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48dfa1c5-695c-45aa-aca5-f01672f08790-webhook-cert\") pod \"metallb-operator-controller-manager-57d69997cd-bxnmk\" (UID: \"48dfa1c5-695c-45aa-aca5-f01672f08790\") " pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:47.332400 master-0 kubenswrapper[33867]: I0219 03:34:47.332218 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48dfa1c5-695c-45aa-aca5-f01672f08790-apiservice-cert\") pod \"metallb-operator-controller-manager-57d69997cd-bxnmk\" (UID: \"48dfa1c5-695c-45aa-aca5-f01672f08790\") " pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:47.332400 master-0 kubenswrapper[33867]: I0219 03:34:47.332283 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4dx5\" (UniqueName: \"kubernetes.io/projected/48dfa1c5-695c-45aa-aca5-f01672f08790-kube-api-access-w4dx5\") pod \"metallb-operator-controller-manager-57d69997cd-bxnmk\" (UID: \"48dfa1c5-695c-45aa-aca5-f01672f08790\") " pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:47.356292 master-0 kubenswrapper[33867]: I0219 03:34:47.356195 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48dfa1c5-695c-45aa-aca5-f01672f08790-apiservice-cert\") pod \"metallb-operator-controller-manager-57d69997cd-bxnmk\" (UID: \"48dfa1c5-695c-45aa-aca5-f01672f08790\") " pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:47.364774 master-0 kubenswrapper[33867]: I0219 03:34:47.364132 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4dx5\" (UniqueName: \"kubernetes.io/projected/48dfa1c5-695c-45aa-aca5-f01672f08790-kube-api-access-w4dx5\") pod \"metallb-operator-controller-manager-57d69997cd-bxnmk\" (UID: \"48dfa1c5-695c-45aa-aca5-f01672f08790\") " pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:47.369386 master-0 kubenswrapper[33867]: I0219 03:34:47.366179 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48dfa1c5-695c-45aa-aca5-f01672f08790-webhook-cert\") pod \"metallb-operator-controller-manager-57d69997cd-bxnmk\" (UID: \"48dfa1c5-695c-45aa-aca5-f01672f08790\") " pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:47.422963 master-0 kubenswrapper[33867]: I0219 03:34:47.422869 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:47.434875 master-0 kubenswrapper[33867]: I0219 03:34:47.434792 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-tsxfz" event={"ID":"50fab54f-3c0d-40ac-a0e3-c6a413e099de","Type":"ContainerStarted","Data":"5925478fa38085d0ef2fe89ef0fe15571952af3a00f786727123973e37039d5b"} Feb 19 03:34:47.443462 master-0 kubenswrapper[33867]: I0219 03:34:47.443399 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-s4btw" event={"ID":"81d21677-453c-479c-a6c2-b7663fd32b72","Type":"ContainerStarted","Data":"094ce60a6a9439c9332532cf9c029f493f71c96e567d0a29cbb8478d27ffbcff"} Feb 19 03:34:47.507762 master-0 kubenswrapper[33867]: I0219 03:34:47.503594 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-s4btw" podStartSLOduration=2.319357524 podStartE2EDuration="7.503572004s" podCreationTimestamp="2026-02-19 03:34:40 +0000 UTC" firstStartedPulling="2026-02-19 03:34:41.687604418 +0000 UTC m=+686.984275029" lastFinishedPulling="2026-02-19 03:34:46.871818898 +0000 UTC m=+692.168489509" observedRunningTime="2026-02-19 03:34:47.500900719 +0000 UTC m=+692.797571330" watchObservedRunningTime="2026-02-19 03:34:47.503572004 +0000 UTC m=+692.800242615" Feb 19 03:34:47.526316 master-0 kubenswrapper[33867]: I0219 03:34:47.525894 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-tsxfz" podStartSLOduration=2.390377602 podStartE2EDuration="9.525861204s" podCreationTimestamp="2026-02-19 03:34:38 +0000 UTC" firstStartedPulling="2026-02-19 03:34:39.734974487 +0000 UTC m=+685.031645098" lastFinishedPulling="2026-02-19 03:34:46.870458089 +0000 UTC m=+692.167128700" observedRunningTime="2026-02-19 03:34:47.462918805 +0000 UTC m=+692.759589416" watchObservedRunningTime="2026-02-19 03:34:47.525861204 +0000 UTC m=+692.822531835" Feb 19 03:34:47.701283 master-0 kubenswrapper[33867]: I0219 03:34:47.701145 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc"] Feb 19 03:34:47.703235 master-0 kubenswrapper[33867]: I0219 03:34:47.703196 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:47.714359 master-0 kubenswrapper[33867]: I0219 03:34:47.707840 33867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 03:34:47.714359 master-0 kubenswrapper[33867]: I0219 03:34:47.708055 33867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 03:34:47.731987 master-0 kubenswrapper[33867]: I0219 03:34:47.731116 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc"] Feb 19 03:34:47.861283 master-0 kubenswrapper[33867]: I0219 03:34:47.857561 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9fd\" (UniqueName: \"kubernetes.io/projected/becd4fad-b917-478c-83bf-0b5d0a6770f3-kube-api-access-mw9fd\") pod \"metallb-operator-webhook-server-667b5d6768-wjdrc\" (UID: \"becd4fad-b917-478c-83bf-0b5d0a6770f3\") " pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:47.861283 master-0 kubenswrapper[33867]: I0219 03:34:47.857652 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/becd4fad-b917-478c-83bf-0b5d0a6770f3-webhook-cert\") pod \"metallb-operator-webhook-server-667b5d6768-wjdrc\" (UID: \"becd4fad-b917-478c-83bf-0b5d0a6770f3\") " pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:47.861283 master-0 kubenswrapper[33867]: I0219 03:34:47.857695 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/becd4fad-b917-478c-83bf-0b5d0a6770f3-apiservice-cert\") pod \"metallb-operator-webhook-server-667b5d6768-wjdrc\" (UID: \"becd4fad-b917-478c-83bf-0b5d0a6770f3\") " pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:47.960379 master-0 kubenswrapper[33867]: I0219 03:34:47.959735 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw9fd\" (UniqueName: \"kubernetes.io/projected/becd4fad-b917-478c-83bf-0b5d0a6770f3-kube-api-access-mw9fd\") pod \"metallb-operator-webhook-server-667b5d6768-wjdrc\" (UID: \"becd4fad-b917-478c-83bf-0b5d0a6770f3\") " pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:47.960379 master-0 kubenswrapper[33867]: I0219 03:34:47.959840 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/becd4fad-b917-478c-83bf-0b5d0a6770f3-webhook-cert\") pod \"metallb-operator-webhook-server-667b5d6768-wjdrc\" (UID: \"becd4fad-b917-478c-83bf-0b5d0a6770f3\") " pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:47.960379 master-0 kubenswrapper[33867]: I0219 03:34:47.959879 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/becd4fad-b917-478c-83bf-0b5d0a6770f3-apiservice-cert\") pod \"metallb-operator-webhook-server-667b5d6768-wjdrc\" (UID: \"becd4fad-b917-478c-83bf-0b5d0a6770f3\") " pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:47.990283 master-0 kubenswrapper[33867]: I0219 03:34:47.981336 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/becd4fad-b917-478c-83bf-0b5d0a6770f3-apiservice-cert\") pod \"metallb-operator-webhook-server-667b5d6768-wjdrc\" (UID: \"becd4fad-b917-478c-83bf-0b5d0a6770f3\") " pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:47.994068 master-0 kubenswrapper[33867]: I0219 03:34:47.991247 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/becd4fad-b917-478c-83bf-0b5d0a6770f3-webhook-cert\") pod \"metallb-operator-webhook-server-667b5d6768-wjdrc\" (UID: \"becd4fad-b917-478c-83bf-0b5d0a6770f3\") " pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:48.001350 master-0 kubenswrapper[33867]: I0219 03:34:47.998446 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw9fd\" (UniqueName: \"kubernetes.io/projected/becd4fad-b917-478c-83bf-0b5d0a6770f3-kube-api-access-mw9fd\") pod \"metallb-operator-webhook-server-667b5d6768-wjdrc\" (UID: \"becd4fad-b917-478c-83bf-0b5d0a6770f3\") " pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:48.048287 master-0 kubenswrapper[33867]: I0219 03:34:48.047535 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:48.098780 master-0 kubenswrapper[33867]: I0219 03:34:48.097832 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk"] Feb 19 03:34:48.115935 master-0 kubenswrapper[33867]: W0219 03:34:48.112031 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48dfa1c5_695c_45aa_aca5_f01672f08790.slice/crio-c2850b0255a4f6789a4cc6f76d43566d32ca5fd774b02773accd3a40ce601807 WatchSource:0}: Error finding container c2850b0255a4f6789a4cc6f76d43566d32ca5fd774b02773accd3a40ce601807: Status 404 returned error can't find the container with id c2850b0255a4f6789a4cc6f76d43566d32ca5fd774b02773accd3a40ce601807 Feb 19 03:34:48.464498 master-0 kubenswrapper[33867]: I0219 03:34:48.462009 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" event={"ID":"48dfa1c5-695c-45aa-aca5-f01672f08790","Type":"ContainerStarted","Data":"c2850b0255a4f6789a4cc6f76d43566d32ca5fd774b02773accd3a40ce601807"} Feb 19 03:34:48.464498 master-0 kubenswrapper[33867]: I0219 03:34:48.463720 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" event={"ID":"f5f24426-ab21-4736-9f97-71ec47becd17","Type":"ContainerStarted","Data":"843a4047ce0bbcb9ea36952ad80a23bd50f4482e92957744b2d76f2c9d6769cc"} Feb 19 03:34:48.501992 master-0 kubenswrapper[33867]: I0219 03:34:48.501768 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" podStartSLOduration=3.475799186 podStartE2EDuration="11.501733347s" podCreationTimestamp="2026-02-19 03:34:37 +0000 UTC" firstStartedPulling="2026-02-19 03:34:38.877335766 +0000 UTC m=+684.174006377" lastFinishedPulling="2026-02-19 03:34:46.903269927 +0000 UTC m=+692.199940538" observedRunningTime="2026-02-19 03:34:48.492095495 +0000 UTC m=+693.788766106" watchObservedRunningTime="2026-02-19 03:34:48.501733347 +0000 
UTC m=+693.798403948" Feb 19 03:34:48.586273 master-0 kubenswrapper[33867]: W0219 03:34:48.586181 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbecd4fad_b917_478c_83bf_0b5d0a6770f3.slice/crio-b413e148b6eb6e8bc8124992c9a8d873c42d22255ec426cf16e4dd162fee23b6 WatchSource:0}: Error finding container b413e148b6eb6e8bc8124992c9a8d873c42d22255ec426cf16e4dd162fee23b6: Status 404 returned error can't find the container with id b413e148b6eb6e8bc8124992c9a8d873c42d22255ec426cf16e4dd162fee23b6 Feb 19 03:34:48.602412 master-0 kubenswrapper[33867]: I0219 03:34:48.602033 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc"] Feb 19 03:34:49.477376 master-0 kubenswrapper[33867]: I0219 03:34:49.476276 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" event={"ID":"becd4fad-b917-478c-83bf-0b5d0a6770f3","Type":"ContainerStarted","Data":"b413e148b6eb6e8bc8124992c9a8d873c42d22255ec426cf16e4dd162fee23b6"} Feb 19 03:34:49.477376 master-0 kubenswrapper[33867]: I0219 03:34:49.476452 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" Feb 19 03:34:52.562245 master-0 kubenswrapper[33867]: I0219 03:34:52.562153 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" event={"ID":"48dfa1c5-695c-45aa-aca5-f01672f08790","Type":"ContainerStarted","Data":"7039cf6fa5b97bcc98db1b56235f48a037f0afa62fbcbe9a56e313b8db96abe0"} Feb 19 03:34:52.563147 master-0 kubenswrapper[33867]: I0219 03:34:52.562360 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:34:52.602601 master-0 kubenswrapper[33867]: I0219 03:34:52.602488 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" podStartSLOduration=2.9971595300000002 podStartE2EDuration="6.602441012s" podCreationTimestamp="2026-02-19 03:34:46 +0000 UTC" firstStartedPulling="2026-02-19 03:34:48.114449891 +0000 UTC m=+693.411120502" lastFinishedPulling="2026-02-19 03:34:51.719731373 +0000 UTC m=+697.016401984" observedRunningTime="2026-02-19 03:34:52.589028583 +0000 UTC m=+697.885699194" watchObservedRunningTime="2026-02-19 03:34:52.602441012 +0000 UTC m=+697.899111623" Feb 19 03:34:53.360820 master-0 kubenswrapper[33867]: I0219 03:34:53.360721 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-mcjb2" Feb 19 03:34:54.610291 master-0 kubenswrapper[33867]: I0219 03:34:54.604449 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" event={"ID":"becd4fad-b917-478c-83bf-0b5d0a6770f3","Type":"ContainerStarted","Data":"7756b2c3aa5ae22f65c81909415a925c6936475ee561eec1dcc7afff04ba6ff7"} Feb 19 03:34:54.610291 master-0 kubenswrapper[33867]: I0219 03:34:54.605954 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:34:54.684285 master-0 kubenswrapper[33867]: I0219 03:34:54.683613 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" podStartSLOduration=2.688552803 podStartE2EDuration="7.683582315s" podCreationTimestamp="2026-02-19 03:34:47 +0000 UTC" firstStartedPulling="2026-02-19 03:34:48.592006959 +0000 UTC m=+693.888677560" lastFinishedPulling="2026-02-19 03:34:53.587036461 +0000 UTC m=+698.883707072" observedRunningTime="2026-02-19 03:34:54.674751265 +0000 UTC m=+699.971421876" watchObservedRunningTime="2026-02-19 03:34:54.683582315 +0000 UTC m=+699.980252926" Feb 19 03:34:55.167340 master-0 kubenswrapper[33867]: I0219 03:34:55.166738 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz"] Feb 19 03:34:55.171278 master-0 kubenswrapper[33867]: I0219 03:34:55.168334 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz" Feb 19 03:34:55.182286 master-0 kubenswrapper[33867]: I0219 03:34:55.182194 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 03:34:55.182686 master-0 kubenswrapper[33867]: I0219 03:34:55.182373 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 03:34:55.232300 master-0 kubenswrapper[33867]: I0219 03:34:55.231394 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz"] Feb 19 03:34:55.257292 master-0 kubenswrapper[33867]: I0219 03:34:55.256553 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzh9t\" (UniqueName: \"kubernetes.io/projected/bd6f3f4b-1c8c-4b9a-bb70-fa26e2aa0bd9-kube-api-access-xzh9t\") pod \"obo-prometheus-operator-68bc856cb9-8lsbz\" (UID: \"bd6f3f4b-1c8c-4b9a-bb70-fa26e2aa0bd9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz" Feb 19 03:34:55.361317 master-0 kubenswrapper[33867]: I0219 03:34:55.360737 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzh9t\" (UniqueName: \"kubernetes.io/projected/bd6f3f4b-1c8c-4b9a-bb70-fa26e2aa0bd9-kube-api-access-xzh9t\") pod \"obo-prometheus-operator-68bc856cb9-8lsbz\" (UID: \"bd6f3f4b-1c8c-4b9a-bb70-fa26e2aa0bd9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz" Feb 19 03:34:55.413290 master-0 kubenswrapper[33867]: I0219 03:34:55.409538 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzh9t\" (UniqueName: \"kubernetes.io/projected/bd6f3f4b-1c8c-4b9a-bb70-fa26e2aa0bd9-kube-api-access-xzh9t\") pod \"obo-prometheus-operator-68bc856cb9-8lsbz\" (UID: \"bd6f3f4b-1c8c-4b9a-bb70-fa26e2aa0bd9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz" Feb 19 03:34:55.419158 master-0 kubenswrapper[33867]: I0219 03:34:55.416798 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq"] Feb 19 03:34:55.419158 master-0 kubenswrapper[33867]: I0219 03:34:55.418409 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq" Feb 19 03:34:55.427298 master-0 kubenswrapper[33867]: I0219 03:34:55.425681 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 03:34:55.432439 master-0 kubenswrapper[33867]: I0219 03:34:55.430538 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg"] Feb 19 03:34:55.432439 master-0 kubenswrapper[33867]: I0219 03:34:55.431894 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg" Feb 19 03:34:55.460291 master-0 kubenswrapper[33867]: I0219 03:34:55.459307 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq"] Feb 19 03:34:55.491292 master-0 kubenswrapper[33867]: I0219 03:34:55.490942 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg"] Feb 19 03:34:55.526302 master-0 kubenswrapper[33867]: I0219 03:34:55.509684 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz" Feb 19 03:34:55.567970 master-0 kubenswrapper[33867]: I0219 03:34:55.565568 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32f213a4-b7ac-460b-a749-f03007e0b532-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-brtsg\" (UID: \"32f213a4-b7ac-460b-a749-f03007e0b532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg" Feb 19 03:34:55.567970 master-0 kubenswrapper[33867]: I0219 03:34:55.565851 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1612cd6b-b986-4763-bda3-16c58fb2ce66-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-mf9mq\" (UID: \"1612cd6b-b986-4763-bda3-16c58fb2ce66\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq" Feb 19 03:34:55.567970 master-0 kubenswrapper[33867]: I0219 03:34:55.565894 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1612cd6b-b986-4763-bda3-16c58fb2ce66-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-mf9mq\" (UID: \"1612cd6b-b986-4763-bda3-16c58fb2ce66\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq" Feb 19 03:34:55.567970 master-0 kubenswrapper[33867]: I0219 03:34:55.565920 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32f213a4-b7ac-460b-a749-f03007e0b532-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-brtsg\" (UID: \"32f213a4-b7ac-460b-a749-f03007e0b532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg" Feb 19 03:34:55.668288 master-0 kubenswrapper[33867]: I0219 03:34:55.667812 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1612cd6b-b986-4763-bda3-16c58fb2ce66-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-mf9mq\" (UID: \"1612cd6b-b986-4763-bda3-16c58fb2ce66\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq" Feb 19 03:34:55.668288 master-0 kubenswrapper[33867]: I0219 03:34:55.667910 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1612cd6b-b986-4763-bda3-16c58fb2ce66-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-mf9mq\" (UID: \"1612cd6b-b986-4763-bda3-16c58fb2ce66\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq" Feb 19 03:34:55.668288 master-0 kubenswrapper[33867]: I0219 03:34:55.667941 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32f213a4-b7ac-460b-a749-f03007e0b532-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-brtsg\" (UID: \"32f213a4-b7ac-460b-a749-f03007e0b532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg" Feb 19 03:34:55.668288 master-0 kubenswrapper[33867]: I0219 03:34:55.667978 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32f213a4-b7ac-460b-a749-f03007e0b532-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-brtsg\" (UID: \"32f213a4-b7ac-460b-a749-f03007e0b532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg" Feb 19 03:34:55.684051 master-0 kubenswrapper[33867]: I0219 03:34:55.680975 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32f213a4-b7ac-460b-a749-f03007e0b532-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-brtsg\" (UID: \"32f213a4-b7ac-460b-a749-f03007e0b532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg" Feb 19 03:34:55.684051 master-0 kubenswrapper[33867]: I0219 03:34:55.681156 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32f213a4-b7ac-460b-a749-f03007e0b532-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-brtsg\" (UID: \"32f213a4-b7ac-460b-a749-f03007e0b532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg" Feb 19 03:34:55.698670 master-0 kubenswrapper[33867]: I0219 03:34:55.685078 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1612cd6b-b986-4763-bda3-16c58fb2ce66-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-mf9mq\" (UID: \"1612cd6b-b986-4763-bda3-16c58fb2ce66\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq" Feb 19 03:34:55.763290 master-0 kubenswrapper[33867]: I0219 03:34:55.762348 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1612cd6b-b986-4763-bda3-16c58fb2ce66-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559b85975-mf9mq\" (UID: \"1612cd6b-b986-4763-bda3-16c58fb2ce66\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq" Feb 19 03:34:55.853839 master-0 kubenswrapper[33867]: I0219 03:34:55.831211 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq" Feb 19 03:34:55.857286 master-0 kubenswrapper[33867]: I0219 03:34:55.855405 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pkxns"] Feb 19 03:34:55.869285 master-0 kubenswrapper[33867]: I0219 03:34:55.857990 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pkxns" Feb 19 03:34:55.869285 master-0 kubenswrapper[33867]: I0219 03:34:55.863784 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 03:34:55.895292 master-0 kubenswrapper[33867]: I0219 03:34:55.881003 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pkxns"] Feb 19 03:34:55.937279 master-0 kubenswrapper[33867]: I0219 03:34:55.936897 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg" Feb 19 03:34:55.985423 master-0 kubenswrapper[33867]: I0219 03:34:55.985329 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgk9f\" (UniqueName: \"kubernetes.io/projected/b9feaa19-6cd5-457b-ad70-36fe90ac8419-kube-api-access-jgk9f\") pod \"observability-operator-59bdc8b94-pkxns\" (UID: \"b9feaa19-6cd5-457b-ad70-36fe90ac8419\") " pod="openshift-operators/observability-operator-59bdc8b94-pkxns" Feb 19 03:34:55.985765 master-0 kubenswrapper[33867]: I0219 03:34:55.985493 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9feaa19-6cd5-457b-ad70-36fe90ac8419-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pkxns\" (UID: \"b9feaa19-6cd5-457b-ad70-36fe90ac8419\") " pod="openshift-operators/observability-operator-59bdc8b94-pkxns" Feb 19 03:34:56.021222 master-0 kubenswrapper[33867]: I0219 03:34:56.021117 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz"] Feb 19 03:34:56.089306 master-0 kubenswrapper[33867]: I0219 03:34:56.082415 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l6q7n"] Feb 19 03:34:56.089596 master-0 kubenswrapper[33867]: I0219 03:34:56.087033 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgk9f\" (UniqueName: \"kubernetes.io/projected/b9feaa19-6cd5-457b-ad70-36fe90ac8419-kube-api-access-jgk9f\") pod \"observability-operator-59bdc8b94-pkxns\" (UID: \"b9feaa19-6cd5-457b-ad70-36fe90ac8419\") " pod="openshift-operators/observability-operator-59bdc8b94-pkxns" Feb 19 03:34:56.090321 master-0 kubenswrapper[33867]: I0219 03:34:56.090186 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9feaa19-6cd5-457b-ad70-36fe90ac8419-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pkxns\" (UID: \"b9feaa19-6cd5-457b-ad70-36fe90ac8419\") " 
pod="openshift-operators/observability-operator-59bdc8b94-pkxns" Feb 19 03:34:56.091892 master-0 kubenswrapper[33867]: I0219 03:34:56.091862 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" Feb 19 03:34:56.102659 master-0 kubenswrapper[33867]: I0219 03:34:56.101871 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9feaa19-6cd5-457b-ad70-36fe90ac8419-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pkxns\" (UID: \"b9feaa19-6cd5-457b-ad70-36fe90ac8419\") " pod="openshift-operators/observability-operator-59bdc8b94-pkxns" Feb 19 03:34:56.129317 master-0 kubenswrapper[33867]: I0219 03:34:56.124397 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgk9f\" (UniqueName: \"kubernetes.io/projected/b9feaa19-6cd5-457b-ad70-36fe90ac8419-kube-api-access-jgk9f\") pod \"observability-operator-59bdc8b94-pkxns\" (UID: \"b9feaa19-6cd5-457b-ad70-36fe90ac8419\") " pod="openshift-operators/observability-operator-59bdc8b94-pkxns" Feb 19 03:34:56.146366 master-0 kubenswrapper[33867]: I0219 03:34:56.146281 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l6q7n"] Feb 19 03:34:56.195005 master-0 kubenswrapper[33867]: I0219 03:34:56.194813 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmh5\" (UniqueName: \"kubernetes.io/projected/bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3-kube-api-access-dlmh5\") pod \"perses-operator-5bf474d74f-l6q7n\" (UID: \"bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3\") " pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" Feb 19 03:34:56.195005 master-0 kubenswrapper[33867]: I0219 03:34:56.194885 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l6q7n\" (UID: \"bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3\") " pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" Feb 19 03:34:56.240311 master-0 kubenswrapper[33867]: I0219 03:34:56.232399 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pkxns" Feb 19 03:34:56.312295 master-0 kubenswrapper[33867]: I0219 03:34:56.310314 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlmh5\" (UniqueName: \"kubernetes.io/projected/bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3-kube-api-access-dlmh5\") pod \"perses-operator-5bf474d74f-l6q7n\" (UID: \"bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3\") " pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" Feb 19 03:34:56.312295 master-0 kubenswrapper[33867]: I0219 03:34:56.310394 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l6q7n\" (UID: \"bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3\") " pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" Feb 19 03:34:56.312295 master-0 kubenswrapper[33867]: I0219 03:34:56.311649 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l6q7n\" (UID: \"bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3\") " pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" Feb 19 03:34:56.337290 master-0 kubenswrapper[33867]: I0219 03:34:56.337215 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlmh5\" (UniqueName: \"kubernetes.io/projected/bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3-kube-api-access-dlmh5\") pod \"perses-operator-5bf474d74f-l6q7n\" (UID: \"bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3\") " pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" Feb 19 03:34:56.472297 master-0 kubenswrapper[33867]: I0219 03:34:56.472090 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" Feb 19 03:34:56.535282 master-0 kubenswrapper[33867]: I0219 03:34:56.535174 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq"] Feb 19 03:34:56.549582 master-0 kubenswrapper[33867]: W0219 03:34:56.549507 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1612cd6b_b986_4763_bda3_16c58fb2ce66.slice/crio-48977efe84f3aac1903fe03f4539dd441b1a229aa7b0681e2b48a2cfc1e74894 WatchSource:0}: Error finding container 48977efe84f3aac1903fe03f4539dd441b1a229aa7b0681e2b48a2cfc1e74894: Status 404 returned error can't find the container with id 48977efe84f3aac1903fe03f4539dd441b1a229aa7b0681e2b48a2cfc1e74894 Feb 19 03:34:56.647893 master-0 kubenswrapper[33867]: I0219 03:34:56.644194 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-zsfln"] Feb 19 03:34:56.647893 master-0 kubenswrapper[33867]: I0219 03:34:56.645851 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-zsfln" Feb 19 03:34:56.658071 master-0 kubenswrapper[33867]: I0219 03:34:56.657077 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-zsfln"] Feb 19 03:34:56.669526 master-0 kubenswrapper[33867]: I0219 03:34:56.667209 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg"] Feb 19 03:34:56.710977 master-0 kubenswrapper[33867]: I0219 03:34:56.710878 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq" event={"ID":"1612cd6b-b986-4763-bda3-16c58fb2ce66","Type":"ContainerStarted","Data":"48977efe84f3aac1903fe03f4539dd441b1a229aa7b0681e2b48a2cfc1e74894"} Feb 19 03:34:56.713622 master-0 kubenswrapper[33867]: I0219 03:34:56.713201 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz" event={"ID":"bd6f3f4b-1c8c-4b9a-bb70-fa26e2aa0bd9","Type":"ContainerStarted","Data":"aa6ab9b43a303d48a8ec9fa4372db605621645d830a90f51fcd6b2e829b5d276"} Feb 19 03:34:56.818378 master-0 kubenswrapper[33867]: I0219 03:34:56.818196 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pkxns"] Feb 19 03:34:56.825054 master-0 kubenswrapper[33867]: W0219 03:34:56.824911 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9feaa19_6cd5_457b_ad70_36fe90ac8419.slice/crio-ca1e5ef63f3b851ea875723c466e8111d54487f65e1835043946709da5505b37 WatchSource:0}: Error finding container ca1e5ef63f3b851ea875723c466e8111d54487f65e1835043946709da5505b37: Status 404 returned error can't find the container with id ca1e5ef63f3b851ea875723c466e8111d54487f65e1835043946709da5505b37 Feb 19 03:34:56.830703 master-0 kubenswrapper[33867]: I0219 03:34:56.830654 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37c694d5-497d-4aca-8e88-9ee5c9a7bcce-bound-sa-token\") pod \"cert-manager-545d4d4674-zsfln\" (UID: \"37c694d5-497d-4aca-8e88-9ee5c9a7bcce\") " pod="cert-manager/cert-manager-545d4d4674-zsfln" Feb 19 03:34:56.831545 master-0 kubenswrapper[33867]: I0219 03:34:56.831529 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb4kw\" (UniqueName: \"kubernetes.io/projected/37c694d5-497d-4aca-8e88-9ee5c9a7bcce-kube-api-access-xb4kw\") pod \"cert-manager-545d4d4674-zsfln\" (UID: \"37c694d5-497d-4aca-8e88-9ee5c9a7bcce\") " pod="cert-manager/cert-manager-545d4d4674-zsfln" Feb 19 03:34:56.968292 master-0 kubenswrapper[33867]: I0219 03:34:56.963494 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37c694d5-497d-4aca-8e88-9ee5c9a7bcce-bound-sa-token\") pod \"cert-manager-545d4d4674-zsfln\" (UID: \"37c694d5-497d-4aca-8e88-9ee5c9a7bcce\") " pod="cert-manager/cert-manager-545d4d4674-zsfln" Feb 19 03:34:56.968292 master-0 kubenswrapper[33867]: I0219 03:34:56.963558 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb4kw\" (UniqueName: \"kubernetes.io/projected/37c694d5-497d-4aca-8e88-9ee5c9a7bcce-kube-api-access-xb4kw\") pod \"cert-manager-545d4d4674-zsfln\" (UID: 
\"37c694d5-497d-4aca-8e88-9ee5c9a7bcce\") " pod="cert-manager/cert-manager-545d4d4674-zsfln" Feb 19 03:34:57.007509 master-0 kubenswrapper[33867]: I0219 03:34:57.001336 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb4kw\" (UniqueName: \"kubernetes.io/projected/37c694d5-497d-4aca-8e88-9ee5c9a7bcce-kube-api-access-xb4kw\") pod \"cert-manager-545d4d4674-zsfln\" (UID: \"37c694d5-497d-4aca-8e88-9ee5c9a7bcce\") " pod="cert-manager/cert-manager-545d4d4674-zsfln" Feb 19 03:34:57.023280 master-0 kubenswrapper[33867]: I0219 03:34:57.023194 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37c694d5-497d-4aca-8e88-9ee5c9a7bcce-bound-sa-token\") pod \"cert-manager-545d4d4674-zsfln\" (UID: \"37c694d5-497d-4aca-8e88-9ee5c9a7bcce\") " pod="cert-manager/cert-manager-545d4d4674-zsfln" Feb 19 03:34:57.027284 master-0 kubenswrapper[33867]: I0219 03:34:57.024115 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-zsfln" Feb 19 03:34:57.078025 master-0 kubenswrapper[33867]: I0219 03:34:57.077761 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l6q7n"] Feb 19 03:34:57.091079 master-0 kubenswrapper[33867]: W0219 03:34:57.090989 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd1e7bd3_1330_47ed_9c89_ad6ff6ffbeb3.slice/crio-03178ca248a74362f16a15a99e6b9e32b26aff9829278b3eeaaf61f24af1b18f WatchSource:0}: Error finding container 03178ca248a74362f16a15a99e6b9e32b26aff9829278b3eeaaf61f24af1b18f: Status 404 returned error can't find the container with id 03178ca248a74362f16a15a99e6b9e32b26aff9829278b3eeaaf61f24af1b18f Feb 19 03:34:57.603320 master-0 kubenswrapper[33867]: I0219 03:34:57.602033 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-zsfln"] Feb 19 03:34:57.735307 master-0 kubenswrapper[33867]: I0219 03:34:57.732150 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pkxns" event={"ID":"b9feaa19-6cd5-457b-ad70-36fe90ac8419","Type":"ContainerStarted","Data":"ca1e5ef63f3b851ea875723c466e8111d54487f65e1835043946709da5505b37"} Feb 19 03:34:57.757690 master-0 kubenswrapper[33867]: I0219 03:34:57.756749 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" event={"ID":"bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3","Type":"ContainerStarted","Data":"03178ca248a74362f16a15a99e6b9e32b26aff9829278b3eeaaf61f24af1b18f"} Feb 19 03:34:57.771305 master-0 kubenswrapper[33867]: I0219 03:34:57.766661 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg" event={"ID":"32f213a4-b7ac-460b-a749-f03007e0b532","Type":"ContainerStarted","Data":"e1388543c1bc1e6a085e1fb4e60225a6d95f392ae3fdcfff821413a9d8c4f4b4"} Feb 19 03:34:57.809445 master-0 kubenswrapper[33867]: I0219 03:34:57.806701 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-zsfln" event={"ID":"37c694d5-497d-4aca-8e88-9ee5c9a7bcce","Type":"ContainerStarted","Data":"3c72470f2007999963577edc81b1505286ee381ab4eb6458e6f09de55beb7e87"} Feb 19 03:34:58.931321 master-0 kubenswrapper[33867]: I0219 03:34:58.931203 33867 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="cert-manager/cert-manager-545d4d4674-zsfln" event={"ID":"37c694d5-497d-4aca-8e88-9ee5c9a7bcce","Type":"ContainerStarted","Data":"75a934ba7eda1b0fba6e3d64331faaa4fc2ef4cf10a2b24f1b566088adc64695"} Feb 19 03:34:59.026586 master-0 kubenswrapper[33867]: I0219 03:34:59.019634 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-zsfln" podStartSLOduration=3.019601551 podStartE2EDuration="3.019601551s" podCreationTimestamp="2026-02-19 03:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:34:58.984858189 +0000 UTC m=+704.281528800" watchObservedRunningTime="2026-02-19 03:34:59.019601551 +0000 UTC m=+704.316272162" Feb 19 03:35:08.056213 master-0 kubenswrapper[33867]: I0219 03:35:08.056128 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc" Feb 19 03:35:09.091439 master-0 kubenswrapper[33867]: I0219 03:35:09.091347 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" event={"ID":"bd1e7bd3-1330-47ed-9c89-ad6ff6ffbeb3","Type":"ContainerStarted","Data":"26f8698bdf539e762145e5c800e1b2f6726ecd52a16b9aa068a229e5f1f602e0"} Feb 19 03:35:09.092325 master-0 kubenswrapper[33867]: I0219 03:35:09.091560 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" Feb 19 03:35:09.094419 master-0 kubenswrapper[33867]: I0219 03:35:09.094376 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg" event={"ID":"32f213a4-b7ac-460b-a749-f03007e0b532","Type":"ContainerStarted","Data":"c060a399625618a559d82832a05f7d1c5afa3ad42f18edb94b7579df3c031801"} Feb 19 03:35:09.096292 master-0 kubenswrapper[33867]: I0219 03:35:09.096213 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq" event={"ID":"1612cd6b-b986-4763-bda3-16c58fb2ce66","Type":"ContainerStarted","Data":"9caf2bbad7b3cc9b6ca35764c305d3ce617683a4098c50eb166cc449417cf687"} Feb 19 03:35:09.099545 master-0 kubenswrapper[33867]: I0219 03:35:09.099488 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz" event={"ID":"bd6f3f4b-1c8c-4b9a-bb70-fa26e2aa0bd9","Type":"ContainerStarted","Data":"61efde3f3d3d91b80e40af54e2c0da7c553ab717f043781ec569ea662b9d00ca"} Feb 19 03:35:09.110453 master-0 kubenswrapper[33867]: I0219 03:35:09.110388 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pkxns" event={"ID":"b9feaa19-6cd5-457b-ad70-36fe90ac8419","Type":"ContainerStarted","Data":"04643de64818677ec45e617ce824dfbc27cf1686a84ccab4bc2ced7827fecd8f"} Feb 19 03:35:09.111645 master-0 kubenswrapper[33867]: I0219 03:35:09.111597 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-pkxns" Feb 19 03:35:09.116346 master-0 kubenswrapper[33867]: I0219 03:35:09.116288 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-pkxns" Feb 19 03:35:09.139153 master-0 kubenswrapper[33867]: I0219 03:35:09.130403 33867 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" podStartSLOduration=2.442712106 podStartE2EDuration="13.130372029s" podCreationTimestamp="2026-02-19 03:34:56 +0000 UTC" firstStartedPulling="2026-02-19 03:34:57.098016488 +0000 UTC m=+702.394687099" lastFinishedPulling="2026-02-19 03:35:07.785676411 +0000 UTC m=+713.082347022" observedRunningTime="2026-02-19 03:35:09.121826097 +0000 UTC m=+714.418496708" watchObservedRunningTime="2026-02-19 03:35:09.130372029 +0000 UTC m=+714.427042640" Feb 19 03:35:09.147358 master-0 kubenswrapper[33867]: I0219 03:35:09.146231 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg" podStartSLOduration=3.06708328 podStartE2EDuration="14.146198396s" podCreationTimestamp="2026-02-19 03:34:55 +0000 UTC" firstStartedPulling="2026-02-19 03:34:56.685419527 +0000 UTC m=+701.982090138" lastFinishedPulling="2026-02-19 03:35:07.764534643 +0000 UTC m=+713.061205254" observedRunningTime="2026-02-19 03:35:09.139484586 +0000 UTC m=+714.436155217" watchObservedRunningTime="2026-02-19 03:35:09.146198396 +0000 UTC m=+714.442869007" Feb 19 03:35:09.187933 master-0 kubenswrapper[33867]: I0219 03:35:09.186576 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz" podStartSLOduration=2.483419692 podStartE2EDuration="14.186538126s" podCreationTimestamp="2026-02-19 03:34:55 +0000 UTC" firstStartedPulling="2026-02-19 03:34:56.084606615 +0000 UTC m=+701.381277236" lastFinishedPulling="2026-02-19 03:35:07.787725059 +0000 UTC m=+713.084395670" observedRunningTime="2026-02-19 03:35:09.175452673 +0000 UTC m=+714.472123294" watchObservedRunningTime="2026-02-19 03:35:09.186538126 +0000 UTC m=+714.483208737" Feb 19 03:35:09.243361 master-0 kubenswrapper[33867]: I0219 03:35:09.240603 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-pkxns" podStartSLOduration=3.249913726 podStartE2EDuration="14.240570454s" podCreationTimestamp="2026-02-19 03:34:55 +0000 UTC" firstStartedPulling="2026-02-19 03:34:56.82992569 +0000 UTC m=+702.126596301" lastFinishedPulling="2026-02-19 03:35:07.820582418 +0000 UTC m=+713.117253029" observedRunningTime="2026-02-19 03:35:09.216779191 +0000 UTC m=+714.513449822" watchObservedRunningTime="2026-02-19 03:35:09.240570454 +0000 UTC m=+714.537241065" Feb 19 03:35:09.289121 master-0 kubenswrapper[33867]: I0219 03:35:09.288841 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq" podStartSLOduration=3.084562323 podStartE2EDuration="14.288814447s" podCreationTimestamp="2026-02-19 03:34:55 +0000 UTC" firstStartedPulling="2026-02-19 03:34:56.553638062 +0000 UTC m=+701.850308663" lastFinishedPulling="2026-02-19 03:35:07.757890176 +0000 UTC m=+713.054560787" observedRunningTime="2026-02-19 03:35:09.283049934 +0000 UTC m=+714.579720555" watchObservedRunningTime="2026-02-19 03:35:09.288814447 +0000 UTC m=+714.585485058" Feb 19 03:35:16.481295 master-0 kubenswrapper[33867]: I0219 03:35:16.480588 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-l6q7n" Feb 19 03:35:16.971198 master-0 kubenswrapper[33867]: I0219 03:35:16.971139 33867 scope.go:117] "RemoveContainer" 
containerID="2bfdd08c2f9d5dd55aca73518d58b45204430b97a64cd8f23d4d0084858c4cc5" Feb 19 03:35:27.427817 master-0 kubenswrapper[33867]: I0219 03:35:27.427731 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk" Feb 19 03:35:43.028584 master-0 kubenswrapper[33867]: I0219 03:35:43.025902 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6"] Feb 19 03:35:43.028584 master-0 kubenswrapper[33867]: I0219 03:35:43.027732 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" Feb 19 03:35:43.031508 master-0 kubenswrapper[33867]: I0219 03:35:43.031407 33867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 03:35:43.036517 master-0 kubenswrapper[33867]: I0219 03:35:43.036353 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-8rx68"] Feb 19 03:35:43.041744 master-0 kubenswrapper[33867]: I0219 03:35:43.041053 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.047313 master-0 kubenswrapper[33867]: I0219 03:35:43.044766 33867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 03:35:43.047313 master-0 kubenswrapper[33867]: I0219 03:35:43.046425 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 03:35:43.047437 master-0 kubenswrapper[33867]: I0219 03:35:43.047390 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6"] Feb 19 03:35:43.153293 master-0 kubenswrapper[33867]: I0219 03:35:43.141785 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22564019-4f1e-40cb-a6d2-b6ac86a13ca1-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-n7lx6\" (UID: \"22564019-4f1e-40cb-a6d2-b6ac86a13ca1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" Feb 19 03:35:43.153293 master-0 kubenswrapper[33867]: I0219 03:35:43.141846 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgr8\" (UniqueName: \"kubernetes.io/projected/22564019-4f1e-40cb-a6d2-b6ac86a13ca1-kube-api-access-dwgr8\") pod \"frr-k8s-webhook-server-78b44bf5bb-n7lx6\" (UID: \"22564019-4f1e-40cb-a6d2-b6ac86a13ca1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" Feb 19 03:35:43.153293 master-0 kubenswrapper[33867]: I0219 03:35:43.141894 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-metrics\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.153293 master-0 kubenswrapper[33867]: I0219 03:35:43.141913 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnfxd\" (UniqueName: \"kubernetes.io/projected/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-kube-api-access-fnfxd\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.153293 master-0 kubenswrapper[33867]: I0219 
03:35:43.141950 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-reloader\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.153293 master-0 kubenswrapper[33867]: I0219 03:35:43.141978 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-frr-startup\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.153293 master-0 kubenswrapper[33867]: I0219 03:35:43.142010 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-frr-conf\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.153293 master-0 kubenswrapper[33867]: I0219 03:35:43.142054 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-frr-sockets\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.153293 master-0 kubenswrapper[33867]: I0219 03:35:43.142092 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-metrics-certs\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.153293 master-0 kubenswrapper[33867]: I0219 03:35:43.146229 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-psdfl"] Feb 19 03:35:43.163209 master-0 kubenswrapper[33867]: I0219 03:35:43.159878 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.163209 master-0 kubenswrapper[33867]: I0219 03:35:43.162901 33867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 03:35:43.163209 master-0 kubenswrapper[33867]: I0219 03:35:43.163134 33867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 03:35:43.163810 master-0 kubenswrapper[33867]: I0219 03:35:43.163747 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 03:35:43.191498 master-0 kubenswrapper[33867]: I0219 03:35:43.191442 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-mn6gp"] Feb 19 03:35:43.194453 master-0 kubenswrapper[33867]: I0219 03:35:43.194425 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:43.206296 master-0 kubenswrapper[33867]: I0219 03:35:43.203446 33867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 03:35:43.246540 master-0 kubenswrapper[33867]: I0219 03:35:43.246440 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22564019-4f1e-40cb-a6d2-b6ac86a13ca1-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-n7lx6\" (UID: \"22564019-4f1e-40cb-a6d2-b6ac86a13ca1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" Feb 19 03:35:43.246540 master-0 kubenswrapper[33867]: I0219 03:35:43.246525 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwgr8\" (UniqueName: \"kubernetes.io/projected/22564019-4f1e-40cb-a6d2-b6ac86a13ca1-kube-api-access-dwgr8\") pod \"frr-k8s-webhook-server-78b44bf5bb-n7lx6\" (UID: \"22564019-4f1e-40cb-a6d2-b6ac86a13ca1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" Feb 19 03:35:43.246977 master-0 kubenswrapper[33867]: I0219 03:35:43.246578 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.246977 master-0 kubenswrapper[33867]: I0219 03:35:43.246616 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-metrics\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.246977 master-0 kubenswrapper[33867]: I0219 03:35:43.246646 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnfxd\" (UniqueName: \"kubernetes.io/projected/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-kube-api-access-fnfxd\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.246977 master-0 kubenswrapper[33867]: E0219 03:35:43.246929 33867 secret.go:189] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 19 03:35:43.247163 master-0 kubenswrapper[33867]: E0219 03:35:43.247001 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22564019-4f1e-40cb-a6d2-b6ac86a13ca1-cert podName:22564019-4f1e-40cb-a6d2-b6ac86a13ca1 nodeName:}" failed. No retries permitted until 2026-02-19 03:35:43.746974826 +0000 UTC m=+749.043645447 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/22564019-4f1e-40cb-a6d2-b6ac86a13ca1-cert") pod "frr-k8s-webhook-server-78b44bf5bb-n7lx6" (UID: "22564019-4f1e-40cb-a6d2-b6ac86a13ca1") : secret "frr-k8s-webhook-server-cert" not found Feb 19 03:35:43.247163 master-0 kubenswrapper[33867]: I0219 03:35:43.247107 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-reloader\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.250106 master-0 kubenswrapper[33867]: I0219 03:35:43.250055 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5d5k\" (UniqueName: \"kubernetes.io/projected/ce9b802d-6caa-4b6e-9d4d-72b056257685-kube-api-access-h5d5k\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.250362 master-0 kubenswrapper[33867]: I0219 03:35:43.250335 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-frr-startup\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.250584 master-0 kubenswrapper[33867]: I0219 03:35:43.250541 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-reloader\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.250650 master-0 kubenswrapper[33867]: I0219 03:35:43.250560 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-frr-conf\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.250826 master-0 kubenswrapper[33867]: I0219 03:35:43.250786 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-metrics\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.250905 master-0 kubenswrapper[33867]: I0219 03:35:43.250871 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce9b802d-6caa-4b6e-9d4d-72b056257685-metallb-excludel2\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.251012 master-0 kubenswrapper[33867]: I0219 03:35:43.250987 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-frr-sockets\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.251064 master-0 kubenswrapper[33867]: I0219 03:35:43.251029 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c002fdf0-badd-4f0d-b300-460fb9a65d89-cert\") pod 
\"controller-69bbfbf88f-mn6gp\" (UID: \"c002fdf0-badd-4f0d-b300-460fb9a65d89\") " pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:43.251165 master-0 kubenswrapper[33867]: I0219 03:35:43.251143 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-metrics-certs\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.251251 master-0 kubenswrapper[33867]: I0219 03:35:43.251188 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c002fdf0-badd-4f0d-b300-460fb9a65d89-metrics-certs\") pod \"controller-69bbfbf88f-mn6gp\" (UID: \"c002fdf0-badd-4f0d-b300-460fb9a65d89\") " pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:43.251346 master-0 kubenswrapper[33867]: E0219 03:35:43.251312 33867 secret.go:189] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 19 03:35:43.251439 master-0 kubenswrapper[33867]: E0219 03:35:43.251411 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-metrics-certs podName:2877ad48-bf75-4a75-b6ca-8f48f0ede5df nodeName:}" failed. No retries permitted until 2026-02-19 03:35:43.751381091 +0000 UTC m=+749.048051702 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-metrics-certs") pod "frr-k8s-8rx68" (UID: "2877ad48-bf75-4a75-b6ca-8f48f0ede5df") : secret "frr-k8s-certs-secret" not found Feb 19 03:35:43.251518 master-0 kubenswrapper[33867]: I0219 03:35:43.251458 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-metrics-certs\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.251518 master-0 kubenswrapper[33867]: I0219 03:35:43.251503 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpwft\" (UniqueName: \"kubernetes.io/projected/c002fdf0-badd-4f0d-b300-460fb9a65d89-kube-api-access-qpwft\") pod \"controller-69bbfbf88f-mn6gp\" (UID: \"c002fdf0-badd-4f0d-b300-460fb9a65d89\") " pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:43.251810 master-0 kubenswrapper[33867]: I0219 03:35:43.251792 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-frr-conf\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.252034 master-0 kubenswrapper[33867]: I0219 03:35:43.251977 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-frr-sockets\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.252292 master-0 kubenswrapper[33867]: I0219 03:35:43.252175 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-frr-startup\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.263598 master-0 kubenswrapper[33867]: I0219 03:35:43.262486 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-mn6gp"] Feb 19 03:35:43.290566 master-0 kubenswrapper[33867]: I0219 03:35:43.287406 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwgr8\" (UniqueName: \"kubernetes.io/projected/22564019-4f1e-40cb-a6d2-b6ac86a13ca1-kube-api-access-dwgr8\") pod \"frr-k8s-webhook-server-78b44bf5bb-n7lx6\" (UID: \"22564019-4f1e-40cb-a6d2-b6ac86a13ca1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" Feb 19 03:35:43.309282 master-0 kubenswrapper[33867]: I0219 03:35:43.305358 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnfxd\" (UniqueName: \"kubernetes.io/projected/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-kube-api-access-fnfxd\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.354307 master-0 kubenswrapper[33867]: I0219 03:35:43.354176 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-metrics-certs\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.354307 master-0 kubenswrapper[33867]: I0219 03:35:43.354296 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpwft\" (UniqueName: \"kubernetes.io/projected/c002fdf0-badd-4f0d-b300-460fb9a65d89-kube-api-access-qpwft\") pod \"controller-69bbfbf88f-mn6gp\" (UID: \"c002fdf0-badd-4f0d-b300-460fb9a65d89\") " pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:43.354599 master-0 kubenswrapper[33867]: I0219 03:35:43.354412 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.354599 master-0 kubenswrapper[33867]: I0219 03:35:43.354467 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5d5k\" (UniqueName: \"kubernetes.io/projected/ce9b802d-6caa-4b6e-9d4d-72b056257685-kube-api-access-h5d5k\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.354683 master-0 kubenswrapper[33867]: I0219 03:35:43.354664 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce9b802d-6caa-4b6e-9d4d-72b056257685-metallb-excludel2\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.354740 master-0 kubenswrapper[33867]: I0219 03:35:43.354716 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c002fdf0-badd-4f0d-b300-460fb9a65d89-cert\") pod \"controller-69bbfbf88f-mn6gp\" (UID: \"c002fdf0-badd-4f0d-b300-460fb9a65d89\") " pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:43.354852 master-0 
kubenswrapper[33867]: I0219 03:35:43.354819 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c002fdf0-badd-4f0d-b300-460fb9a65d89-metrics-certs\") pod \"controller-69bbfbf88f-mn6gp\" (UID: \"c002fdf0-badd-4f0d-b300-460fb9a65d89\") " pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:43.355825 master-0 kubenswrapper[33867]: E0219 03:35:43.355784 33867 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 03:35:43.355906 master-0 kubenswrapper[33867]: E0219 03:35:43.355873 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist podName:ce9b802d-6caa-4b6e-9d4d-72b056257685 nodeName:}" failed. No retries permitted until 2026-02-19 03:35:43.855846252 +0000 UTC m=+749.152517063 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist") pod "speaker-psdfl" (UID: "ce9b802d-6caa-4b6e-9d4d-72b056257685") : secret "metallb-memberlist" not found Feb 19 03:35:43.356201 master-0 kubenswrapper[33867]: E0219 03:35:43.356149 33867 secret.go:189] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 19 03:35:43.356319 master-0 kubenswrapper[33867]: E0219 03:35:43.356291 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c002fdf0-badd-4f0d-b300-460fb9a65d89-metrics-certs podName:c002fdf0-badd-4f0d-b300-460fb9a65d89 nodeName:}" failed. No retries permitted until 2026-02-19 03:35:43.856241654 +0000 UTC m=+749.152912455 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c002fdf0-badd-4f0d-b300-460fb9a65d89-metrics-certs") pod "controller-69bbfbf88f-mn6gp" (UID: "c002fdf0-badd-4f0d-b300-460fb9a65d89") : secret "controller-certs-secret" not found Feb 19 03:35:43.356973 master-0 kubenswrapper[33867]: I0219 03:35:43.356937 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce9b802d-6caa-4b6e-9d4d-72b056257685-metallb-excludel2\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.358715 master-0 kubenswrapper[33867]: I0219 03:35:43.358673 33867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 03:35:43.358853 master-0 kubenswrapper[33867]: I0219 03:35:43.358827 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-metrics-certs\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.374524 master-0 kubenswrapper[33867]: I0219 03:35:43.374443 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c002fdf0-badd-4f0d-b300-460fb9a65d89-cert\") pod \"controller-69bbfbf88f-mn6gp\" (UID: \"c002fdf0-badd-4f0d-b300-460fb9a65d89\") " pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:43.379266 master-0 kubenswrapper[33867]: I0219 03:35:43.379196 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpwft\" (UniqueName: \"kubernetes.io/projected/c002fdf0-badd-4f0d-b300-460fb9a65d89-kube-api-access-qpwft\") pod \"controller-69bbfbf88f-mn6gp\" (UID: \"c002fdf0-badd-4f0d-b300-460fb9a65d89\") " pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:43.380455 master-0 kubenswrapper[33867]: I0219 03:35:43.380388 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5d5k\" (UniqueName: \"kubernetes.io/projected/ce9b802d-6caa-4b6e-9d4d-72b056257685-kube-api-access-h5d5k\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.764702 master-0 kubenswrapper[33867]: I0219 03:35:43.764615 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22564019-4f1e-40cb-a6d2-b6ac86a13ca1-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-n7lx6\" (UID: \"22564019-4f1e-40cb-a6d2-b6ac86a13ca1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" Feb 19 03:35:43.765047 master-0 kubenswrapper[33867]: I0219 03:35:43.764771 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-metrics-certs\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.769347 master-0 kubenswrapper[33867]: I0219 03:35:43.769289 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22564019-4f1e-40cb-a6d2-b6ac86a13ca1-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-n7lx6\" (UID: \"22564019-4f1e-40cb-a6d2-b6ac86a13ca1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" Feb 19 
03:35:43.775967 master-0 kubenswrapper[33867]: I0219 03:35:43.775907 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2877ad48-bf75-4a75-b6ca-8f48f0ede5df-metrics-certs\") pod \"frr-k8s-8rx68\" (UID: \"2877ad48-bf75-4a75-b6ca-8f48f0ede5df\") " pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:43.865900 master-0 kubenswrapper[33867]: I0219 03:35:43.865811 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:43.866377 master-0 kubenswrapper[33867]: I0219 03:35:43.865947 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c002fdf0-badd-4f0d-b300-460fb9a65d89-metrics-certs\") pod \"controller-69bbfbf88f-mn6gp\" (UID: \"c002fdf0-badd-4f0d-b300-460fb9a65d89\") " pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:43.866377 master-0 kubenswrapper[33867]: E0219 03:35:43.865990 33867 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 03:35:43.866377 master-0 kubenswrapper[33867]: E0219 03:35:43.866071 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist podName:ce9b802d-6caa-4b6e-9d4d-72b056257685 nodeName:}" failed. No retries permitted until 2026-02-19 03:35:44.866048493 +0000 UTC m=+750.162719104 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist") pod "speaker-psdfl" (UID: "ce9b802d-6caa-4b6e-9d4d-72b056257685") : secret "metallb-memberlist" not found Feb 19 03:35:43.871639 master-0 kubenswrapper[33867]: I0219 03:35:43.871552 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c002fdf0-badd-4f0d-b300-460fb9a65d89-metrics-certs\") pod \"controller-69bbfbf88f-mn6gp\" (UID: \"c002fdf0-badd-4f0d-b300-460fb9a65d89\") " pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:44.022192 master-0 kubenswrapper[33867]: I0219 03:35:44.021964 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" Feb 19 03:35:44.035394 master-0 kubenswrapper[33867]: I0219 03:35:44.035306 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:44.139629 master-0 kubenswrapper[33867]: I0219 03:35:44.139443 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:44.211517 master-0 kubenswrapper[33867]: I0219 03:35:44.211455 33867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 03:35:44.445434 master-0 kubenswrapper[33867]: I0219 03:35:44.445332 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rx68" event={"ID":"2877ad48-bf75-4a75-b6ca-8f48f0ede5df","Type":"ContainerStarted","Data":"a82e307bea63b66272002e881535411c2f4623716bc5459a92c658bf7bcd8f48"} Feb 19 03:35:44.472140 master-0 kubenswrapper[33867]: I0219 03:35:44.471435 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6"] Feb 19 03:35:44.654866 master-0 kubenswrapper[33867]: I0219 03:35:44.654792 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-mn6gp"] Feb 19 03:35:44.655591 master-0 kubenswrapper[33867]: W0219 03:35:44.655545 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc002fdf0_badd_4f0d_b300_460fb9a65d89.slice/crio-83ee75f7980004d95343a1ba1d160b3f0ac5da213736fc6a39c70914b7042b70 WatchSource:0}: Error finding container 83ee75f7980004d95343a1ba1d160b3f0ac5da213736fc6a39c70914b7042b70: Status 404 returned error can't find the container with id 83ee75f7980004d95343a1ba1d160b3f0ac5da213736fc6a39c70914b7042b70 Feb 19 03:35:44.887439 master-0 kubenswrapper[33867]: I0219 03:35:44.887363 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:44.887821 master-0 kubenswrapper[33867]: E0219 03:35:44.887616 33867 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 03:35:44.887821 master-0 kubenswrapper[33867]: E0219 03:35:44.887686 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist podName:ce9b802d-6caa-4b6e-9d4d-72b056257685 nodeName:}" failed. No retries permitted until 2026-02-19 03:35:46.887665519 +0000 UTC m=+752.184336130 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist") pod "speaker-psdfl" (UID: "ce9b802d-6caa-4b6e-9d4d-72b056257685") : secret "metallb-memberlist" not found Feb 19 03:35:45.206546 master-0 kubenswrapper[33867]: I0219 03:35:45.205735 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd"] Feb 19 03:35:45.216148 master-0 kubenswrapper[33867]: I0219 03:35:45.215802 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd" Feb 19 03:35:45.227192 master-0 kubenswrapper[33867]: I0219 03:35:45.225028 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd"] Feb 19 03:35:45.246201 master-0 kubenswrapper[33867]: I0219 03:35:45.246137 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4"] Feb 19 03:35:45.249061 master-0 kubenswrapper[33867]: I0219 03:35:45.248114 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" Feb 19 03:35:45.254781 master-0 kubenswrapper[33867]: I0219 03:35:45.254726 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 03:35:45.261480 master-0 kubenswrapper[33867]: I0219 03:35:45.257755 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-vjzqq"] Feb 19 03:35:45.262854 master-0 kubenswrapper[33867]: I0219 03:35:45.262806 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.269074 master-0 kubenswrapper[33867]: I0219 03:35:45.269012 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4"] Feb 19 03:35:45.298453 master-0 kubenswrapper[33867]: I0219 03:35:45.297087 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jpgf\" (UniqueName: \"kubernetes.io/projected/72a71435-3d39-4b6c-9c20-76deaf9da6fe-kube-api-access-7jpgf\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.298453 master-0 kubenswrapper[33867]: I0219 03:35:45.297164 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/72a71435-3d39-4b6c-9c20-76deaf9da6fe-nmstate-lock\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.298453 master-0 kubenswrapper[33867]: I0219 03:35:45.297192 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj9zm\" (UniqueName: \"kubernetes.io/projected/72878d47-67e4-4070-906c-a3749e8120f9-kube-api-access-mj9zm\") pod \"nmstate-webhook-866bcb46dc-47dd4\" (UID: \"72878d47-67e4-4070-906c-a3749e8120f9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" Feb 19 03:35:45.298453 master-0 kubenswrapper[33867]: I0219 03:35:45.297224 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/72a71435-3d39-4b6c-9c20-76deaf9da6fe-dbus-socket\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.298453 master-0 kubenswrapper[33867]: I0219 03:35:45.297272 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmjhd\" (UniqueName: \"kubernetes.io/projected/04643ffe-ea18-4ce9-b5f2-8c8ee3a649f3-kube-api-access-lmjhd\") pod \"nmstate-metrics-58c85c668d-fbnqd\" (UID: \"04643ffe-ea18-4ce9-b5f2-8c8ee3a649f3\") " 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd" Feb 19 03:35:45.298453 master-0 kubenswrapper[33867]: I0219 03:35:45.297320 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/72a71435-3d39-4b6c-9c20-76deaf9da6fe-ovs-socket\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.298453 master-0 kubenswrapper[33867]: I0219 03:35:45.297368 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72878d47-67e4-4070-906c-a3749e8120f9-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-47dd4\" (UID: \"72878d47-67e4-4070-906c-a3749e8120f9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: I0219 03:35:45.400372 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72878d47-67e4-4070-906c-a3749e8120f9-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-47dd4\" (UID: \"72878d47-67e4-4070-906c-a3749e8120f9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: I0219 03:35:45.400510 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jpgf\" (UniqueName: \"kubernetes.io/projected/72a71435-3d39-4b6c-9c20-76deaf9da6fe-kube-api-access-7jpgf\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: I0219 03:35:45.400538 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/72a71435-3d39-4b6c-9c20-76deaf9da6fe-nmstate-lock\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: I0219 03:35:45.400566 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj9zm\" (UniqueName: \"kubernetes.io/projected/72878d47-67e4-4070-906c-a3749e8120f9-kube-api-access-mj9zm\") pod \"nmstate-webhook-866bcb46dc-47dd4\" (UID: \"72878d47-67e4-4070-906c-a3749e8120f9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: I0219 03:35:45.400592 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/72a71435-3d39-4b6c-9c20-76deaf9da6fe-dbus-socket\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: I0219 03:35:45.400620 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmjhd\" (UniqueName: \"kubernetes.io/projected/04643ffe-ea18-4ce9-b5f2-8c8ee3a649f3-kube-api-access-lmjhd\") pod \"nmstate-metrics-58c85c668d-fbnqd\" (UID: \"04643ffe-ea18-4ce9-b5f2-8c8ee3a649f3\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd" Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: I0219 03:35:45.400664 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/72a71435-3d39-4b6c-9c20-76deaf9da6fe-ovs-socket\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: I0219 03:35:45.400780 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/72a71435-3d39-4b6c-9c20-76deaf9da6fe-ovs-socket\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: E0219 03:35:45.401537 33867 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: E0219 03:35:45.401617 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72878d47-67e4-4070-906c-a3749e8120f9-tls-key-pair podName:72878d47-67e4-4070-906c-a3749e8120f9 nodeName:}" failed. No retries permitted until 2026-02-19 03:35:45.901594505 +0000 UTC m=+751.198265116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/72878d47-67e4-4070-906c-a3749e8120f9-tls-key-pair") pod "nmstate-webhook-866bcb46dc-47dd4" (UID: "72878d47-67e4-4070-906c-a3749e8120f9") : secret "openshift-nmstate-webhook" not found Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: I0219 03:35:45.402411 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/72a71435-3d39-4b6c-9c20-76deaf9da6fe-dbus-socket\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.409390 master-0 kubenswrapper[33867]: I0219 03:35:45.403716 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/72a71435-3d39-4b6c-9c20-76deaf9da6fe-nmstate-lock\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.413384 master-0 kubenswrapper[33867]: I0219 03:35:45.411836 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v"] Feb 19 03:35:45.431997 master-0 kubenswrapper[33867]: I0219 03:35:45.413491 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" Feb 19 03:35:45.431997 master-0 kubenswrapper[33867]: I0219 03:35:45.416209 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 03:35:45.431997 master-0 kubenswrapper[33867]: I0219 03:35:45.416443 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 03:35:45.431997 master-0 kubenswrapper[33867]: I0219 03:35:45.428202 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jpgf\" (UniqueName: \"kubernetes.io/projected/72a71435-3d39-4b6c-9c20-76deaf9da6fe-kube-api-access-7jpgf\") pod \"nmstate-handler-vjzqq\" (UID: \"72a71435-3d39-4b6c-9c20-76deaf9da6fe\") " pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.431997 master-0 kubenswrapper[33867]: I0219 03:35:45.428224 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmjhd\" (UniqueName: \"kubernetes.io/projected/04643ffe-ea18-4ce9-b5f2-8c8ee3a649f3-kube-api-access-lmjhd\") pod \"nmstate-metrics-58c85c668d-fbnqd\" (UID: \"04643ffe-ea18-4ce9-b5f2-8c8ee3a649f3\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd" Feb 19 03:35:45.444222 master-0 kubenswrapper[33867]: I0219 03:35:45.444139 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj9zm\" (UniqueName: \"kubernetes.io/projected/72878d47-67e4-4070-906c-a3749e8120f9-kube-api-access-mj9zm\") pod \"nmstate-webhook-866bcb46dc-47dd4\" (UID: \"72878d47-67e4-4070-906c-a3749e8120f9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" Feb 19 03:35:45.456123 master-0 kubenswrapper[33867]: I0219 03:35:45.456058 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v"] Feb 19 03:35:45.467406 master-0 kubenswrapper[33867]: I0219 03:35:45.467174 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-mn6gp" event={"ID":"c002fdf0-badd-4f0d-b300-460fb9a65d89","Type":"ContainerStarted","Data":"d617ee61c6f60c29fe9f42e0c2e2460a006da0923f9783cc0c6a7488711497ae"} Feb 19 03:35:45.467406 master-0 kubenswrapper[33867]: I0219 03:35:45.467291 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-mn6gp" event={"ID":"c002fdf0-badd-4f0d-b300-460fb9a65d89","Type":"ContainerStarted","Data":"83ee75f7980004d95343a1ba1d160b3f0ac5da213736fc6a39c70914b7042b70"} Feb 19 03:35:45.480806 master-0 kubenswrapper[33867]: I0219 03:35:45.480716 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" event={"ID":"22564019-4f1e-40cb-a6d2-b6ac86a13ca1","Type":"ContainerStarted","Data":"0890edd4123646dee22e4f06758601d7c49957f0ebcb4b71ee8339f7db743c3b"} Feb 19 03:35:45.567021 master-0 kubenswrapper[33867]: I0219 03:35:45.566937 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd" Feb 19 03:35:45.616852 master-0 kubenswrapper[33867]: I0219 03:35:45.616752 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f17a36fe-e4a5-4651-b03e-f4b9741b5ad1-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5zg2v\" (UID: \"f17a36fe-e4a5-4651-b03e-f4b9741b5ad1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" Feb 19 03:35:45.617128 master-0 kubenswrapper[33867]: I0219 03:35:45.616934 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f17a36fe-e4a5-4651-b03e-f4b9741b5ad1-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5zg2v\" (UID: \"f17a36fe-e4a5-4651-b03e-f4b9741b5ad1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" Feb 19 03:35:45.617483 master-0 kubenswrapper[33867]: I0219 03:35:45.617441 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjvj2\" (UniqueName: \"kubernetes.io/projected/f17a36fe-e4a5-4651-b03e-f4b9741b5ad1-kube-api-access-bjvj2\") pod \"nmstate-console-plugin-5c78fc5d65-5zg2v\" (UID: \"f17a36fe-e4a5-4651-b03e-f4b9741b5ad1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" Feb 19 03:35:45.617988 master-0 kubenswrapper[33867]: I0219 03:35:45.617945 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:45.644570 master-0 kubenswrapper[33867]: I0219 03:35:45.644460 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-84fb999cb7-wzrtl"] Feb 19 03:35:45.649401 master-0 kubenswrapper[33867]: I0219 03:35:45.646843 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.656108 master-0 kubenswrapper[33867]: I0219 03:35:45.655981 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84fb999cb7-wzrtl"] Feb 19 03:35:45.734247 master-0 kubenswrapper[33867]: I0219 03:35:45.733991 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjvj2\" (UniqueName: \"kubernetes.io/projected/f17a36fe-e4a5-4651-b03e-f4b9741b5ad1-kube-api-access-bjvj2\") pod \"nmstate-console-plugin-5c78fc5d65-5zg2v\" (UID: \"f17a36fe-e4a5-4651-b03e-f4b9741b5ad1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" Feb 19 03:35:45.734247 master-0 kubenswrapper[33867]: I0219 03:35:45.734137 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f17a36fe-e4a5-4651-b03e-f4b9741b5ad1-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5zg2v\" (UID: \"f17a36fe-e4a5-4651-b03e-f4b9741b5ad1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" Feb 19 03:35:45.734247 master-0 kubenswrapper[33867]: I0219 03:35:45.734224 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f17a36fe-e4a5-4651-b03e-f4b9741b5ad1-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5zg2v\" (UID: \"f17a36fe-e4a5-4651-b03e-f4b9741b5ad1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" Feb 19 03:35:45.735677 master-0 kubenswrapper[33867]: I0219 03:35:45.735646 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f17a36fe-e4a5-4651-b03e-f4b9741b5ad1-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5zg2v\" (UID: \"f17a36fe-e4a5-4651-b03e-f4b9741b5ad1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" Feb 19 03:35:45.739010 master-0 kubenswrapper[33867]: I0219 03:35:45.738958 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f17a36fe-e4a5-4651-b03e-f4b9741b5ad1-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5zg2v\" (UID: \"f17a36fe-e4a5-4651-b03e-f4b9741b5ad1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" Feb 19 03:35:45.770596 master-0 kubenswrapper[33867]: I0219 03:35:45.765142 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjvj2\" (UniqueName: \"kubernetes.io/projected/f17a36fe-e4a5-4651-b03e-f4b9741b5ad1-kube-api-access-bjvj2\") pod \"nmstate-console-plugin-5c78fc5d65-5zg2v\" (UID: \"f17a36fe-e4a5-4651-b03e-f4b9741b5ad1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" Feb 19 03:35:45.802894 master-0 kubenswrapper[33867]: I0219 03:35:45.802824 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" Feb 19 03:35:45.841116 master-0 kubenswrapper[33867]: I0219 03:35:45.841040 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-service-ca\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.842911 master-0 kubenswrapper[33867]: I0219 03:35:45.841229 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-console-config\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.842911 master-0 kubenswrapper[33867]: I0219 03:35:45.841400 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c4eb386-9996-4d66-affc-b9a55882cc66-console-oauth-config\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.842911 master-0 kubenswrapper[33867]: I0219 03:35:45.841602 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-trusted-ca-bundle\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.842911 master-0 kubenswrapper[33867]: I0219 03:35:45.841731 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-oauth-serving-cert\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.842911 master-0 kubenswrapper[33867]: I0219 03:35:45.841870 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79w4n\" (UniqueName: \"kubernetes.io/projected/0c4eb386-9996-4d66-affc-b9a55882cc66-kube-api-access-79w4n\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.842911 master-0 kubenswrapper[33867]: I0219 03:35:45.842000 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4eb386-9996-4d66-affc-b9a55882cc66-console-serving-cert\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.944240 master-0 kubenswrapper[33867]: I0219 03:35:45.944146 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4eb386-9996-4d66-affc-b9a55882cc66-console-serving-cert\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.944569 master-0 kubenswrapper[33867]: I0219 
03:35:45.944310 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-service-ca\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.944569 master-0 kubenswrapper[33867]: I0219 03:35:45.944345 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-console-config\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.944569 master-0 kubenswrapper[33867]: I0219 03:35:45.944372 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c4eb386-9996-4d66-affc-b9a55882cc66-console-oauth-config\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.944569 master-0 kubenswrapper[33867]: I0219 03:35:45.944417 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-trusted-ca-bundle\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.944569 master-0 kubenswrapper[33867]: I0219 03:35:45.944456 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-oauth-serving-cert\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.944569 master-0 kubenswrapper[33867]: I0219 03:35:45.944487 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79w4n\" (UniqueName: \"kubernetes.io/projected/0c4eb386-9996-4d66-affc-b9a55882cc66-kube-api-access-79w4n\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.944569 master-0 kubenswrapper[33867]: I0219 03:35:45.944527 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72878d47-67e4-4070-906c-a3749e8120f9-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-47dd4\" (UID: \"72878d47-67e4-4070-906c-a3749e8120f9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" Feb 19 03:35:45.948195 master-0 kubenswrapper[33867]: I0219 03:35:45.948140 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-oauth-serving-cert\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.948855 master-0 kubenswrapper[33867]: I0219 03:35:45.948793 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-service-ca\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " 
pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.948855 master-0 kubenswrapper[33867]: I0219 03:35:45.948808 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-trusted-ca-bundle\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.949587 master-0 kubenswrapper[33867]: I0219 03:35:45.949545 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c4eb386-9996-4d66-affc-b9a55882cc66-console-config\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.950486 master-0 kubenswrapper[33867]: I0219 03:35:45.950450 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72878d47-67e4-4070-906c-a3749e8120f9-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-47dd4\" (UID: \"72878d47-67e4-4070-906c-a3749e8120f9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" Feb 19 03:35:45.954371 master-0 kubenswrapper[33867]: I0219 03:35:45.954307 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4eb386-9996-4d66-affc-b9a55882cc66-console-serving-cert\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.954944 master-0 kubenswrapper[33867]: I0219 03:35:45.954890 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c4eb386-9996-4d66-affc-b9a55882cc66-console-oauth-config\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.971947 master-0 kubenswrapper[33867]: I0219 03:35:45.971594 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79w4n\" (UniqueName: \"kubernetes.io/projected/0c4eb386-9996-4d66-affc-b9a55882cc66-kube-api-access-79w4n\") pod \"console-84fb999cb7-wzrtl\" (UID: \"0c4eb386-9996-4d66-affc-b9a55882cc66\") " pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:45.997097 master-0 kubenswrapper[33867]: I0219 03:35:45.997003 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:46.128940 master-0 kubenswrapper[33867]: I0219 03:35:46.126360 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd"] Feb 19 03:35:46.195757 master-0 kubenswrapper[33867]: I0219 03:35:46.195686 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" Feb 19 03:35:46.283295 master-0 kubenswrapper[33867]: I0219 03:35:46.282666 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v"] Feb 19 03:35:46.291296 master-0 kubenswrapper[33867]: W0219 03:35:46.291202 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf17a36fe_e4a5_4651_b03e_f4b9741b5ad1.slice/crio-881d8f628945d08dd7ceb9e17254e375a153cea59f6c3d3d095254e3a32a6c82 WatchSource:0}: Error finding container 881d8f628945d08dd7ceb9e17254e375a153cea59f6c3d3d095254e3a32a6c82: Status 404 returned error can't find the container with id 881d8f628945d08dd7ceb9e17254e375a153cea59f6c3d3d095254e3a32a6c82 Feb 19 03:35:46.563342 master-0 kubenswrapper[33867]: I0219 03:35:46.560679 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" event={"ID":"f17a36fe-e4a5-4651-b03e-f4b9741b5ad1","Type":"ContainerStarted","Data":"881d8f628945d08dd7ceb9e17254e375a153cea59f6c3d3d095254e3a32a6c82"} Feb 19 03:35:46.587406 master-0 kubenswrapper[33867]: I0219 03:35:46.587352 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vjzqq" event={"ID":"72a71435-3d39-4b6c-9c20-76deaf9da6fe","Type":"ContainerStarted","Data":"f8e6e5836a278c913bb287ae6c851be14df3acd0ffdd7baeb3bd00ae1b6ef708"} Feb 19 03:35:46.589161 master-0 kubenswrapper[33867]: I0219 03:35:46.589128 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd" event={"ID":"04643ffe-ea18-4ce9-b5f2-8c8ee3a649f3","Type":"ContainerStarted","Data":"04456a736ef2ad1bea2e38fe2dc33cd5d8c17649a6e8657a9f174ee94c263844"} Feb 19 03:35:46.594066 master-0 kubenswrapper[33867]: I0219 03:35:46.594014 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-mn6gp" event={"ID":"c002fdf0-badd-4f0d-b300-460fb9a65d89","Type":"ContainerStarted","Data":"e7b4d4021c137e3c0118a0eb44d0f448a7509024a1a275c37db4f9b4bd4e026e"} Feb 19 03:35:46.595638 master-0 kubenswrapper[33867]: I0219 03:35:46.595580 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:46.604869 master-0 kubenswrapper[33867]: I0219 03:35:46.604817 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84fb999cb7-wzrtl"] Feb 19 03:35:46.637008 master-0 kubenswrapper[33867]: I0219 03:35:46.636930 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-mn6gp" podStartSLOduration=2.503045383 podStartE2EDuration="3.636906091s" podCreationTimestamp="2026-02-19 03:35:43 +0000 UTC" firstStartedPulling="2026-02-19 03:35:44.844729876 +0000 UTC m=+750.141400487" lastFinishedPulling="2026-02-19 03:35:45.978590584 +0000 UTC m=+751.275261195" observedRunningTime="2026-02-19 03:35:46.629469651 +0000 UTC m=+751.926140272" watchObservedRunningTime="2026-02-19 03:35:46.636906091 +0000 UTC m=+751.933576692" Feb 19 03:35:46.690868 master-0 kubenswrapper[33867]: I0219 03:35:46.690792 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4"] Feb 19 03:35:46.916573 master-0 kubenswrapper[33867]: I0219 03:35:46.916485 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"memberlist\" (UniqueName: \"kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:46.920647 master-0 kubenswrapper[33867]: I0219 03:35:46.920588 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce9b802d-6caa-4b6e-9d4d-72b056257685-memberlist\") pod \"speaker-psdfl\" (UID: \"ce9b802d-6caa-4b6e-9d4d-72b056257685\") " pod="metallb-system/speaker-psdfl" Feb 19 03:35:47.114453 master-0 kubenswrapper[33867]: I0219 03:35:47.114116 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-psdfl" Feb 19 03:35:47.158943 master-0 kubenswrapper[33867]: W0219 03:35:47.158865 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce9b802d_6caa_4b6e_9d4d_72b056257685.slice/crio-2c025d20bf1b500cbc940f3272c9e8721a9b04f775a99274a25cf623ed2ac00b WatchSource:0}: Error finding container 2c025d20bf1b500cbc940f3272c9e8721a9b04f775a99274a25cf623ed2ac00b: Status 404 returned error can't find the container with id 2c025d20bf1b500cbc940f3272c9e8721a9b04f775a99274a25cf623ed2ac00b Feb 19 03:35:47.627000 master-0 kubenswrapper[33867]: I0219 03:35:47.623421 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-psdfl" event={"ID":"ce9b802d-6caa-4b6e-9d4d-72b056257685","Type":"ContainerStarted","Data":"fae447f11164d7a5ce660ea8c2af3d47f9758ba9f88c359ec8927590e59d9037"} Feb 19 03:35:47.627000 master-0 kubenswrapper[33867]: I0219 03:35:47.623512 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-psdfl" event={"ID":"ce9b802d-6caa-4b6e-9d4d-72b056257685","Type":"ContainerStarted","Data":"2c025d20bf1b500cbc940f3272c9e8721a9b04f775a99274a25cf623ed2ac00b"} Feb 19 03:35:47.628522 master-0 kubenswrapper[33867]: I0219 03:35:47.628460 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" event={"ID":"72878d47-67e4-4070-906c-a3749e8120f9","Type":"ContainerStarted","Data":"b65c6a7ecfea01c5b4b2328aee2e22e5315867e1cd48487b755dec6e8833c2f2"} Feb 19 03:35:47.631609 master-0 kubenswrapper[33867]: I0219 03:35:47.631442 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fb999cb7-wzrtl" event={"ID":"0c4eb386-9996-4d66-affc-b9a55882cc66","Type":"ContainerStarted","Data":"2bcabcf10bcab99421319b9d0278d6ff96a08c550a66750aac23ed1510d9200c"} Feb 19 03:35:47.631609 master-0 kubenswrapper[33867]: I0219 03:35:47.631475 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fb999cb7-wzrtl" event={"ID":"0c4eb386-9996-4d66-affc-b9a55882cc66","Type":"ContainerStarted","Data":"156f5dbe44aee145f03172f0f15abbab2b8232e5e608017f7e9d986ad08cc8f7"} Feb 19 03:35:47.660916 master-0 kubenswrapper[33867]: I0219 03:35:47.660811 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-84fb999cb7-wzrtl" podStartSLOduration=2.66079001 podStartE2EDuration="2.66079001s" podCreationTimestamp="2026-02-19 03:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:35:47.659674589 +0000 UTC m=+752.956345210" watchObservedRunningTime="2026-02-19 03:35:47.66079001 +0000 UTC m=+752.957460621" Feb 19 
03:35:48.647532 master-0 kubenswrapper[33867]: I0219 03:35:48.647426 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-psdfl" event={"ID":"ce9b802d-6caa-4b6e-9d4d-72b056257685","Type":"ContainerStarted","Data":"24a291e95ac54f44424e29b23e89c913f2827c02190809a707fb06237ece9fd4"} Feb 19 03:35:48.649115 master-0 kubenswrapper[33867]: I0219 03:35:48.648186 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-psdfl" Feb 19 03:35:48.674836 master-0 kubenswrapper[33867]: I0219 03:35:48.674291 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-psdfl" podStartSLOduration=5.674266396 podStartE2EDuration="5.674266396s" podCreationTimestamp="2026-02-19 03:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:35:48.672527727 +0000 UTC m=+753.969198338" watchObservedRunningTime="2026-02-19 03:35:48.674266396 +0000 UTC m=+753.970937007" Feb 19 03:35:52.698634 master-0 kubenswrapper[33867]: I0219 03:35:52.698564 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" event={"ID":"72878d47-67e4-4070-906c-a3749e8120f9","Type":"ContainerStarted","Data":"8a5e74c56fec06c80bd543f3297eb57f9e9c9d2031b36542ee1197b88740eb5d"} Feb 19 03:35:52.699361 master-0 kubenswrapper[33867]: I0219 03:35:52.698647 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" Feb 19 03:35:52.700891 master-0 kubenswrapper[33867]: I0219 03:35:52.700816 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" event={"ID":"f17a36fe-e4a5-4651-b03e-f4b9741b5ad1","Type":"ContainerStarted","Data":"49ac8f9878c76ea67b3e80ba04d22e03e8525438acc574cb968746d7c852b548"} Feb 19 03:35:52.706279 master-0 kubenswrapper[33867]: I0219 03:35:52.706156 33867 generic.go:334] "Generic (PLEG): container finished" podID="2877ad48-bf75-4a75-b6ca-8f48f0ede5df" containerID="ed31b4d5b64fd794a1166e762acf821259866a6d3ebcbeb146667bb3e161bfec" exitCode=0 Feb 19 03:35:52.707298 master-0 kubenswrapper[33867]: I0219 03:35:52.707244 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rx68" event={"ID":"2877ad48-bf75-4a75-b6ca-8f48f0ede5df","Type":"ContainerDied","Data":"ed31b4d5b64fd794a1166e762acf821259866a6d3ebcbeb146667bb3e161bfec"} Feb 19 03:35:52.710435 master-0 kubenswrapper[33867]: I0219 03:35:52.710299 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vjzqq" event={"ID":"72a71435-3d39-4b6c-9c20-76deaf9da6fe","Type":"ContainerStarted","Data":"8cf3b65768dd80b37ef9927ccf251bb98cf993c0ddeecb0250bdba91cd92719f"} Feb 19 03:35:52.711632 master-0 kubenswrapper[33867]: I0219 03:35:52.711598 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:35:52.714761 master-0 kubenswrapper[33867]: I0219 03:35:52.714725 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd" event={"ID":"04643ffe-ea18-4ce9-b5f2-8c8ee3a649f3","Type":"ContainerStarted","Data":"a6da5ccfaa9faf50e40b9a13eb93d2088abbe534bb16961009c9af2d2287023c"} Feb 19 03:35:52.714827 master-0 kubenswrapper[33867]: I0219 03:35:52.714763 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd" event={"ID":"04643ffe-ea18-4ce9-b5f2-8c8ee3a649f3","Type":"ContainerStarted","Data":"0cde053f5b2fdd8925bc38311ebde570fa48f617a986392e85cad1101bba799e"} Feb 19 03:35:52.720686 master-0 kubenswrapper[33867]: I0219 03:35:52.720621 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" event={"ID":"22564019-4f1e-40cb-a6d2-b6ac86a13ca1","Type":"ContainerStarted","Data":"f74151cc54cb9cd2598872ea3fe926292e0e54b7656af8e81471a9f48a2eac09"} Feb 19 03:35:52.721189 master-0 kubenswrapper[33867]: I0219 03:35:52.721165 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" Feb 19 03:35:52.740342 master-0 kubenswrapper[33867]: I0219 03:35:52.739851 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" podStartSLOduration=2.869054718 podStartE2EDuration="7.739821348s" podCreationTimestamp="2026-02-19 03:35:45 +0000 UTC" firstStartedPulling="2026-02-19 03:35:46.710647026 +0000 UTC m=+752.007317637" lastFinishedPulling="2026-02-19 03:35:51.581413646 +0000 UTC m=+756.878084267" observedRunningTime="2026-02-19 03:35:52.727605733 +0000 UTC m=+758.024276344" watchObservedRunningTime="2026-02-19 03:35:52.739821348 +0000 UTC m=+758.036491959" Feb 19 03:35:52.783142 master-0 kubenswrapper[33867]: I0219 03:35:52.781121 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd" podStartSLOduration=2.297243675 podStartE2EDuration="7.781082214s" podCreationTimestamp="2026-02-19 03:35:45 +0000 UTC" firstStartedPulling="2026-02-19 03:35:46.116610095 +0000 UTC m=+751.413280696" lastFinishedPulling="2026-02-19 03:35:51.600448584 +0000 UTC m=+756.897119235" observedRunningTime="2026-02-19 03:35:52.758937558 +0000 UTC m=+758.055608169" watchObservedRunningTime="2026-02-19 03:35:52.781082214 +0000 UTC m=+758.077752825" Feb 19 03:35:52.799503 master-0 kubenswrapper[33867]: I0219 03:35:52.799382 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" podStartSLOduration=3.660571976 podStartE2EDuration="10.799345391s" podCreationTimestamp="2026-02-19 03:35:42 +0000 UTC" firstStartedPulling="2026-02-19 03:35:44.475312634 +0000 UTC m=+749.771983245" lastFinishedPulling="2026-02-19 03:35:51.614086029 +0000 UTC m=+756.910756660" observedRunningTime="2026-02-19 03:35:52.784963854 +0000 UTC m=+758.081634465" watchObservedRunningTime="2026-02-19 03:35:52.799345391 +0000 UTC m=+758.096016002" Feb 19 03:35:52.816705 master-0 kubenswrapper[33867]: I0219 03:35:52.816597 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-vjzqq" podStartSLOduration=1.9938370600000002 podStartE2EDuration="7.816566367s" podCreationTimestamp="2026-02-19 03:35:45 +0000 UTC" firstStartedPulling="2026-02-19 03:35:45.762298561 +0000 UTC m=+751.058969172" lastFinishedPulling="2026-02-19 03:35:51.585027858 +0000 UTC m=+756.881698479" observedRunningTime="2026-02-19 03:35:52.808978113 +0000 UTC m=+758.105648724" watchObservedRunningTime="2026-02-19 03:35:52.816566367 +0000 UTC m=+758.113236978" Feb 19 03:35:52.849009 master-0 kubenswrapper[33867]: I0219 03:35:52.848515 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v" podStartSLOduration=2.563041819 podStartE2EDuration="7.848467649s" podCreationTimestamp="2026-02-19 03:35:45 +0000 UTC" firstStartedPulling="2026-02-19 03:35:46.300221075 +0000 UTC m=+751.596891686" lastFinishedPulling="2026-02-19 03:35:51.585646895 +0000 UTC m=+756.882317516" observedRunningTime="2026-02-19 03:35:52.833692111 +0000 UTC m=+758.130362722" watchObservedRunningTime="2026-02-19 03:35:52.848467649 +0000 UTC m=+758.145138260" Feb 19 03:35:53.736376 master-0 kubenswrapper[33867]: I0219 03:35:53.736303 33867 generic.go:334] "Generic (PLEG): container finished" podID="2877ad48-bf75-4a75-b6ca-8f48f0ede5df" containerID="19b569d2b8eb5f81944627072286bf8f901b1d69dede08320f1e1d5fae512b93" exitCode=0 Feb 19 03:35:53.737074 master-0 kubenswrapper[33867]: I0219 03:35:53.736463 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rx68" event={"ID":"2877ad48-bf75-4a75-b6ca-8f48f0ede5df","Type":"ContainerDied","Data":"19b569d2b8eb5f81944627072286bf8f901b1d69dede08320f1e1d5fae512b93"} Feb 19 03:35:54.144041 master-0 kubenswrapper[33867]: I0219 03:35:54.143943 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-mn6gp" Feb 19 03:35:54.752789 master-0 kubenswrapper[33867]: I0219 03:35:54.752716 33867 generic.go:334] "Generic (PLEG): container finished" podID="2877ad48-bf75-4a75-b6ca-8f48f0ede5df" containerID="e9ddcfc9c61cde10da5f4db9a6bf0dac43ff03a710e3de9b5ea4b42c7a47e6e3" exitCode=0 Feb 19 03:35:54.753362 master-0 kubenswrapper[33867]: I0219 03:35:54.752793 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rx68" event={"ID":"2877ad48-bf75-4a75-b6ca-8f48f0ede5df","Type":"ContainerDied","Data":"e9ddcfc9c61cde10da5f4db9a6bf0dac43ff03a710e3de9b5ea4b42c7a47e6e3"} Feb 19 03:35:55.767925 master-0 kubenswrapper[33867]: I0219 03:35:55.767869 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rx68" event={"ID":"2877ad48-bf75-4a75-b6ca-8f48f0ede5df","Type":"ContainerStarted","Data":"42a1e0085c60e5340a104e990ff49d9add857c8c47e43480124f29e0316f638c"} Feb 19 03:35:55.768291 master-0 kubenswrapper[33867]: I0219 03:35:55.767932 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rx68" event={"ID":"2877ad48-bf75-4a75-b6ca-8f48f0ede5df","Type":"ContainerStarted","Data":"55a6733102917b1b15da82d4387f86b5cd26db4c90510b923f3c6baeacfda156"} Feb 19 03:35:55.768291 master-0 kubenswrapper[33867]: I0219 03:35:55.767943 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rx68" event={"ID":"2877ad48-bf75-4a75-b6ca-8f48f0ede5df","Type":"ContainerStarted","Data":"fba53abbab0aaeb987c657e8f50fdeae46441e5245d174180551fe3085cd572f"} Feb 19 03:35:55.768291 master-0 kubenswrapper[33867]: I0219 03:35:55.767958 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rx68" event={"ID":"2877ad48-bf75-4a75-b6ca-8f48f0ede5df","Type":"ContainerStarted","Data":"bb332dccb3e864d37310f69bc7bca61bb785585acf53abee7ad02159fd5ccab1"} Feb 19 03:35:55.768291 master-0 kubenswrapper[33867]: I0219 03:35:55.767966 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rx68" event={"ID":"2877ad48-bf75-4a75-b6ca-8f48f0ede5df","Type":"ContainerStarted","Data":"99e7dacae7ce4fd42cb7871480dca8201cca0843a5a78bae1c77cd87a3208ea9"} Feb 19 03:35:55.999342 master-0 kubenswrapper[33867]: I0219 03:35:55.999245 33867 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:55.999618 master-0 kubenswrapper[33867]: I0219 03:35:55.999369 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:56.008569 master-0 kubenswrapper[33867]: I0219 03:35:56.008516 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:56.780988 master-0 kubenswrapper[33867]: I0219 03:35:56.780906 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rx68" event={"ID":"2877ad48-bf75-4a75-b6ca-8f48f0ede5df","Type":"ContainerStarted","Data":"191d9bc85ca23a5fe01c20dcd40584adc62604be5c13a000d869709c56780ac7"} Feb 19 03:35:56.791589 master-0 kubenswrapper[33867]: I0219 03:35:56.791534 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-84fb999cb7-wzrtl" Feb 19 03:35:56.825565 master-0 kubenswrapper[33867]: I0219 03:35:56.825422 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-8rx68" podStartSLOduration=7.404029003 podStartE2EDuration="14.825384865s" podCreationTimestamp="2026-02-19 03:35:42 +0000 UTC" firstStartedPulling="2026-02-19 03:35:44.211370194 +0000 UTC m=+749.508040805" lastFinishedPulling="2026-02-19 03:35:51.632726026 +0000 UTC m=+756.929396667" observedRunningTime="2026-02-19 03:35:56.819991333 +0000 UTC m=+762.116661984" watchObservedRunningTime="2026-02-19 03:35:56.825384865 +0000 UTC m=+762.122055476" Feb 19 03:35:57.123481 master-0 kubenswrapper[33867]: I0219 03:35:57.121224 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-psdfl" Feb 19 03:35:57.591470 master-0 kubenswrapper[33867]: I0219 03:35:57.591381 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-69658754cd-pqnxr"] Feb 19 03:35:57.791863 master-0 kubenswrapper[33867]: I0219 03:35:57.791778 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:59.036617 master-0 kubenswrapper[33867]: I0219 03:35:59.036527 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-8rx68" Feb 19 03:35:59.076287 master-0 kubenswrapper[33867]: I0219 03:35:59.076215 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-8rx68" Feb 19 03:36:00.657619 master-0 kubenswrapper[33867]: I0219 03:36:00.657545 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vjzqq" Feb 19 03:36:04.027417 master-0 kubenswrapper[33867]: I0219 03:36:04.027236 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6" Feb 19 03:36:04.043700 master-0 kubenswrapper[33867]: I0219 03:36:04.043611 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-8rx68" Feb 19 03:36:06.203090 master-0 kubenswrapper[33867]: I0219 03:36:06.202998 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4" Feb 19 03:36:11.014037 master-0 kubenswrapper[33867]: I0219 03:36:11.013964 33867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-storage/vg-manager-rmnn4"] Feb 19 03:36:11.015614 master-0 kubenswrapper[33867]: I0219 03:36:11.015587 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.018834 master-0 kubenswrapper[33867]: I0219 03:36:11.018781 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Feb 19 03:36:11.039295 master-0 kubenswrapper[33867]: I0219 03:36:11.039198 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-rmnn4"] Feb 19 03:36:11.182722 master-0 kubenswrapper[33867]: I0219 03:36:11.182240 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-registration-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.182722 master-0 kubenswrapper[33867]: I0219 03:36:11.182361 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-node-plugin-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.182722 master-0 kubenswrapper[33867]: I0219 03:36:11.182586 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-pod-volumes-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.182722 master-0 kubenswrapper[33867]: I0219 03:36:11.182644 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-csi-plugin-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.182722 master-0 kubenswrapper[33867]: I0219 03:36:11.182701 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-sys\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.182722 master-0 kubenswrapper[33867]: I0219 03:36:11.182744 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-run-udev\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.183296 master-0 kubenswrapper[33867]: I0219 03:36:11.182814 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbe74299-6c9b-4699-87d4-309034391fa1-metrics-cert\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.183296 master-0 kubenswrapper[33867]: I0219 03:36:11.182861 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-file-lock-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.183296 master-0 kubenswrapper[33867]: I0219 03:36:11.182895 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-device-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.183296 master-0 kubenswrapper[33867]: I0219 03:36:11.182995 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4vfs\" (UniqueName: \"kubernetes.io/projected/fbe74299-6c9b-4699-87d4-309034391fa1-kube-api-access-j4vfs\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.183296 master-0 kubenswrapper[33867]: I0219 03:36:11.183042 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-lvmd-config\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.284881 master-0 kubenswrapper[33867]: I0219 03:36:11.284720 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-device-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.284881 master-0 kubenswrapper[33867]: I0219 03:36:11.284814 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4vfs\" (UniqueName: \"kubernetes.io/projected/fbe74299-6c9b-4699-87d4-309034391fa1-kube-api-access-j4vfs\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.284881 master-0 kubenswrapper[33867]: I0219 03:36:11.284854 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-lvmd-config\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.285236 master-0 kubenswrapper[33867]: I0219 03:36:11.284911 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-registration-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.285236 master-0 kubenswrapper[33867]: I0219 03:36:11.284951 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-node-plugin-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.285236 master-0 kubenswrapper[33867]: I0219 03:36:11.284994 33867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-pod-volumes-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.285236 master-0 kubenswrapper[33867]: I0219 03:36:11.285017 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-csi-plugin-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.285236 master-0 kubenswrapper[33867]: I0219 03:36:11.285048 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-sys\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.285236 master-0 kubenswrapper[33867]: I0219 03:36:11.285071 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-run-udev\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.285236 master-0 kubenswrapper[33867]: I0219 03:36:11.285096 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbe74299-6c9b-4699-87d4-309034391fa1-metrics-cert\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.285236 master-0 kubenswrapper[33867]: I0219 03:36:11.285125 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-file-lock-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.285576 master-0 kubenswrapper[33867]: I0219 03:36:11.285498 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-file-lock-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.285576 master-0 kubenswrapper[33867]: I0219 03:36:11.285568 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-device-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.286052 master-0 kubenswrapper[33867]: I0219 03:36:11.286025 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-lvmd-config\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.286127 master-0 kubenswrapper[33867]: I0219 03:36:11.286092 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-registration-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.286289 master-0 kubenswrapper[33867]: I0219 03:36:11.286269 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-node-plugin-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.286357 master-0 kubenswrapper[33867]: I0219 03:36:11.286333 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-pod-volumes-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.286491 master-0 kubenswrapper[33867]: I0219 03:36:11.286471 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-csi-plugin-dir\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.286556 master-0 kubenswrapper[33867]: I0219 03:36:11.286514 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-sys\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.286556 master-0 kubenswrapper[33867]: I0219 03:36:11.286549 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/fbe74299-6c9b-4699-87d4-309034391fa1-run-udev\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.290051 master-0 kubenswrapper[33867]: I0219 03:36:11.290013 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbe74299-6c9b-4699-87d4-309034391fa1-metrics-cert\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.302524 master-0 kubenswrapper[33867]: I0219 03:36:11.302479 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4vfs\" (UniqueName: \"kubernetes.io/projected/fbe74299-6c9b-4699-87d4-309034391fa1-kube-api-access-j4vfs\") pod \"vg-manager-rmnn4\" (UID: \"fbe74299-6c9b-4699-87d4-309034391fa1\") " pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.330847 master-0 kubenswrapper[33867]: I0219 03:36:11.330706 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:11.796333 master-0 kubenswrapper[33867]: W0219 03:36:11.796238 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbe74299_6c9b_4699_87d4_309034391fa1.slice/crio-8d5d062846a3803a7ed25c8b32e282180937a2bc454fe394babf87a80c57bcc5 WatchSource:0}: Error finding container 8d5d062846a3803a7ed25c8b32e282180937a2bc454fe394babf87a80c57bcc5: Status 404 returned error can't find the container with id 8d5d062846a3803a7ed25c8b32e282180937a2bc454fe394babf87a80c57bcc5 Feb 19 03:36:11.805291 master-0 kubenswrapper[33867]: I0219 03:36:11.805232 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-rmnn4"] Feb 19 03:36:11.945420 master-0 kubenswrapper[33867]: I0219 03:36:11.945356 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-rmnn4" event={"ID":"fbe74299-6c9b-4699-87d4-309034391fa1","Type":"ContainerStarted","Data":"8d5d062846a3803a7ed25c8b32e282180937a2bc454fe394babf87a80c57bcc5"} Feb 19 03:36:12.965403 master-0 kubenswrapper[33867]: I0219 03:36:12.965334 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-rmnn4" event={"ID":"fbe74299-6c9b-4699-87d4-309034391fa1","Type":"ContainerStarted","Data":"2b9998b5f44f0d769656f2c81a6f6ce444ba36926aadd2cfc083e3bee042ab15"} Feb 19 03:36:12.982329 master-0 kubenswrapper[33867]: I0219 03:36:12.982172 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-rmnn4" podStartSLOduration=2.982140722 podStartE2EDuration="2.982140722s" podCreationTimestamp="2026-02-19 03:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:36:12.977799599 +0000 UTC m=+778.274470220" watchObservedRunningTime="2026-02-19 03:36:12.982140722 +0000 UTC m=+778.278811343" Feb 19 03:36:13.965645 master-0 kubenswrapper[33867]: I0219 03:36:13.965609 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-rmnn4_fbe74299-6c9b-4699-87d4-309034391fa1/vg-manager/0.log" Feb 19 03:36:13.966223 master-0 kubenswrapper[33867]: I0219 03:36:13.966187 33867 generic.go:334] "Generic (PLEG): container finished" podID="fbe74299-6c9b-4699-87d4-309034391fa1" containerID="2b9998b5f44f0d769656f2c81a6f6ce444ba36926aadd2cfc083e3bee042ab15" exitCode=1 Feb 19 03:36:13.966346 master-0 kubenswrapper[33867]: I0219 03:36:13.966320 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-rmnn4" event={"ID":"fbe74299-6c9b-4699-87d4-309034391fa1","Type":"ContainerDied","Data":"2b9998b5f44f0d769656f2c81a6f6ce444ba36926aadd2cfc083e3bee042ab15"} Feb 19 03:36:13.967059 master-0 kubenswrapper[33867]: I0219 03:36:13.967012 33867 scope.go:117] "RemoveContainer" containerID="2b9998b5f44f0d769656f2c81a6f6ce444ba36926aadd2cfc083e3bee042ab15" Feb 19 03:36:14.326962 master-0 kubenswrapper[33867]: I0219 03:36:14.325324 33867 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Feb 19 03:36:14.927584 master-0 kubenswrapper[33867]: I0219 03:36:14.927420 33867 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-02-19T03:36:14.325352318Z","Handler":null,"Name":""} Feb 19 03:36:14.930641 master-0 kubenswrapper[33867]: I0219 03:36:14.930597 33867 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Feb 19 03:36:14.930764 master-0 kubenswrapper[33867]: I0219 03:36:14.930655 33867 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Feb 19 03:36:14.987634 master-0 kubenswrapper[33867]: I0219 03:36:14.987589 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-rmnn4_fbe74299-6c9b-4699-87d4-309034391fa1/vg-manager/0.log" Feb 19 03:36:14.988138 master-0 kubenswrapper[33867]: I0219 03:36:14.987652 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-rmnn4" event={"ID":"fbe74299-6c9b-4699-87d4-309034391fa1","Type":"ContainerStarted","Data":"11136db41c7677ed137e956f0e3129bedd12a1d81e39d7f20f047a7f8e72fca7"} Feb 19 03:36:21.331780 master-0 kubenswrapper[33867]: I0219 03:36:21.331691 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:21.334215 master-0 kubenswrapper[33867]: I0219 03:36:21.334117 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:22.070877 master-0 kubenswrapper[33867]: I0219 03:36:22.070803 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:22.071567 master-0 kubenswrapper[33867]: I0219 03:36:22.071503 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-rmnn4" Feb 19 03:36:22.639474 master-0 kubenswrapper[33867]: I0219 03:36:22.639369 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-69658754cd-pqnxr" podUID="565704da-61cc-4b91-87ab-4d4f50255540" containerName="console" containerID="cri-o://4ac5445e35b4ffd076492d1e6de5b8b93cdc579db6ef79a43ecd83818ad61639" gracePeriod=15 Feb 19 03:36:23.090102 master-0 kubenswrapper[33867]: I0219 03:36:23.090042 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69658754cd-pqnxr_565704da-61cc-4b91-87ab-4d4f50255540/console/0.log" Feb 19 03:36:23.090293 master-0 kubenswrapper[33867]: I0219 03:36:23.090129 33867 generic.go:334] "Generic (PLEG): container finished" podID="565704da-61cc-4b91-87ab-4d4f50255540" containerID="4ac5445e35b4ffd076492d1e6de5b8b93cdc579db6ef79a43ecd83818ad61639" exitCode=2 Feb 19 03:36:23.090293 master-0 kubenswrapper[33867]: I0219 03:36:23.090222 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69658754cd-pqnxr" event={"ID":"565704da-61cc-4b91-87ab-4d4f50255540","Type":"ContainerDied","Data":"4ac5445e35b4ffd076492d1e6de5b8b93cdc579db6ef79a43ecd83818ad61639"} Feb 19 03:36:23.146167 master-0 kubenswrapper[33867]: I0219 03:36:23.146099 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69658754cd-pqnxr_565704da-61cc-4b91-87ab-4d4f50255540/console/0.log" Feb 19 03:36:23.146710 master-0 kubenswrapper[33867]: I0219 03:36:23.146178 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:36:23.259420 master-0 kubenswrapper[33867]: I0219 03:36:23.259317 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86kqc\" (UniqueName: \"kubernetes.io/projected/565704da-61cc-4b91-87ab-4d4f50255540-kube-api-access-86kqc\") pod \"565704da-61cc-4b91-87ab-4d4f50255540\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " Feb 19 03:36:23.259420 master-0 kubenswrapper[33867]: I0219 03:36:23.259419 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-serving-cert\") pod \"565704da-61cc-4b91-87ab-4d4f50255540\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " Feb 19 03:36:23.259973 master-0 kubenswrapper[33867]: I0219 03:36:23.259442 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-oauth-config\") pod \"565704da-61cc-4b91-87ab-4d4f50255540\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " Feb 19 03:36:23.259973 master-0 kubenswrapper[33867]: I0219 03:36:23.259703 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-trusted-ca-bundle\") pod \"565704da-61cc-4b91-87ab-4d4f50255540\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " Feb 19 03:36:23.259973 master-0 kubenswrapper[33867]: I0219 03:36:23.259766 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-console-config\") pod \"565704da-61cc-4b91-87ab-4d4f50255540\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " Feb 19 03:36:23.260293 master-0 kubenswrapper[33867]: I0219 03:36:23.260271 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-console-config" (OuterVolumeSpecName: "console-config") pod "565704da-61cc-4b91-87ab-4d4f50255540" (UID: "565704da-61cc-4b91-87ab-4d4f50255540"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:36:23.260515 master-0 kubenswrapper[33867]: I0219 03:36:23.260388 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-service-ca\") pod \"565704da-61cc-4b91-87ab-4d4f50255540\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " Feb 19 03:36:23.260578 master-0 kubenswrapper[33867]: I0219 03:36:23.260553 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-oauth-serving-cert\") pod \"565704da-61cc-4b91-87ab-4d4f50255540\" (UID: \"565704da-61cc-4b91-87ab-4d4f50255540\") " Feb 19 03:36:23.260673 master-0 kubenswrapper[33867]: I0219 03:36:23.260628 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "565704da-61cc-4b91-87ab-4d4f50255540" (UID: "565704da-61cc-4b91-87ab-4d4f50255540"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:36:23.260961 master-0 kubenswrapper[33867]: I0219 03:36:23.260943 33867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-console-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:36:23.261027 master-0 kubenswrapper[33867]: I0219 03:36:23.260959 33867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:36:23.261084 master-0 kubenswrapper[33867]: I0219 03:36:23.261061 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-service-ca" (OuterVolumeSpecName: "service-ca") pod "565704da-61cc-4b91-87ab-4d4f50255540" (UID: "565704da-61cc-4b91-87ab-4d4f50255540"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:36:23.261459 master-0 kubenswrapper[33867]: I0219 03:36:23.261427 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "565704da-61cc-4b91-87ab-4d4f50255540" (UID: "565704da-61cc-4b91-87ab-4d4f50255540"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:36:23.262914 master-0 kubenswrapper[33867]: I0219 03:36:23.262843 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565704da-61cc-4b91-87ab-4d4f50255540-kube-api-access-86kqc" (OuterVolumeSpecName: "kube-api-access-86kqc") pod "565704da-61cc-4b91-87ab-4d4f50255540" (UID: "565704da-61cc-4b91-87ab-4d4f50255540"). InnerVolumeSpecName "kube-api-access-86kqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:36:23.264775 master-0 kubenswrapper[33867]: I0219 03:36:23.264136 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "565704da-61cc-4b91-87ab-4d4f50255540" (UID: "565704da-61cc-4b91-87ab-4d4f50255540"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:36:23.264775 master-0 kubenswrapper[33867]: I0219 03:36:23.264316 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "565704da-61cc-4b91-87ab-4d4f50255540" (UID: "565704da-61cc-4b91-87ab-4d4f50255540"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:36:23.363338 master-0 kubenswrapper[33867]: I0219 03:36:23.363178 33867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:36:23.363338 master-0 kubenswrapper[33867]: I0219 03:36:23.363296 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86kqc\" (UniqueName: \"kubernetes.io/projected/565704da-61cc-4b91-87ab-4d4f50255540-kube-api-access-86kqc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:36:23.363338 master-0 kubenswrapper[33867]: I0219 03:36:23.363330 33867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 19 03:36:23.363617 master-0 kubenswrapper[33867]: I0219 03:36:23.363360 33867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/565704da-61cc-4b91-87ab-4d4f50255540-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:36:23.363617 master-0 kubenswrapper[33867]: I0219 03:36:23.363388 33867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/565704da-61cc-4b91-87ab-4d4f50255540-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 19 03:36:23.935800 master-0 kubenswrapper[33867]: I0219 03:36:23.935696 33867 patch_prober.go:28] interesting pod/console-69658754cd-pqnxr container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.128.0.116:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 03:36:23.936758 master-0 kubenswrapper[33867]: I0219 03:36:23.935805 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-69658754cd-pqnxr" podUID="565704da-61cc-4b91-87ab-4d4f50255540" containerName="console" probeResult="failure" output="Get \"https://10.128.0.116:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 03:36:24.100931 master-0 kubenswrapper[33867]: I0219 03:36:24.100879 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69658754cd-pqnxr_565704da-61cc-4b91-87ab-4d4f50255540/console/0.log" Feb 19 03:36:24.101191 master-0 kubenswrapper[33867]: I0219 03:36:24.101036 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69658754cd-pqnxr" Feb 19 03:36:24.101191 master-0 kubenswrapper[33867]: I0219 03:36:24.101032 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69658754cd-pqnxr" event={"ID":"565704da-61cc-4b91-87ab-4d4f50255540","Type":"ContainerDied","Data":"522f5c4fd94734507d429295076d3ea6a64b995eb0d18d61a67eb7d301f2576a"} Feb 19 03:36:24.101191 master-0 kubenswrapper[33867]: I0219 03:36:24.101137 33867 scope.go:117] "RemoveContainer" containerID="4ac5445e35b4ffd076492d1e6de5b8b93cdc579db6ef79a43ecd83818ad61639" Feb 19 03:36:24.150602 master-0 kubenswrapper[33867]: I0219 03:36:24.146339 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-69658754cd-pqnxr"] Feb 19 03:36:24.160702 master-0 kubenswrapper[33867]: I0219 03:36:24.158130 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-69658754cd-pqnxr"] Feb 19 03:36:24.271673 master-0 kubenswrapper[33867]: I0219 03:36:24.271553 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x5zf7"] Feb 19 03:36:24.272016 master-0 kubenswrapper[33867]: E0219 03:36:24.271988 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565704da-61cc-4b91-87ab-4d4f50255540" containerName="console" Feb 19 03:36:24.272016 master-0 kubenswrapper[33867]: I0219 03:36:24.272014 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="565704da-61cc-4b91-87ab-4d4f50255540" containerName="console" Feb 19 03:36:24.272325 master-0 kubenswrapper[33867]: I0219 03:36:24.272298 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="565704da-61cc-4b91-87ab-4d4f50255540" containerName="console" Feb 19 03:36:24.274351 master-0 kubenswrapper[33867]: I0219 03:36:24.273944 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x5zf7" Feb 19 03:36:24.278534 master-0 kubenswrapper[33867]: I0219 03:36:24.278475 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 19 03:36:24.278748 master-0 kubenswrapper[33867]: I0219 03:36:24.278567 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 19 03:36:24.296473 master-0 kubenswrapper[33867]: I0219 03:36:24.296415 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x5zf7"] Feb 19 03:36:24.388268 master-0 kubenswrapper[33867]: I0219 03:36:24.388173 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz622\" (UniqueName: \"kubernetes.io/projected/803ccd3e-034b-4152-b2b5-2bf947bd84f0-kube-api-access-xz622\") pod \"openstack-operator-index-x5zf7\" (UID: \"803ccd3e-034b-4152-b2b5-2bf947bd84f0\") " pod="openstack-operators/openstack-operator-index-x5zf7" Feb 19 03:36:24.489952 master-0 kubenswrapper[33867]: I0219 03:36:24.489893 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz622\" (UniqueName: \"kubernetes.io/projected/803ccd3e-034b-4152-b2b5-2bf947bd84f0-kube-api-access-xz622\") pod \"openstack-operator-index-x5zf7\" (UID: \"803ccd3e-034b-4152-b2b5-2bf947bd84f0\") " pod="openstack-operators/openstack-operator-index-x5zf7" Feb 19 03:36:24.519716 master-0 kubenswrapper[33867]: I0219 03:36:24.519650 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz622\" (UniqueName: \"kubernetes.io/projected/803ccd3e-034b-4152-b2b5-2bf947bd84f0-kube-api-access-xz622\") pod \"openstack-operator-index-x5zf7\" (UID: \"803ccd3e-034b-4152-b2b5-2bf947bd84f0\") " pod="openstack-operators/openstack-operator-index-x5zf7" Feb 19 03:36:24.604352 master-0 kubenswrapper[33867]: I0219 03:36:24.603332 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x5zf7" Feb 19 03:36:24.970707 master-0 kubenswrapper[33867]: I0219 03:36:24.970607 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565704da-61cc-4b91-87ab-4d4f50255540" path="/var/lib/kubelet/pods/565704da-61cc-4b91-87ab-4d4f50255540/volumes" Feb 19 03:36:25.047162 master-0 kubenswrapper[33867]: W0219 03:36:25.047085 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod803ccd3e_034b_4152_b2b5_2bf947bd84f0.slice/crio-bd16ee644b23eb1ad081b48746c640767bf1b5ad365d4dbcdadec03e46f1f8d5 WatchSource:0}: Error finding container bd16ee644b23eb1ad081b48746c640767bf1b5ad365d4dbcdadec03e46f1f8d5: Status 404 returned error can't find the container with id bd16ee644b23eb1ad081b48746c640767bf1b5ad365d4dbcdadec03e46f1f8d5 Feb 19 03:36:25.047461 master-0 kubenswrapper[33867]: I0219 03:36:25.047369 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x5zf7"] Feb 19 03:36:25.110792 master-0 kubenswrapper[33867]: I0219 03:36:25.110729 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5zf7" event={"ID":"803ccd3e-034b-4152-b2b5-2bf947bd84f0","Type":"ContainerStarted","Data":"bd16ee644b23eb1ad081b48746c640767bf1b5ad365d4dbcdadec03e46f1f8d5"} Feb 19 03:36:26.125394 master-0 kubenswrapper[33867]: I0219 03:36:26.125331 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5zf7" event={"ID":"803ccd3e-034b-4152-b2b5-2bf947bd84f0","Type":"ContainerStarted","Data":"5a92da35f413f13725ffa0ffb41b71ffd3ea8d8a99fdd50abbce3082ffbf5652"} Feb 19 03:36:26.154730 master-0 kubenswrapper[33867]: I0219 03:36:26.154603 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x5zf7" podStartSLOduration=1.392834146 podStartE2EDuration="2.154571647s" podCreationTimestamp="2026-02-19 03:36:24 +0000 UTC" firstStartedPulling="2026-02-19 03:36:25.049817441 +0000 UTC m=+790.346488052" lastFinishedPulling="2026-02-19 03:36:25.811554942 +0000 UTC m=+791.108225553" observedRunningTime="2026-02-19 03:36:26.142507406 +0000 UTC m=+791.439178027" watchObservedRunningTime="2026-02-19 03:36:26.154571647 +0000 UTC m=+791.451242278" Feb 19 03:36:34.608300 master-0 kubenswrapper[33867]: I0219 03:36:34.604492 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-x5zf7" Feb 19 03:36:34.608300 master-0 kubenswrapper[33867]: I0219 03:36:34.604545 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-x5zf7" Feb 19 03:36:34.636867 master-0 kubenswrapper[33867]: I0219 03:36:34.636798 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-x5zf7" Feb 19 03:36:35.258350 master-0 kubenswrapper[33867]: I0219 03:36:35.258275 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-x5zf7" Feb 19 03:36:42.205104 master-0 kubenswrapper[33867]: I0219 03:36:42.205038 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m"] Feb 19 03:36:42.207933 master-0 kubenswrapper[33867]: I0219 03:36:42.207902 33867 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:42.219418 master-0 kubenswrapper[33867]: I0219 03:36:42.219346 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m"] Feb 19 03:36:42.348970 master-0 kubenswrapper[33867]: I0219 03:36:42.348897 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thflw\" (UniqueName: \"kubernetes.io/projected/8ac7934e-8e29-421c-bf84-6a24044ec1d2-kube-api-access-thflw\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:42.349231 master-0 kubenswrapper[33867]: I0219 03:36:42.349029 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:42.349231 master-0 kubenswrapper[33867]: I0219 03:36:42.349136 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:42.450362 master-0 kubenswrapper[33867]: I0219 03:36:42.450234 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:42.451009 master-0 kubenswrapper[33867]: I0219 03:36:42.450429 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thflw\" (UniqueName: \"kubernetes.io/projected/8ac7934e-8e29-421c-bf84-6a24044ec1d2-kube-api-access-thflw\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:42.451009 master-0 kubenswrapper[33867]: I0219 03:36:42.450505 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:42.451230 master-0 kubenswrapper[33867]: I0219 03:36:42.451173 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-bundle\") pod 
\"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:42.451738 master-0 kubenswrapper[33867]: I0219 03:36:42.451663 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:42.474202 master-0 kubenswrapper[33867]: I0219 03:36:42.474014 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thflw\" (UniqueName: \"kubernetes.io/projected/8ac7934e-8e29-421c-bf84-6a24044ec1d2-kube-api-access-thflw\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:42.529760 master-0 kubenswrapper[33867]: I0219 03:36:42.529686 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:42.991684 master-0 kubenswrapper[33867]: I0219 03:36:42.991593 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m"] Feb 19 03:36:42.999901 master-0 kubenswrapper[33867]: W0219 03:36:42.999844 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ac7934e_8e29_421c_bf84_6a24044ec1d2.slice/crio-dcf9957839bccc95ae0e95d74e3e82b9fd520255e4d475aad369f6ae7e72ea0d WatchSource:0}: Error finding container dcf9957839bccc95ae0e95d74e3e82b9fd520255e4d475aad369f6ae7e72ea0d: Status 404 returned error can't find the container with id dcf9957839bccc95ae0e95d74e3e82b9fd520255e4d475aad369f6ae7e72ea0d Feb 19 03:36:43.320369 master-0 kubenswrapper[33867]: I0219 03:36:43.303760 33867 generic.go:334] "Generic (PLEG): container finished" podID="8ac7934e-8e29-421c-bf84-6a24044ec1d2" containerID="0af70f5b8892dc57b77c584c6a614709b9e7fe62fca582e646c57018863c69b1" exitCode=0 Feb 19 03:36:43.320369 master-0 kubenswrapper[33867]: I0219 03:36:43.303821 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" event={"ID":"8ac7934e-8e29-421c-bf84-6a24044ec1d2","Type":"ContainerDied","Data":"0af70f5b8892dc57b77c584c6a614709b9e7fe62fca582e646c57018863c69b1"} Feb 19 03:36:43.320369 master-0 kubenswrapper[33867]: I0219 03:36:43.303851 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" event={"ID":"8ac7934e-8e29-421c-bf84-6a24044ec1d2","Type":"ContainerStarted","Data":"dcf9957839bccc95ae0e95d74e3e82b9fd520255e4d475aad369f6ae7e72ea0d"} Feb 19 03:36:44.319119 master-0 kubenswrapper[33867]: I0219 03:36:44.318974 33867 generic.go:334] "Generic (PLEG): container finished" podID="8ac7934e-8e29-421c-bf84-6a24044ec1d2" containerID="2535601e8e3b1e8cbc4f7fa3f0f280f49e86be99374e670a3acf9658c78a435a" exitCode=0 Feb 19 03:36:44.319119 master-0 kubenswrapper[33867]: I0219 03:36:44.319046 33867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" event={"ID":"8ac7934e-8e29-421c-bf84-6a24044ec1d2","Type":"ContainerDied","Data":"2535601e8e3b1e8cbc4f7fa3f0f280f49e86be99374e670a3acf9658c78a435a"} Feb 19 03:36:45.335521 master-0 kubenswrapper[33867]: I0219 03:36:45.335457 33867 generic.go:334] "Generic (PLEG): container finished" podID="8ac7934e-8e29-421c-bf84-6a24044ec1d2" containerID="8eb9d7fa363d72737cfa2248a229740112fe3c7061b19477160363f9f66734a9" exitCode=0 Feb 19 03:36:45.336111 master-0 kubenswrapper[33867]: I0219 03:36:45.336072 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" event={"ID":"8ac7934e-8e29-421c-bf84-6a24044ec1d2","Type":"ContainerDied","Data":"8eb9d7fa363d72737cfa2248a229740112fe3c7061b19477160363f9f66734a9"} Feb 19 03:36:46.702535 master-0 kubenswrapper[33867]: I0219 03:36:46.702488 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:46.843006 master-0 kubenswrapper[33867]: I0219 03:36:46.842908 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-bundle\") pod \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " Feb 19 03:36:46.843421 master-0 kubenswrapper[33867]: I0219 03:36:46.843227 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thflw\" (UniqueName: \"kubernetes.io/projected/8ac7934e-8e29-421c-bf84-6a24044ec1d2-kube-api-access-thflw\") pod \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " Feb 19 03:36:46.843421 master-0 kubenswrapper[33867]: I0219 03:36:46.843312 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-util\") pod \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\" (UID: \"8ac7934e-8e29-421c-bf84-6a24044ec1d2\") " Feb 19 03:36:46.843728 master-0 kubenswrapper[33867]: I0219 03:36:46.843668 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-bundle" (OuterVolumeSpecName: "bundle") pod "8ac7934e-8e29-421c-bf84-6a24044ec1d2" (UID: "8ac7934e-8e29-421c-bf84-6a24044ec1d2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:36:46.844518 master-0 kubenswrapper[33867]: I0219 03:36:46.844300 33867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:36:46.847179 master-0 kubenswrapper[33867]: I0219 03:36:46.847126 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac7934e-8e29-421c-bf84-6a24044ec1d2-kube-api-access-thflw" (OuterVolumeSpecName: "kube-api-access-thflw") pod "8ac7934e-8e29-421c-bf84-6a24044ec1d2" (UID: "8ac7934e-8e29-421c-bf84-6a24044ec1d2"). InnerVolumeSpecName "kube-api-access-thflw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:36:46.858729 master-0 kubenswrapper[33867]: I0219 03:36:46.858597 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-util" (OuterVolumeSpecName: "util") pod "8ac7934e-8e29-421c-bf84-6a24044ec1d2" (UID: "8ac7934e-8e29-421c-bf84-6a24044ec1d2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:36:46.947242 master-0 kubenswrapper[33867]: I0219 03:36:46.947046 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thflw\" (UniqueName: \"kubernetes.io/projected/8ac7934e-8e29-421c-bf84-6a24044ec1d2-kube-api-access-thflw\") on node \"master-0\" DevicePath \"\"" Feb 19 03:36:46.947242 master-0 kubenswrapper[33867]: I0219 03:36:46.947105 33867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ac7934e-8e29-421c-bf84-6a24044ec1d2-util\") on node \"master-0\" DevicePath \"\"" Feb 19 03:36:47.361987 master-0 kubenswrapper[33867]: I0219 03:36:47.361833 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" event={"ID":"8ac7934e-8e29-421c-bf84-6a24044ec1d2","Type":"ContainerDied","Data":"dcf9957839bccc95ae0e95d74e3e82b9fd520255e4d475aad369f6ae7e72ea0d"} Feb 19 03:36:47.361987 master-0 kubenswrapper[33867]: I0219 03:36:47.361899 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m" Feb 19 03:36:47.362249 master-0 kubenswrapper[33867]: I0219 03:36:47.361903 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcf9957839bccc95ae0e95d74e3e82b9fd520255e4d475aad369f6ae7e72ea0d" Feb 19 03:36:52.008511 master-0 kubenswrapper[33867]: I0219 03:36:52.008366 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk"] Feb 19 03:36:52.009304 master-0 kubenswrapper[33867]: E0219 03:36:52.008995 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac7934e-8e29-421c-bf84-6a24044ec1d2" containerName="extract" Feb 19 03:36:52.009304 master-0 kubenswrapper[33867]: I0219 03:36:52.009017 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac7934e-8e29-421c-bf84-6a24044ec1d2" containerName="extract" Feb 19 03:36:52.009304 master-0 kubenswrapper[33867]: E0219 03:36:52.009078 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac7934e-8e29-421c-bf84-6a24044ec1d2" containerName="util" Feb 19 03:36:52.009304 master-0 kubenswrapper[33867]: I0219 03:36:52.009085 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac7934e-8e29-421c-bf84-6a24044ec1d2" containerName="util" Feb 19 03:36:52.009304 master-0 kubenswrapper[33867]: E0219 03:36:52.009100 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac7934e-8e29-421c-bf84-6a24044ec1d2" containerName="pull" Feb 19 03:36:52.009304 master-0 kubenswrapper[33867]: I0219 03:36:52.009107 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac7934e-8e29-421c-bf84-6a24044ec1d2" containerName="pull" Feb 19 03:36:52.009561 master-0 kubenswrapper[33867]: I0219 03:36:52.009346 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac7934e-8e29-421c-bf84-6a24044ec1d2" containerName="extract" Feb 19 
03:36:52.010138 master-0 kubenswrapper[33867]: I0219 03:36:52.010089 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk" Feb 19 03:36:52.055474 master-0 kubenswrapper[33867]: I0219 03:36:52.055399 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k824m\" (UniqueName: \"kubernetes.io/projected/94c9ea9e-a058-4483-a058-8de6dcaa7e12-kube-api-access-k824m\") pod \"openstack-operator-controller-init-6679bf9b57-l9rmk\" (UID: \"94c9ea9e-a058-4483-a058-8de6dcaa7e12\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk" Feb 19 03:36:52.072465 master-0 kubenswrapper[33867]: I0219 03:36:52.072326 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk"] Feb 19 03:36:52.158283 master-0 kubenswrapper[33867]: I0219 03:36:52.157640 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k824m\" (UniqueName: \"kubernetes.io/projected/94c9ea9e-a058-4483-a058-8de6dcaa7e12-kube-api-access-k824m\") pod \"openstack-operator-controller-init-6679bf9b57-l9rmk\" (UID: \"94c9ea9e-a058-4483-a058-8de6dcaa7e12\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk" Feb 19 03:36:52.178567 master-0 kubenswrapper[33867]: I0219 03:36:52.178498 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k824m\" (UniqueName: \"kubernetes.io/projected/94c9ea9e-a058-4483-a058-8de6dcaa7e12-kube-api-access-k824m\") pod \"openstack-operator-controller-init-6679bf9b57-l9rmk\" (UID: \"94c9ea9e-a058-4483-a058-8de6dcaa7e12\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk" Feb 19 03:36:52.340987 master-0 kubenswrapper[33867]: I0219 03:36:52.340839 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk" Feb 19 03:36:52.822346 master-0 kubenswrapper[33867]: I0219 03:36:52.820483 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk"] Feb 19 03:36:53.420212 master-0 kubenswrapper[33867]: I0219 03:36:53.420155 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk" event={"ID":"94c9ea9e-a058-4483-a058-8de6dcaa7e12","Type":"ContainerStarted","Data":"77925953084bde1265e2b0956e3b63cfc498c8178bef0d048bedd04677773b26"} Feb 19 03:36:58.480395 master-0 kubenswrapper[33867]: I0219 03:36:58.480318 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk" event={"ID":"94c9ea9e-a058-4483-a058-8de6dcaa7e12","Type":"ContainerStarted","Data":"d1770da6917a4031d30d56cafa5229ddd845fe14eb828cffcfc52c628f2f0a15"} Feb 19 03:36:58.481009 master-0 kubenswrapper[33867]: I0219 03:36:58.480543 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk" Feb 19 03:36:58.524942 master-0 kubenswrapper[33867]: I0219 03:36:58.524807 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk" podStartSLOduration=2.91476977 podStartE2EDuration="7.524779679s" podCreationTimestamp="2026-02-19 03:36:51 +0000 UTC" firstStartedPulling="2026-02-19 03:36:52.830307575 +0000 UTC m=+818.126978186" lastFinishedPulling="2026-02-19 03:36:57.440317484 +0000 UTC m=+822.736988095" observedRunningTime="2026-02-19 03:36:58.51492297 +0000 UTC m=+823.811593591" watchObservedRunningTime="2026-02-19 03:36:58.524779679 +0000 UTC m=+823.821450300" Feb 19 03:37:02.344756 master-0 kubenswrapper[33867]: I0219 03:37:02.344652 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk" Feb 19 03:37:23.020918 master-0 kubenswrapper[33867]: I0219 03:37:23.018343 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69"] Feb 19 03:37:23.020918 master-0 kubenswrapper[33867]: I0219 03:37:23.019948 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69" Feb 19 03:37:23.028556 master-0 kubenswrapper[33867]: I0219 03:37:23.028476 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk"] Feb 19 03:37:23.032397 master-0 kubenswrapper[33867]: I0219 03:37:23.029810 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk" Feb 19 03:37:23.041222 master-0 kubenswrapper[33867]: I0219 03:37:23.040998 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m"] Feb 19 03:37:23.042760 master-0 kubenswrapper[33867]: I0219 03:37:23.042723 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m" Feb 19 03:37:23.089075 master-0 kubenswrapper[33867]: I0219 03:37:23.087953 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69"] Feb 19 03:37:23.089363 master-0 kubenswrapper[33867]: I0219 03:37:23.089162 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtjm7\" (UniqueName: \"kubernetes.io/projected/81f513e3-9d43-4ca5-a960-a057b6284bf8-kube-api-access-rtjm7\") pod \"barbican-operator-controller-manager-868647ff47-k6f69\" (UID: \"81f513e3-9d43-4ca5-a960-a057b6284bf8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69" Feb 19 03:37:23.089363 master-0 kubenswrapper[33867]: I0219 03:37:23.089223 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czlsn\" (UniqueName: \"kubernetes.io/projected/af7e58f8-89c5-400f-b73c-5eb73727e8c7-kube-api-access-czlsn\") pod \"cinder-operator-controller-manager-5d946d989d-thsdk\" (UID: \"af7e58f8-89c5-400f-b73c-5eb73727e8c7\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk" Feb 19 03:37:23.089363 master-0 kubenswrapper[33867]: I0219 03:37:23.089302 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qndl\" (UniqueName: \"kubernetes.io/projected/26bcada5-2616-4d6f-82d6-0659611454af-kube-api-access-5qndl\") pod \"designate-operator-controller-manager-6d8bf5c495-fwz4m\" (UID: \"26bcada5-2616-4d6f-82d6-0659611454af\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m" Feb 19 03:37:23.115634 master-0 kubenswrapper[33867]: I0219 03:37:23.115170 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk"] Feb 19 03:37:23.129535 master-0 kubenswrapper[33867]: I0219 03:37:23.118332 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m"] Feb 19 03:37:23.135011 master-0 kubenswrapper[33867]: I0219 03:37:23.134948 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2"] Feb 19 03:37:23.147256 master-0 kubenswrapper[33867]: I0219 03:37:23.147132 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2" Feb 19 03:37:23.153349 master-0 kubenswrapper[33867]: I0219 03:37:23.153303 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2"] Feb 19 03:37:23.173705 master-0 kubenswrapper[33867]: I0219 03:37:23.171294 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v"] Feb 19 03:37:23.173705 master-0 kubenswrapper[33867]: I0219 03:37:23.173625 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v" Feb 19 03:37:23.191371 master-0 kubenswrapper[33867]: I0219 03:37:23.191308 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtjm7\" (UniqueName: \"kubernetes.io/projected/81f513e3-9d43-4ca5-a960-a057b6284bf8-kube-api-access-rtjm7\") pod \"barbican-operator-controller-manager-868647ff47-k6f69\" (UID: \"81f513e3-9d43-4ca5-a960-a057b6284bf8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69" Feb 19 03:37:23.194822 master-0 kubenswrapper[33867]: I0219 03:37:23.194759 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmf5\" (UniqueName: \"kubernetes.io/projected/4d354ad0-8588-4913-8189-ad94abd86af5-kube-api-access-fvmf5\") pod \"heat-operator-controller-manager-69f49c598c-rpb8v\" (UID: \"4d354ad0-8588-4913-8189-ad94abd86af5\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v" Feb 19 03:37:23.194941 master-0 kubenswrapper[33867]: I0219 03:37:23.194855 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czlsn\" (UniqueName: \"kubernetes.io/projected/af7e58f8-89c5-400f-b73c-5eb73727e8c7-kube-api-access-czlsn\") pod \"cinder-operator-controller-manager-5d946d989d-thsdk\" (UID: \"af7e58f8-89c5-400f-b73c-5eb73727e8c7\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk" Feb 19 03:37:23.194941 master-0 kubenswrapper[33867]: I0219 03:37:23.194888 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vmbh\" (UniqueName: \"kubernetes.io/projected/1f9c99f7-4fe4-4fdf-989d-f17588d7ffe3-kube-api-access-6vmbh\") pod \"glance-operator-controller-manager-77987464f4-tp2t2\" (UID: \"1f9c99f7-4fe4-4fdf-989d-f17588d7ffe3\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2" Feb 19 03:37:23.195049 master-0 kubenswrapper[33867]: I0219 03:37:23.194979 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qndl\" (UniqueName: \"kubernetes.io/projected/26bcada5-2616-4d6f-82d6-0659611454af-kube-api-access-5qndl\") pod \"designate-operator-controller-manager-6d8bf5c495-fwz4m\" (UID: \"26bcada5-2616-4d6f-82d6-0659611454af\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m" Feb 19 03:37:23.261305 master-0 kubenswrapper[33867]: I0219 03:37:23.255298 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtjm7\" (UniqueName: \"kubernetes.io/projected/81f513e3-9d43-4ca5-a960-a057b6284bf8-kube-api-access-rtjm7\") pod \"barbican-operator-controller-manager-868647ff47-k6f69\" (UID: \"81f513e3-9d43-4ca5-a960-a057b6284bf8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69" Feb 19 03:37:23.261305 master-0 kubenswrapper[33867]: I0219 03:37:23.257506 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qndl\" (UniqueName: \"kubernetes.io/projected/26bcada5-2616-4d6f-82d6-0659611454af-kube-api-access-5qndl\") pod \"designate-operator-controller-manager-6d8bf5c495-fwz4m\" (UID: \"26bcada5-2616-4d6f-82d6-0659611454af\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m" Feb 19 03:37:23.261305 master-0 kubenswrapper[33867]: I0219 03:37:23.259883 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czlsn\" (UniqueName: \"kubernetes.io/projected/af7e58f8-89c5-400f-b73c-5eb73727e8c7-kube-api-access-czlsn\") pod \"cinder-operator-controller-manager-5d946d989d-thsdk\" (UID: \"af7e58f8-89c5-400f-b73c-5eb73727e8c7\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk" Feb 19 03:37:23.295984 master-0 kubenswrapper[33867]: I0219 03:37:23.295862 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v"] Feb 19 03:37:23.297427 master-0 kubenswrapper[33867]: I0219 03:37:23.297399 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvmf5\" (UniqueName: \"kubernetes.io/projected/4d354ad0-8588-4913-8189-ad94abd86af5-kube-api-access-fvmf5\") pod \"heat-operator-controller-manager-69f49c598c-rpb8v\" (UID: \"4d354ad0-8588-4913-8189-ad94abd86af5\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v" Feb 19 03:37:23.297731 master-0 kubenswrapper[33867]: I0219 03:37:23.297705 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vmbh\" (UniqueName: \"kubernetes.io/projected/1f9c99f7-4fe4-4fdf-989d-f17588d7ffe3-kube-api-access-6vmbh\") pod \"glance-operator-controller-manager-77987464f4-tp2t2\" (UID: \"1f9c99f7-4fe4-4fdf-989d-f17588d7ffe3\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2" Feb 19 03:37:23.328469 master-0 kubenswrapper[33867]: I0219 03:37:23.328423 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vmbh\" (UniqueName: \"kubernetes.io/projected/1f9c99f7-4fe4-4fdf-989d-f17588d7ffe3-kube-api-access-6vmbh\") pod \"glance-operator-controller-manager-77987464f4-tp2t2\" (UID: \"1f9c99f7-4fe4-4fdf-989d-f17588d7ffe3\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2" Feb 19 03:37:23.340511 master-0 kubenswrapper[33867]: I0219 03:37:23.340445 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h"] Feb 19 03:37:23.343076 master-0 kubenswrapper[33867]: I0219 03:37:23.342339 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h" Feb 19 03:37:23.372285 master-0 kubenswrapper[33867]: I0219 03:37:23.348175 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvmf5\" (UniqueName: \"kubernetes.io/projected/4d354ad0-8588-4913-8189-ad94abd86af5-kube-api-access-fvmf5\") pod \"heat-operator-controller-manager-69f49c598c-rpb8v\" (UID: \"4d354ad0-8588-4913-8189-ad94abd86af5\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v" Feb 19 03:37:23.379169 master-0 kubenswrapper[33867]: I0219 03:37:23.372714 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h"] Feb 19 03:37:23.389217 master-0 kubenswrapper[33867]: I0219 03:37:23.386289 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69" Feb 19 03:37:23.408400 master-0 kubenswrapper[33867]: I0219 03:37:23.407117 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rmv\" (UniqueName: \"kubernetes.io/projected/1dbd1105-8bb5-4010-9ec9-58c2dd1f35e9-kube-api-access-f8rmv\") pod \"horizon-operator-controller-manager-5b9b8895d5-t8q5h\" (UID: \"1dbd1105-8bb5-4010-9ec9-58c2dd1f35e9\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h" Feb 19 03:37:23.412342 master-0 kubenswrapper[33867]: I0219 03:37:23.412150 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk"] Feb 19 03:37:23.416856 master-0 kubenswrapper[33867]: I0219 03:37:23.415506 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk" Feb 19 03:37:23.420150 master-0 kubenswrapper[33867]: I0219 03:37:23.420099 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:23.429293 master-0 kubenswrapper[33867]: I0219 03:37:23.421454 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk"] Feb 19 03:37:23.429293 master-0 kubenswrapper[33867]: I0219 03:37:23.424313 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 03:37:23.439880 master-0 kubenswrapper[33867]: I0219 03:37:23.439197 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m" Feb 19 03:37:23.454346 master-0 kubenswrapper[33867]: I0219 03:37:23.447064 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d"] Feb 19 03:37:23.454346 master-0 kubenswrapper[33867]: I0219 03:37:23.448480 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d" Feb 19 03:37:23.478582 master-0 kubenswrapper[33867]: I0219 03:37:23.478373 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz"] Feb 19 03:37:23.494533 master-0 kubenswrapper[33867]: I0219 03:37:23.494474 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz" Feb 19 03:37:23.515671 master-0 kubenswrapper[33867]: I0219 03:37:23.506845 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d"] Feb 19 03:37:23.515671 master-0 kubenswrapper[33867]: I0219 03:37:23.509701 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7dr4\" (UniqueName: \"kubernetes.io/projected/aa4296cf-041c-4133-a2d9-8a0becd98502-kube-api-access-z7dr4\") pod \"ironic-operator-controller-manager-554564d7fc-trv7d\" (UID: \"aa4296cf-041c-4133-a2d9-8a0becd98502\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d" Feb 19 03:37:23.515671 master-0 kubenswrapper[33867]: I0219 03:37:23.509810 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5j4x\" (UniqueName: \"kubernetes.io/projected/1554c3da-f309-402e-8d61-c12b1ef616bf-kube-api-access-v5j4x\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:23.515671 master-0 kubenswrapper[33867]: I0219 03:37:23.509873 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxbrd\" (UniqueName: \"kubernetes.io/projected/5e2af2a9-057f-42b0-aed1-5473728c4a6d-kube-api-access-sxbrd\") pod \"keystone-operator-controller-manager-b4d948c87-8wkzz\" (UID: \"5e2af2a9-057f-42b0-aed1-5473728c4a6d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz" Feb 19 03:37:23.515671 master-0 kubenswrapper[33867]: I0219 03:37:23.509954 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8rmv\" (UniqueName: \"kubernetes.io/projected/1dbd1105-8bb5-4010-9ec9-58c2dd1f35e9-kube-api-access-f8rmv\") pod \"horizon-operator-controller-manager-5b9b8895d5-t8q5h\" (UID: \"1dbd1105-8bb5-4010-9ec9-58c2dd1f35e9\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h" Feb 19 03:37:23.515671 master-0 kubenswrapper[33867]: I0219 03:37:23.509998 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:23.515671 master-0 kubenswrapper[33867]: I0219 03:37:23.513438 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2" Feb 19 03:37:23.546419 master-0 kubenswrapper[33867]: I0219 03:37:23.535034 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v" Feb 19 03:37:23.546419 master-0 kubenswrapper[33867]: I0219 03:37:23.542917 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz"] Feb 19 03:37:23.561885 master-0 kubenswrapper[33867]: I0219 03:37:23.556861 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj"] Feb 19 03:37:23.561885 master-0 kubenswrapper[33867]: I0219 03:37:23.559209 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8rmv\" (UniqueName: \"kubernetes.io/projected/1dbd1105-8bb5-4010-9ec9-58c2dd1f35e9-kube-api-access-f8rmv\") pod \"horizon-operator-controller-manager-5b9b8895d5-t8q5h\" (UID: \"1dbd1105-8bb5-4010-9ec9-58c2dd1f35e9\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h" Feb 19 03:37:23.570370 master-0 kubenswrapper[33867]: I0219 03:37:23.566072 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj" Feb 19 03:37:23.599172 master-0 kubenswrapper[33867]: I0219 03:37:23.599066 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj"] Feb 19 03:37:23.641317 master-0 kubenswrapper[33867]: I0219 03:37:23.623870 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd"] Feb 19 03:37:23.641317 master-0 kubenswrapper[33867]: I0219 03:37:23.625373 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd" Feb 19 03:37:23.641317 master-0 kubenswrapper[33867]: I0219 03:37:23.635640 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5j4x\" (UniqueName: \"kubernetes.io/projected/1554c3da-f309-402e-8d61-c12b1ef616bf-kube-api-access-v5j4x\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:23.641317 master-0 kubenswrapper[33867]: I0219 03:37:23.635824 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxbrd\" (UniqueName: \"kubernetes.io/projected/5e2af2a9-057f-42b0-aed1-5473728c4a6d-kube-api-access-sxbrd\") pod \"keystone-operator-controller-manager-b4d948c87-8wkzz\" (UID: \"5e2af2a9-057f-42b0-aed1-5473728c4a6d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz" Feb 19 03:37:23.641317 master-0 kubenswrapper[33867]: I0219 03:37:23.639690 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:23.641317 master-0 kubenswrapper[33867]: I0219 03:37:23.639833 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7dr4\" (UniqueName: \"kubernetes.io/projected/aa4296cf-041c-4133-a2d9-8a0becd98502-kube-api-access-z7dr4\") pod 
\"ironic-operator-controller-manager-554564d7fc-trv7d\" (UID: \"aa4296cf-041c-4133-a2d9-8a0becd98502\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d" Feb 19 03:37:23.642867 master-0 kubenswrapper[33867]: E0219 03:37:23.642207 33867 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:23.642867 master-0 kubenswrapper[33867]: E0219 03:37:23.642486 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert podName:1554c3da-f309-402e-8d61-c12b1ef616bf nodeName:}" failed. No retries permitted until 2026-02-19 03:37:24.142453146 +0000 UTC m=+849.439123757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert") pod "infra-operator-controller-manager-5f879c76b6-nzsnk" (UID: "1554c3da-f309-402e-8d61-c12b1ef616bf") : secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:23.664784 master-0 kubenswrapper[33867]: I0219 03:37:23.649271 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd"] Feb 19 03:37:23.664784 master-0 kubenswrapper[33867]: I0219 03:37:23.664609 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs"] Feb 19 03:37:23.666120 master-0 kubenswrapper[33867]: I0219 03:37:23.666051 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs" Feb 19 03:37:23.669644 master-0 kubenswrapper[33867]: I0219 03:37:23.669559 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7dr4\" (UniqueName: \"kubernetes.io/projected/aa4296cf-041c-4133-a2d9-8a0becd98502-kube-api-access-z7dr4\") pod \"ironic-operator-controller-manager-554564d7fc-trv7d\" (UID: \"aa4296cf-041c-4133-a2d9-8a0becd98502\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d" Feb 19 03:37:23.669766 master-0 kubenswrapper[33867]: I0219 03:37:23.669724 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxbrd\" (UniqueName: \"kubernetes.io/projected/5e2af2a9-057f-42b0-aed1-5473728c4a6d-kube-api-access-sxbrd\") pod \"keystone-operator-controller-manager-b4d948c87-8wkzz\" (UID: \"5e2af2a9-057f-42b0-aed1-5473728c4a6d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz" Feb 19 03:37:23.669957 master-0 kubenswrapper[33867]: I0219 03:37:23.669910 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5j4x\" (UniqueName: \"kubernetes.io/projected/1554c3da-f309-402e-8d61-c12b1ef616bf-kube-api-access-v5j4x\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:23.735247 master-0 kubenswrapper[33867]: I0219 03:37:23.734246 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs"] Feb 19 03:37:23.743587 master-0 kubenswrapper[33867]: I0219 03:37:23.743525 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mxjl\" (UniqueName: 
\"kubernetes.io/projected/146322c9-d8f1-4aa5-af40-313a3226f9f0-kube-api-access-9mxjl\") pod \"manila-operator-controller-manager-54f6768c69-vs4pj\" (UID: \"146322c9-d8f1-4aa5-af40-313a3226f9f0\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj" Feb 19 03:37:23.743700 master-0 kubenswrapper[33867]: I0219 03:37:23.743686 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlrrj\" (UniqueName: \"kubernetes.io/projected/8379917d-eee7-433f-a617-e845e9d59f16-kube-api-access-mlrrj\") pod \"mariadb-operator-controller-manager-6994f66f48-sfhmd\" (UID: \"8379917d-eee7-433f-a617-e845e9d59f16\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd" Feb 19 03:37:23.749949 master-0 kubenswrapper[33867]: I0219 03:37:23.745957 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm"] Feb 19 03:37:23.762610 master-0 kubenswrapper[33867]: I0219 03:37:23.762533 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm"] Feb 19 03:37:23.762825 master-0 kubenswrapper[33867]: I0219 03:37:23.762730 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm" Feb 19 03:37:23.814735 master-0 kubenswrapper[33867]: I0219 03:37:23.814639 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw"] Feb 19 03:37:23.816018 master-0 kubenswrapper[33867]: I0219 03:37:23.815961 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h" Feb 19 03:37:23.819981 master-0 kubenswrapper[33867]: I0219 03:37:23.819886 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw" Feb 19 03:37:23.848761 master-0 kubenswrapper[33867]: I0219 03:37:23.845691 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw"] Feb 19 03:37:23.856292 master-0 kubenswrapper[33867]: I0219 03:37:23.855666 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mxjl\" (UniqueName: \"kubernetes.io/projected/146322c9-d8f1-4aa5-af40-313a3226f9f0-kube-api-access-9mxjl\") pod \"manila-operator-controller-manager-54f6768c69-vs4pj\" (UID: \"146322c9-d8f1-4aa5-af40-313a3226f9f0\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj" Feb 19 03:37:23.856292 master-0 kubenswrapper[33867]: I0219 03:37:23.855948 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xjpn\" (UniqueName: \"kubernetes.io/projected/8e845974-687e-4f15-961b-edf71c7dc316-kube-api-access-8xjpn\") pod \"neutron-operator-controller-manager-64ddbf8bb-m22fs\" (UID: \"8e845974-687e-4f15-961b-edf71c7dc316\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs" Feb 19 03:37:23.856292 master-0 kubenswrapper[33867]: I0219 03:37:23.856012 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlrrj\" (UniqueName: \"kubernetes.io/projected/8379917d-eee7-433f-a617-e845e9d59f16-kube-api-access-mlrrj\") pod \"mariadb-operator-controller-manager-6994f66f48-sfhmd\" (UID: \"8379917d-eee7-433f-a617-e845e9d59f16\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd" Feb 19 03:37:23.869669 master-0 kubenswrapper[33867]: I0219 03:37:23.868851 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d" Feb 19 03:37:23.870562 master-0 kubenswrapper[33867]: I0219 03:37:23.870517 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx"] Feb 19 03:37:23.890169 master-0 kubenswrapper[33867]: I0219 03:37:23.887287 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlrrj\" (UniqueName: \"kubernetes.io/projected/8379917d-eee7-433f-a617-e845e9d59f16-kube-api-access-mlrrj\") pod \"mariadb-operator-controller-manager-6994f66f48-sfhmd\" (UID: \"8379917d-eee7-433f-a617-e845e9d59f16\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd" Feb 19 03:37:23.890169 master-0 kubenswrapper[33867]: I0219 03:37:23.888052 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mxjl\" (UniqueName: \"kubernetes.io/projected/146322c9-d8f1-4aa5-af40-313a3226f9f0-kube-api-access-9mxjl\") pod \"manila-operator-controller-manager-54f6768c69-vs4pj\" (UID: \"146322c9-d8f1-4aa5-af40-313a3226f9f0\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj" Feb 19 03:37:23.915451 master-0 kubenswrapper[33867]: I0219 03:37:23.910946 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k"] Feb 19 03:37:23.915912 master-0 kubenswrapper[33867]: I0219 03:37:23.911193 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:23.920965 master-0 kubenswrapper[33867]: I0219 03:37:23.920918 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 19 03:37:23.921739 master-0 kubenswrapper[33867]: I0219 03:37:23.921398 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k" Feb 19 03:37:23.922210 master-0 kubenswrapper[33867]: I0219 03:37:23.922135 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k"] Feb 19 03:37:23.947278 master-0 kubenswrapper[33867]: I0219 03:37:23.944766 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz" Feb 19 03:37:23.959174 master-0 kubenswrapper[33867]: I0219 03:37:23.958926 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4qm9\" (UniqueName: \"kubernetes.io/projected/b3da7145-0056-4bed-8e77-5a257550f8da-kube-api-access-n4qm9\") pod \"octavia-operator-controller-manager-69f8888797-zgxpw\" (UID: \"b3da7145-0056-4bed-8e77-5a257550f8da\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw" Feb 19 03:37:23.959174 master-0 kubenswrapper[33867]: I0219 03:37:23.959098 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqgtg\" (UniqueName: \"kubernetes.io/projected/3412b3eb-21b6-4166-9a78-b7c73f91d708-kube-api-access-fqgtg\") pod \"nova-operator-controller-manager-567668f5cf-cwblm\" (UID: \"3412b3eb-21b6-4166-9a78-b7c73f91d708\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm" Feb 19 03:37:23.959174 master-0 kubenswrapper[33867]: I0219 03:37:23.959183 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xjpn\" (UniqueName: \"kubernetes.io/projected/8e845974-687e-4f15-961b-edf71c7dc316-kube-api-access-8xjpn\") pod \"neutron-operator-controller-manager-64ddbf8bb-m22fs\" (UID: \"8e845974-687e-4f15-961b-edf71c7dc316\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs" Feb 19 03:37:23.985265 master-0 kubenswrapper[33867]: I0219 03:37:23.985214 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj" Feb 19 03:37:23.998728 master-0 kubenswrapper[33867]: I0219 03:37:23.998446 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xjpn\" (UniqueName: \"kubernetes.io/projected/8e845974-687e-4f15-961b-edf71c7dc316-kube-api-access-8xjpn\") pod \"neutron-operator-controller-manager-64ddbf8bb-m22fs\" (UID: \"8e845974-687e-4f15-961b-edf71c7dc316\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs" Feb 19 03:37:24.011520 master-0 kubenswrapper[33867]: I0219 03:37:24.009424 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx"] Feb 19 03:37:24.018591 master-0 kubenswrapper[33867]: I0219 03:37:24.017574 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd" Feb 19 03:37:24.039803 master-0 kubenswrapper[33867]: I0219 03:37:24.039063 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs" Feb 19 03:37:24.064178 master-0 kubenswrapper[33867]: I0219 03:37:24.061737 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mck\" (UniqueName: \"kubernetes.io/projected/e3c70606-b8cd-4216-98e7-d73c7d31b443-kube-api-access-w5mck\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:24.064178 master-0 kubenswrapper[33867]: I0219 03:37:24.061830 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqgtg\" (UniqueName: \"kubernetes.io/projected/3412b3eb-21b6-4166-9a78-b7c73f91d708-kube-api-access-fqgtg\") pod \"nova-operator-controller-manager-567668f5cf-cwblm\" (UID: \"3412b3eb-21b6-4166-9a78-b7c73f91d708\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm" Feb 19 03:37:24.064178 master-0 kubenswrapper[33867]: I0219 03:37:24.061895 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zw9s\" (UniqueName: \"kubernetes.io/projected/3d0c427a-ffc4-4bea-a695-f1c50efb4c79-kube-api-access-4zw9s\") pod \"ovn-operator-controller-manager-d44cf6b75-hv28k\" (UID: \"3d0c427a-ffc4-4bea-a695-f1c50efb4c79\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k" Feb 19 03:37:24.064178 master-0 kubenswrapper[33867]: I0219 03:37:24.061925 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:24.064178 master-0 kubenswrapper[33867]: I0219 03:37:24.061971 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qm9\" (UniqueName: \"kubernetes.io/projected/b3da7145-0056-4bed-8e77-5a257550f8da-kube-api-access-n4qm9\") pod \"octavia-operator-controller-manager-69f8888797-zgxpw\" (UID: \"b3da7145-0056-4bed-8e77-5a257550f8da\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw" Feb 19 03:37:24.097657 master-0 kubenswrapper[33867]: I0219 03:37:24.096756 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqgtg\" (UniqueName: \"kubernetes.io/projected/3412b3eb-21b6-4166-9a78-b7c73f91d708-kube-api-access-fqgtg\") pod \"nova-operator-controller-manager-567668f5cf-cwblm\" (UID: \"3412b3eb-21b6-4166-9a78-b7c73f91d708\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm" Feb 19 03:37:24.099007 master-0 kubenswrapper[33867]: I0219 03:37:24.098946 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4qm9\" (UniqueName: \"kubernetes.io/projected/b3da7145-0056-4bed-8e77-5a257550f8da-kube-api-access-n4qm9\") pod 
\"octavia-operator-controller-manager-69f8888797-zgxpw\" (UID: \"b3da7145-0056-4bed-8e77-5a257550f8da\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw" Feb 19 03:37:24.118728 master-0 kubenswrapper[33867]: I0219 03:37:24.118650 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8"] Feb 19 03:37:24.120274 master-0 kubenswrapper[33867]: I0219 03:37:24.120228 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8" Feb 19 03:37:24.139502 master-0 kubenswrapper[33867]: I0219 03:37:24.139427 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-hqd26"] Feb 19 03:37:24.141375 master-0 kubenswrapper[33867]: I0219 03:37:24.141352 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hqd26" Feb 19 03:37:24.147789 master-0 kubenswrapper[33867]: I0219 03:37:24.147755 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8"] Feb 19 03:37:24.163513 master-0 kubenswrapper[33867]: I0219 03:37:24.157067 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-hqd26"] Feb 19 03:37:24.167912 master-0 kubenswrapper[33867]: I0219 03:37:24.164949 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:24.167912 master-0 kubenswrapper[33867]: I0219 03:37:24.165031 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b66d7\" (UniqueName: \"kubernetes.io/projected/e2e7ed89-284a-4147-bcad-ec2520b9c64c-kube-api-access-b66d7\") pod \"placement-operator-controller-manager-8497b45c89-67lp8\" (UID: \"e2e7ed89-284a-4147-bcad-ec2520b9c64c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8" Feb 19 03:37:24.167912 master-0 kubenswrapper[33867]: I0219 03:37:24.165074 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5mck\" (UniqueName: \"kubernetes.io/projected/e3c70606-b8cd-4216-98e7-d73c7d31b443-kube-api-access-w5mck\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:24.167912 master-0 kubenswrapper[33867]: I0219 03:37:24.165191 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zw9s\" (UniqueName: \"kubernetes.io/projected/3d0c427a-ffc4-4bea-a695-f1c50efb4c79-kube-api-access-4zw9s\") pod \"ovn-operator-controller-manager-d44cf6b75-hv28k\" (UID: \"3d0c427a-ffc4-4bea-a695-f1c50efb4c79\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k" Feb 19 03:37:24.167912 master-0 kubenswrapper[33867]: I0219 03:37:24.165218 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:24.167912 master-0 kubenswrapper[33867]: E0219 03:37:24.165507 33867 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 03:37:24.167912 master-0 kubenswrapper[33867]: E0219 03:37:24.165583 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert podName:e3c70606-b8cd-4216-98e7-d73c7d31b443 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:24.665560298 +0000 UTC m=+849.962230909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" (UID: "e3c70606-b8cd-4216-98e7-d73c7d31b443") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 03:37:24.167912 master-0 kubenswrapper[33867]: E0219 03:37:24.165918 33867 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:24.167912 master-0 kubenswrapper[33867]: E0219 03:37:24.165955 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert podName:1554c3da-f309-402e-8d61-c12b1ef616bf nodeName:}" failed. No retries permitted until 2026-02-19 03:37:25.165943149 +0000 UTC m=+850.462613760 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert") pod "infra-operator-controller-manager-5f879c76b6-nzsnk" (UID: "1554c3da-f309-402e-8d61-c12b1ef616bf") : secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:24.170493 master-0 kubenswrapper[33867]: I0219 03:37:24.170231 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g"] Feb 19 03:37:24.178839 master-0 kubenswrapper[33867]: I0219 03:37:24.176103 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g" Feb 19 03:37:24.185437 master-0 kubenswrapper[33867]: I0219 03:37:24.185393 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5mck\" (UniqueName: \"kubernetes.io/projected/e3c70606-b8cd-4216-98e7-d73c7d31b443-kube-api-access-w5mck\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:24.201855 master-0 kubenswrapper[33867]: I0219 03:37:24.200936 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm" Feb 19 03:37:24.205159 master-0 kubenswrapper[33867]: I0219 03:37:24.205126 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zw9s\" (UniqueName: \"kubernetes.io/projected/3d0c427a-ffc4-4bea-a695-f1c50efb4c79-kube-api-access-4zw9s\") pod \"ovn-operator-controller-manager-d44cf6b75-hv28k\" (UID: \"3d0c427a-ffc4-4bea-a695-f1c50efb4c79\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k" Feb 19 03:37:24.208457 master-0 kubenswrapper[33867]: I0219 03:37:24.208385 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g"] Feb 19 03:37:24.241181 master-0 kubenswrapper[33867]: I0219 03:37:24.241121 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw" Feb 19 03:37:24.266294 master-0 kubenswrapper[33867]: I0219 03:37:24.264502 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-dxk94"] Feb 19 03:37:24.279245 master-0 kubenswrapper[33867]: I0219 03:37:24.274338 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-dxk94" Feb 19 03:37:24.291978 master-0 kubenswrapper[33867]: I0219 03:37:24.284281 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnnnc\" (UniqueName: \"kubernetes.io/projected/e9dacca4-e34c-4b78-97e3-c12b06b3738b-kube-api-access-cnnnc\") pod \"swift-operator-controller-manager-68f46476f-hqd26\" (UID: \"e9dacca4-e34c-4b78-97e3-c12b06b3738b\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-hqd26" Feb 19 03:37:24.291978 master-0 kubenswrapper[33867]: I0219 03:37:24.284640 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b66d7\" (UniqueName: \"kubernetes.io/projected/e2e7ed89-284a-4147-bcad-ec2520b9c64c-kube-api-access-b66d7\") pod \"placement-operator-controller-manager-8497b45c89-67lp8\" (UID: \"e2e7ed89-284a-4147-bcad-ec2520b9c64c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8" Feb 19 03:37:24.317612 master-0 kubenswrapper[33867]: I0219 03:37:24.317141 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-dxk94"] Feb 19 03:37:24.336285 master-0 kubenswrapper[33867]: I0219 03:37:24.333593 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk"] Feb 19 03:37:24.336285 master-0 kubenswrapper[33867]: I0219 03:37:24.335731 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk" Feb 19 03:37:24.340051 master-0 kubenswrapper[33867]: I0219 03:37:24.340000 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b66d7\" (UniqueName: \"kubernetes.io/projected/e2e7ed89-284a-4147-bcad-ec2520b9c64c-kube-api-access-b66d7\") pod \"placement-operator-controller-manager-8497b45c89-67lp8\" (UID: \"e2e7ed89-284a-4147-bcad-ec2520b9c64c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8" Feb 19 03:37:24.353054 master-0 kubenswrapper[33867]: I0219 03:37:24.352990 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k" Feb 19 03:37:24.358945 master-0 kubenswrapper[33867]: I0219 03:37:24.358719 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk"] Feb 19 03:37:24.387688 master-0 kubenswrapper[33867]: I0219 03:37:24.387597 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52lnw\" (UniqueName: \"kubernetes.io/projected/bd663e7b-0774-48b5-bf36-9b28f553c2f8-kube-api-access-52lnw\") pod \"test-operator-controller-manager-7866795846-dxk94\" (UID: \"bd663e7b-0774-48b5-bf36-9b28f553c2f8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-dxk94" Feb 19 03:37:24.387688 master-0 kubenswrapper[33867]: I0219 03:37:24.387682 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85h5k\" (UniqueName: \"kubernetes.io/projected/6776ed22-9e69-4556-b092-fc78542efe4a-kube-api-access-85h5k\") pod \"watcher-operator-controller-manager-5db88f68c-k82hk\" (UID: \"6776ed22-9e69-4556-b092-fc78542efe4a\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk" Feb 19 03:37:24.387922 master-0 kubenswrapper[33867]: I0219 03:37:24.387713 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnnnc\" (UniqueName: \"kubernetes.io/projected/e9dacca4-e34c-4b78-97e3-c12b06b3738b-kube-api-access-cnnnc\") pod \"swift-operator-controller-manager-68f46476f-hqd26\" (UID: \"e9dacca4-e34c-4b78-97e3-c12b06b3738b\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-hqd26" Feb 19 03:37:24.387922 master-0 kubenswrapper[33867]: I0219 03:37:24.387799 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtqvx\" (UniqueName: \"kubernetes.io/projected/116294f4-67e6-4af1-a23f-29012eeb2090-kube-api-access-xtqvx\") pod \"telemetry-operator-controller-manager-7f45b4ff68-bzt8g\" (UID: \"116294f4-67e6-4af1-a23f-29012eeb2090\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g" Feb 19 03:37:24.426018 master-0 kubenswrapper[33867]: I0219 03:37:24.425900 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnnnc\" (UniqueName: \"kubernetes.io/projected/e9dacca4-e34c-4b78-97e3-c12b06b3738b-kube-api-access-cnnnc\") pod \"swift-operator-controller-manager-68f46476f-hqd26\" (UID: \"e9dacca4-e34c-4b78-97e3-c12b06b3738b\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-hqd26" Feb 19 03:37:24.465116 master-0 kubenswrapper[33867]: I0219 03:37:24.464627 33867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls"] Feb 19 03:37:24.471341 master-0 kubenswrapper[33867]: I0219 03:37:24.469246 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:24.471839 master-0 kubenswrapper[33867]: I0219 03:37:24.471792 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 03:37:24.473011 master-0 kubenswrapper[33867]: I0219 03:37:24.472806 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 03:37:24.494694 master-0 kubenswrapper[33867]: I0219 03:37:24.494143 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52lnw\" (UniqueName: \"kubernetes.io/projected/bd663e7b-0774-48b5-bf36-9b28f553c2f8-kube-api-access-52lnw\") pod \"test-operator-controller-manager-7866795846-dxk94\" (UID: \"bd663e7b-0774-48b5-bf36-9b28f553c2f8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-dxk94" Feb 19 03:37:24.494694 master-0 kubenswrapper[33867]: I0219 03:37:24.494222 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85h5k\" (UniqueName: \"kubernetes.io/projected/6776ed22-9e69-4556-b092-fc78542efe4a-kube-api-access-85h5k\") pod \"watcher-operator-controller-manager-5db88f68c-k82hk\" (UID: \"6776ed22-9e69-4556-b092-fc78542efe4a\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk" Feb 19 03:37:24.494694 master-0 kubenswrapper[33867]: I0219 03:37:24.494366 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:24.494694 master-0 kubenswrapper[33867]: I0219 03:37:24.494392 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:24.494694 master-0 kubenswrapper[33867]: I0219 03:37:24.494453 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fd42\" (UniqueName: \"kubernetes.io/projected/a046d5fd-383b-4769-9912-a8ed83bf66a7-kube-api-access-4fd42\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:24.494694 master-0 kubenswrapper[33867]: I0219 03:37:24.494477 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtqvx\" (UniqueName: \"kubernetes.io/projected/116294f4-67e6-4af1-a23f-29012eeb2090-kube-api-access-xtqvx\") pod \"telemetry-operator-controller-manager-7f45b4ff68-bzt8g\" (UID: \"116294f4-67e6-4af1-a23f-29012eeb2090\") " 
pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g" Feb 19 03:37:24.509522 master-0 kubenswrapper[33867]: I0219 03:37:24.507517 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hqd26" Feb 19 03:37:24.509522 master-0 kubenswrapper[33867]: I0219 03:37:24.507978 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls"] Feb 19 03:37:24.529075 master-0 kubenswrapper[33867]: I0219 03:37:24.528811 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n"] Feb 19 03:37:24.531027 master-0 kubenswrapper[33867]: I0219 03:37:24.530614 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n" Feb 19 03:37:24.541007 master-0 kubenswrapper[33867]: I0219 03:37:24.534724 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtqvx\" (UniqueName: \"kubernetes.io/projected/116294f4-67e6-4af1-a23f-29012eeb2090-kube-api-access-xtqvx\") pod \"telemetry-operator-controller-manager-7f45b4ff68-bzt8g\" (UID: \"116294f4-67e6-4af1-a23f-29012eeb2090\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g" Feb 19 03:37:24.542109 master-0 kubenswrapper[33867]: I0219 03:37:24.541958 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85h5k\" (UniqueName: \"kubernetes.io/projected/6776ed22-9e69-4556-b092-fc78542efe4a-kube-api-access-85h5k\") pod \"watcher-operator-controller-manager-5db88f68c-k82hk\" (UID: \"6776ed22-9e69-4556-b092-fc78542efe4a\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk" Feb 19 03:37:24.542109 master-0 kubenswrapper[33867]: I0219 03:37:24.542050 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n"] Feb 19 03:37:24.549891 master-0 kubenswrapper[33867]: I0219 03:37:24.546236 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8" Feb 19 03:37:24.549891 master-0 kubenswrapper[33867]: I0219 03:37:24.546982 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52lnw\" (UniqueName: \"kubernetes.io/projected/bd663e7b-0774-48b5-bf36-9b28f553c2f8-kube-api-access-52lnw\") pod \"test-operator-controller-manager-7866795846-dxk94\" (UID: \"bd663e7b-0774-48b5-bf36-9b28f553c2f8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-dxk94" Feb 19 03:37:24.551664 master-0 kubenswrapper[33867]: I0219 03:37:24.551628 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g" Feb 19 03:37:24.587089 master-0 kubenswrapper[33867]: I0219 03:37:24.577019 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-dxk94" Feb 19 03:37:24.613426 master-0 kubenswrapper[33867]: I0219 03:37:24.612381 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fd42\" (UniqueName: \"kubernetes.io/projected/a046d5fd-383b-4769-9912-a8ed83bf66a7-kube-api-access-4fd42\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:24.613426 master-0 kubenswrapper[33867]: I0219 03:37:24.612826 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:24.613426 master-0 kubenswrapper[33867]: I0219 03:37:24.612859 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:24.613426 master-0 kubenswrapper[33867]: I0219 03:37:24.613179 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww5kl\" (UniqueName: \"kubernetes.io/projected/eb5820e0-2241-4fc0-a7ae-e2eb51b08653-kube-api-access-ww5kl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t465n\" (UID: \"eb5820e0-2241-4fc0-a7ae-e2eb51b08653\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n" Feb 19 03:37:24.616522 master-0 kubenswrapper[33867]: E0219 03:37:24.616479 33867 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 03:37:24.616614 master-0 kubenswrapper[33867]: E0219 03:37:24.616565 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:25.116529697 +0000 UTC m=+850.413200308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "webhook-server-cert" not found Feb 19 03:37:24.620001 master-0 kubenswrapper[33867]: E0219 03:37:24.619958 33867 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 03:37:24.620083 master-0 kubenswrapper[33867]: E0219 03:37:24.620037 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:25.119999815 +0000 UTC m=+850.416670426 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "metrics-server-cert" not found Feb 19 03:37:24.632450 master-0 kubenswrapper[33867]: I0219 03:37:24.632393 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fd42\" (UniqueName: \"kubernetes.io/projected/a046d5fd-383b-4769-9912-a8ed83bf66a7-kube-api-access-4fd42\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:24.665913 master-0 kubenswrapper[33867]: I0219 03:37:24.665869 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69"] Feb 19 03:37:24.674118 master-0 kubenswrapper[33867]: I0219 03:37:24.674074 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk" Feb 19 03:37:24.687413 master-0 kubenswrapper[33867]: W0219 03:37:24.682109 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf7e58f8_89c5_400f_b73c_5eb73727e8c7.slice/crio-e6047df293ed627054933eacd32eebb3c5e928538ef681f6bffdb5e648ec5460 WatchSource:0}: Error finding container e6047df293ed627054933eacd32eebb3c5e928538ef681f6bffdb5e648ec5460: Status 404 returned error can't find the container with id e6047df293ed627054933eacd32eebb3c5e928538ef681f6bffdb5e648ec5460 Feb 19 03:37:24.714157 master-0 kubenswrapper[33867]: I0219 03:37:24.714085 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:24.714490 master-0 kubenswrapper[33867]: E0219 03:37:24.714437 33867 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 03:37:24.714688 master-0 kubenswrapper[33867]: E0219 03:37:24.714661 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert podName:e3c70606-b8cd-4216-98e7-d73c7d31b443 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:25.714625094 +0000 UTC m=+851.011295705 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" (UID: "e3c70606-b8cd-4216-98e7-d73c7d31b443") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 03:37:24.714879 master-0 kubenswrapper[33867]: I0219 03:37:24.714840 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww5kl\" (UniqueName: \"kubernetes.io/projected/eb5820e0-2241-4fc0-a7ae-e2eb51b08653-kube-api-access-ww5kl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t465n\" (UID: \"eb5820e0-2241-4fc0-a7ae-e2eb51b08653\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n" Feb 19 03:37:24.721595 master-0 kubenswrapper[33867]: I0219 03:37:24.721506 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk"] Feb 19 03:37:24.740378 master-0 kubenswrapper[33867]: I0219 03:37:24.740295 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww5kl\" (UniqueName: \"kubernetes.io/projected/eb5820e0-2241-4fc0-a7ae-e2eb51b08653-kube-api-access-ww5kl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t465n\" (UID: \"eb5820e0-2241-4fc0-a7ae-e2eb51b08653\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n" Feb 19 03:37:24.795747 master-0 kubenswrapper[33867]: I0219 03:37:24.788533 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m"] Feb 19 03:37:24.821533 master-0 kubenswrapper[33867]: I0219 03:37:24.821480 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk" event={"ID":"af7e58f8-89c5-400f-b73c-5eb73727e8c7","Type":"ContainerStarted","Data":"e6047df293ed627054933eacd32eebb3c5e928538ef681f6bffdb5e648ec5460"} Feb 19 03:37:24.832398 master-0 kubenswrapper[33867]: I0219 03:37:24.830494 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69" event={"ID":"81f513e3-9d43-4ca5-a960-a057b6284bf8","Type":"ContainerStarted","Data":"294a9b7f603698393b4a2f05513168d1240182d8cf7e08ec7951fc72907f4f8a"} Feb 19 03:37:25.013837 master-0 kubenswrapper[33867]: I0219 03:37:25.013756 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n" Feb 19 03:37:25.076088 master-0 kubenswrapper[33867]: I0219 03:37:25.076021 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d"] Feb 19 03:37:25.109795 master-0 kubenswrapper[33867]: I0219 03:37:25.109700 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2"] Feb 19 03:37:25.116537 master-0 kubenswrapper[33867]: W0219 03:37:25.116438 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dbd1105_8bb5_4010_9ec9_58c2dd1f35e9.slice/crio-802f3f26d2e2cbdc894b77ccbda2fb915b0035302bc04a11b4dcc549d55b56b5 WatchSource:0}: Error finding container 802f3f26d2e2cbdc894b77ccbda2fb915b0035302bc04a11b4dcc549d55b56b5: Status 404 returned error can't find the container with id 802f3f26d2e2cbdc894b77ccbda2fb915b0035302bc04a11b4dcc549d55b56b5 Feb 19 03:37:25.118911 master-0 kubenswrapper[33867]: I0219 03:37:25.118816 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h"] Feb 19 03:37:25.134830 master-0 kubenswrapper[33867]: I0219 03:37:25.134678 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v"] Feb 19 03:37:25.167387 master-0 kubenswrapper[33867]: I0219 03:37:25.167295 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:25.167387 master-0 kubenswrapper[33867]: I0219 03:37:25.167383 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:25.167738 master-0 kubenswrapper[33867]: I0219 03:37:25.167432 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:25.167738 master-0 kubenswrapper[33867]: E0219 03:37:25.167492 33867 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 03:37:25.167738 master-0 kubenswrapper[33867]: E0219 03:37:25.167541 33867 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 03:37:25.167738 master-0 kubenswrapper[33867]: E0219 03:37:25.167554 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:37:26.167535478 +0000 UTC m=+851.464206089 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "webhook-server-cert" not found Feb 19 03:37:25.167738 master-0 kubenswrapper[33867]: E0219 03:37:25.167677 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:26.167653641 +0000 UTC m=+851.464324252 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "metrics-server-cert" not found Feb 19 03:37:25.167738 master-0 kubenswrapper[33867]: E0219 03:37:25.167581 33867 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:25.167738 master-0 kubenswrapper[33867]: E0219 03:37:25.167705 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert podName:1554c3da-f309-402e-8d61-c12b1ef616bf nodeName:}" failed. No retries permitted until 2026-02-19 03:37:27.167699783 +0000 UTC m=+852.464370384 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert") pod "infra-operator-controller-manager-5f879c76b6-nzsnk" (UID: "1554c3da-f309-402e-8d61-c12b1ef616bf") : secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:25.506605 master-0 kubenswrapper[33867]: I0219 03:37:25.506547 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz"] Feb 19 03:37:25.529503 master-0 kubenswrapper[33867]: I0219 03:37:25.529396 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs"] Feb 19 03:37:25.545445 master-0 kubenswrapper[33867]: I0219 03:37:25.541755 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd"] Feb 19 03:37:25.564450 master-0 kubenswrapper[33867]: I0219 03:37:25.561592 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj"] Feb 19 03:37:25.795512 master-0 kubenswrapper[33867]: I0219 03:37:25.794502 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:25.795512 master-0 kubenswrapper[33867]: E0219 03:37:25.795188 33867 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 
03:37:25.795512 master-0 kubenswrapper[33867]: E0219 03:37:25.795299 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert podName:e3c70606-b8cd-4216-98e7-d73c7d31b443 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:27.795274272 +0000 UTC m=+853.091944883 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" (UID: "e3c70606-b8cd-4216-98e7-d73c7d31b443") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 03:37:25.873103 master-0 kubenswrapper[33867]: I0219 03:37:25.873021 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd" event={"ID":"8379917d-eee7-433f-a617-e845e9d59f16","Type":"ContainerStarted","Data":"55f3157a37fd98a9f1d7c104980f8264beeefd0fcae33880362aab411ec4e379"} Feb 19 03:37:25.885000 master-0 kubenswrapper[33867]: I0219 03:37:25.875896 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h" event={"ID":"1dbd1105-8bb5-4010-9ec9-58c2dd1f35e9","Type":"ContainerStarted","Data":"802f3f26d2e2cbdc894b77ccbda2fb915b0035302bc04a11b4dcc549d55b56b5"} Feb 19 03:37:25.901936 master-0 kubenswrapper[33867]: I0219 03:37:25.901849 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m" event={"ID":"26bcada5-2616-4d6f-82d6-0659611454af","Type":"ContainerStarted","Data":"be06a0893708c08602531779e07b5811bca27e3a79bc0d892c27c4aaae4e248d"} Feb 19 03:37:25.903347 master-0 kubenswrapper[33867]: I0219 03:37:25.903308 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj" event={"ID":"146322c9-d8f1-4aa5-af40-313a3226f9f0","Type":"ContainerStarted","Data":"c030d6ec54ac439c6a4eba4ffef09b94a8faaa17d3b4aab82db9f578e76666ed"} Feb 19 03:37:25.905375 master-0 kubenswrapper[33867]: I0219 03:37:25.905334 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz" event={"ID":"5e2af2a9-057f-42b0-aed1-5473728c4a6d","Type":"ContainerStarted","Data":"3c05ec13d61b6aea87e214ace1b8d1b10c322944bdf4ec2bc3ea292bcd0ea368"} Feb 19 03:37:25.906443 master-0 kubenswrapper[33867]: I0219 03:37:25.906409 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d" event={"ID":"aa4296cf-041c-4133-a2d9-8a0becd98502","Type":"ContainerStarted","Data":"2a3e7fd785fd712cf5678ea378b3db4b862abf6751a6bbf9e00b2a14c391fbaa"} Feb 19 03:37:25.907921 master-0 kubenswrapper[33867]: I0219 03:37:25.907890 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs" event={"ID":"8e845974-687e-4f15-961b-edf71c7dc316","Type":"ContainerStarted","Data":"739e91282e572d55f23c2713769c9f6e7ef0a2bb8f5aad400b84bdd71522b91f"} Feb 19 03:37:25.909344 master-0 kubenswrapper[33867]: I0219 03:37:25.909307 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v" 
event={"ID":"4d354ad0-8588-4913-8189-ad94abd86af5","Type":"ContainerStarted","Data":"657efee29d2da0044564a656a06222bb274b7656d97e6963f1ab4e80f7a13983"} Feb 19 03:37:25.910748 master-0 kubenswrapper[33867]: I0219 03:37:25.910713 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2" event={"ID":"1f9c99f7-4fe4-4fdf-989d-f17588d7ffe3","Type":"ContainerStarted","Data":"fc664f628702bbae717c67fdc5777b9fbd78ee35a29ad99dee73d6bd398bf55b"} Feb 19 03:37:26.094761 master-0 kubenswrapper[33867]: I0219 03:37:26.094024 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw"] Feb 19 03:37:26.097659 master-0 kubenswrapper[33867]: W0219 03:37:26.095365 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod116294f4_67e6_4af1_a23f_29012eeb2090.slice/crio-3f8f04585c845d10310b4164f10341919c8d4829c8b13c24abfd78401823d47c WatchSource:0}: Error finding container 3f8f04585c845d10310b4164f10341919c8d4829c8b13c24abfd78401823d47c: Status 404 returned error can't find the container with id 3f8f04585c845d10310b4164f10341919c8d4829c8b13c24abfd78401823d47c Feb 19 03:37:26.101424 master-0 kubenswrapper[33867]: W0219 03:37:26.101327 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3da7145_0056_4bed_8e77_5a257550f8da.slice/crio-8dca68754b7321858c99744e92ad6d8801036bae04e314f41065636f2f08bc6f WatchSource:0}: Error finding container 8dca68754b7321858c99744e92ad6d8801036bae04e314f41065636f2f08bc6f: Status 404 returned error can't find the container with id 8dca68754b7321858c99744e92ad6d8801036bae04e314f41065636f2f08bc6f Feb 19 03:37:26.111363 master-0 kubenswrapper[33867]: W0219 03:37:26.111153 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9dacca4_e34c_4b78_97e3_c12b06b3738b.slice/crio-6bc35296c263022af637acf70cf218b536930e90eb9d32314a160d70ec047951 WatchSource:0}: Error finding container 6bc35296c263022af637acf70cf218b536930e90eb9d32314a160d70ec047951: Status 404 returned error can't find the container with id 6bc35296c263022af637acf70cf218b536930e90eb9d32314a160d70ec047951 Feb 19 03:37:26.112237 master-0 kubenswrapper[33867]: W0219 03:37:26.112102 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d0c427a_ffc4_4bea_a695_f1c50efb4c79.slice/crio-16fa1e9662859ae3a6072ac930ed49422585ae1679d3376857e261cee451bba1 WatchSource:0}: Error finding container 16fa1e9662859ae3a6072ac930ed49422585ae1679d3376857e261cee451bba1: Status 404 returned error can't find the container with id 16fa1e9662859ae3a6072ac930ed49422585ae1679d3376857e261cee451bba1 Feb 19 03:37:26.126795 master-0 kubenswrapper[33867]: I0219 03:37:26.126333 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k"] Feb 19 03:37:26.136551 master-0 kubenswrapper[33867]: I0219 03:37:26.136338 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm"] Feb 19 03:37:26.149277 master-0 kubenswrapper[33867]: I0219 03:37:26.148476 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g"] Feb 19 03:37:26.165522 master-0 kubenswrapper[33867]: I0219 03:37:26.161325 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8"] Feb 19 03:37:26.200474 master-0 kubenswrapper[33867]: I0219 03:37:26.193872 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-hqd26"] Feb 19 03:37:26.217401 master-0 kubenswrapper[33867]: I0219 03:37:26.213383 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:26.217401 master-0 kubenswrapper[33867]: I0219 03:37:26.213509 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:26.217401 master-0 kubenswrapper[33867]: E0219 03:37:26.213933 33867 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 03:37:26.217401 master-0 kubenswrapper[33867]: E0219 03:37:26.214070 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:28.214019649 +0000 UTC m=+853.510690260 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "metrics-server-cert" not found Feb 19 03:37:26.217401 master-0 kubenswrapper[33867]: E0219 03:37:26.214178 33867 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 03:37:26.217401 master-0 kubenswrapper[33867]: E0219 03:37:26.214237 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:28.214224905 +0000 UTC m=+853.510895516 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "webhook-server-cert" not found Feb 19 03:37:26.445778 master-0 kubenswrapper[33867]: I0219 03:37:26.443309 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk"] Feb 19 03:37:26.473349 master-0 kubenswrapper[33867]: W0219 03:37:26.473229 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd663e7b_0774_48b5_bf36_9b28f553c2f8.slice/crio-26fe282748d84daf6780f6cc8ae04333d9a31df31df9ad3173a99dbf3944e90f WatchSource:0}: Error finding container 26fe282748d84daf6780f6cc8ae04333d9a31df31df9ad3173a99dbf3944e90f: Status 404 returned error can't find the container with id 26fe282748d84daf6780f6cc8ae04333d9a31df31df9ad3173a99dbf3944e90f Feb 19 03:37:26.474108 master-0 kubenswrapper[33867]: I0219 03:37:26.474050 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n"] Feb 19 03:37:26.529242 master-0 kubenswrapper[33867]: I0219 03:37:26.529151 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-dxk94"] Feb 19 03:37:26.945031 master-0 kubenswrapper[33867]: I0219 03:37:26.944863 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm" event={"ID":"3412b3eb-21b6-4166-9a78-b7c73f91d708","Type":"ContainerStarted","Data":"ec3f764b0dcd347bdd8322d03735ea18f6e4206fdfc20351e7bb3d6463ea9360"} Feb 19 03:37:26.950448 master-0 kubenswrapper[33867]: I0219 03:37:26.950300 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk" event={"ID":"6776ed22-9e69-4556-b092-fc78542efe4a","Type":"ContainerStarted","Data":"b02e691573b2796f8e98589b7c3a34e68050406c194e549a7987c5e5d12f83eb"} Feb 19 03:37:27.015338 master-0 kubenswrapper[33867]: I0219 03:37:27.015241 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-dxk94" event={"ID":"bd663e7b-0774-48b5-bf36-9b28f553c2f8","Type":"ContainerStarted","Data":"26fe282748d84daf6780f6cc8ae04333d9a31df31df9ad3173a99dbf3944e90f"} Feb 19 03:37:27.015338 master-0 kubenswrapper[33867]: I0219 03:37:27.015332 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8" event={"ID":"e2e7ed89-284a-4147-bcad-ec2520b9c64c","Type":"ContainerStarted","Data":"b29e8a8b2739aad7e05aaa9006d06439b5e31777b4adb7b894a3035958b93b0d"} Feb 19 03:37:27.015338 master-0 kubenswrapper[33867]: I0219 03:37:27.015346 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hqd26" event={"ID":"e9dacca4-e34c-4b78-97e3-c12b06b3738b","Type":"ContainerStarted","Data":"6bc35296c263022af637acf70cf218b536930e90eb9d32314a160d70ec047951"} Feb 19 03:37:27.015338 master-0 kubenswrapper[33867]: I0219 03:37:27.015358 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k" 
event={"ID":"3d0c427a-ffc4-4bea-a695-f1c50efb4c79","Type":"ContainerStarted","Data":"16fa1e9662859ae3a6072ac930ed49422585ae1679d3376857e261cee451bba1"} Feb 19 03:37:27.015831 master-0 kubenswrapper[33867]: I0219 03:37:27.015375 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n" event={"ID":"eb5820e0-2241-4fc0-a7ae-e2eb51b08653","Type":"ContainerStarted","Data":"76f7242e23d7567f916735084587a5c0ec637d6722d60f54958c65792aaeb181"} Feb 19 03:37:27.015831 master-0 kubenswrapper[33867]: I0219 03:37:27.015387 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw" event={"ID":"b3da7145-0056-4bed-8e77-5a257550f8da","Type":"ContainerStarted","Data":"8dca68754b7321858c99744e92ad6d8801036bae04e314f41065636f2f08bc6f"} Feb 19 03:37:27.016774 master-0 kubenswrapper[33867]: I0219 03:37:27.016060 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g" event={"ID":"116294f4-67e6-4af1-a23f-29012eeb2090","Type":"ContainerStarted","Data":"3f8f04585c845d10310b4164f10341919c8d4829c8b13c24abfd78401823d47c"} Feb 19 03:37:27.480201 master-0 kubenswrapper[33867]: I0219 03:37:27.238157 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:27.480201 master-0 kubenswrapper[33867]: E0219 03:37:27.238845 33867 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:27.480201 master-0 kubenswrapper[33867]: E0219 03:37:27.238916 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert podName:1554c3da-f309-402e-8d61-c12b1ef616bf nodeName:}" failed. No retries permitted until 2026-02-19 03:37:31.238895217 +0000 UTC m=+856.535565828 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert") pod "infra-operator-controller-manager-5f879c76b6-nzsnk" (UID: "1554c3da-f309-402e-8d61-c12b1ef616bf") : secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:27.852094 master-0 kubenswrapper[33867]: I0219 03:37:27.851959 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:27.852309 master-0 kubenswrapper[33867]: E0219 03:37:27.852238 33867 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 03:37:27.852425 master-0 kubenswrapper[33867]: E0219 03:37:27.852405 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert podName:e3c70606-b8cd-4216-98e7-d73c7d31b443 nodeName:}" failed. 
No retries permitted until 2026-02-19 03:37:31.852373067 +0000 UTC m=+857.149043678 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" (UID: "e3c70606-b8cd-4216-98e7-d73c7d31b443") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 03:37:28.280559 master-0 kubenswrapper[33867]: I0219 03:37:28.280384 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:28.280797 master-0 kubenswrapper[33867]: I0219 03:37:28.280579 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:28.281160 master-0 kubenswrapper[33867]: E0219 03:37:28.280977 33867 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 03:37:28.282321 master-0 kubenswrapper[33867]: E0219 03:37:28.281575 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:32.281529748 +0000 UTC m=+857.578200359 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "metrics-server-cert" not found Feb 19 03:37:28.282401 master-0 kubenswrapper[33867]: E0219 03:37:28.282367 33867 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 03:37:28.282442 master-0 kubenswrapper[33867]: E0219 03:37:28.282415 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:32.282403883 +0000 UTC m=+857.579074494 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "webhook-server-cert" not found Feb 19 03:37:31.290354 master-0 kubenswrapper[33867]: I0219 03:37:31.290287 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:31.291199 master-0 kubenswrapper[33867]: E0219 03:37:31.290546 33867 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:31.291199 master-0 kubenswrapper[33867]: E0219 03:37:31.290592 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert podName:1554c3da-f309-402e-8d61-c12b1ef616bf nodeName:}" failed. No retries permitted until 2026-02-19 03:37:39.290576637 +0000 UTC m=+864.587247248 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert") pod "infra-operator-controller-manager-5f879c76b6-nzsnk" (UID: "1554c3da-f309-402e-8d61-c12b1ef616bf") : secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:31.904285 master-0 kubenswrapper[33867]: I0219 03:37:31.904184 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:31.904695 master-0 kubenswrapper[33867]: E0219 03:37:31.904383 33867 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 03:37:31.904695 master-0 kubenswrapper[33867]: E0219 03:37:31.904467 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert podName:e3c70606-b8cd-4216-98e7-d73c7d31b443 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:39.904447538 +0000 UTC m=+865.201118159 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" (UID: "e3c70606-b8cd-4216-98e7-d73c7d31b443") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 03:37:32.313631 master-0 kubenswrapper[33867]: I0219 03:37:32.313481 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:32.313631 master-0 kubenswrapper[33867]: I0219 03:37:32.313533 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:32.314290 master-0 kubenswrapper[33867]: E0219 03:37:32.313826 33867 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 03:37:32.314290 master-0 kubenswrapper[33867]: E0219 03:37:32.313849 33867 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 03:37:32.314290 master-0 kubenswrapper[33867]: E0219 03:37:32.313982 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:40.313943143 +0000 UTC m=+865.610613924 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "webhook-server-cert" not found Feb 19 03:37:32.314290 master-0 kubenswrapper[33867]: E0219 03:37:32.314063 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:40.314038275 +0000 UTC m=+865.610708896 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "metrics-server-cert" not found Feb 19 03:37:39.301439 master-0 kubenswrapper[33867]: I0219 03:37:39.301168 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:39.302128 master-0 kubenswrapper[33867]: E0219 03:37:39.301391 33867 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:39.302128 master-0 kubenswrapper[33867]: E0219 03:37:39.301559 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert podName:1554c3da-f309-402e-8d61-c12b1ef616bf nodeName:}" failed. No retries permitted until 2026-02-19 03:37:55.301539461 +0000 UTC m=+880.598210072 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert") pod "infra-operator-controller-manager-5f879c76b6-nzsnk" (UID: "1554c3da-f309-402e-8d61-c12b1ef616bf") : secret "infra-operator-webhook-server-cert" not found Feb 19 03:37:39.913619 master-0 kubenswrapper[33867]: I0219 03:37:39.913528 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:39.913934 master-0 kubenswrapper[33867]: E0219 03:37:39.913759 33867 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 03:37:39.913934 master-0 kubenswrapper[33867]: E0219 03:37:39.913878 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert podName:e3c70606-b8cd-4216-98e7-d73c7d31b443 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:55.913848658 +0000 UTC m=+881.210519269 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" (UID: "e3c70606-b8cd-4216-98e7-d73c7d31b443") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 03:37:40.320933 master-0 kubenswrapper[33867]: I0219 03:37:40.320800 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:40.320933 master-0 kubenswrapper[33867]: I0219 03:37:40.320875 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:40.321483 master-0 kubenswrapper[33867]: E0219 03:37:40.321014 33867 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 03:37:40.321483 master-0 kubenswrapper[33867]: E0219 03:37:40.321105 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:56.321078469 +0000 UTC m=+881.617749130 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "webhook-server-cert" not found Feb 19 03:37:40.321483 master-0 kubenswrapper[33867]: E0219 03:37:40.321134 33867 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 03:37:40.321483 master-0 kubenswrapper[33867]: E0219 03:37:40.321208 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs podName:a046d5fd-383b-4769-9912-a8ed83bf66a7 nodeName:}" failed. No retries permitted until 2026-02-19 03:37:56.321188242 +0000 UTC m=+881.617858893 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-kgvls" (UID: "a046d5fd-383b-4769-9912-a8ed83bf66a7") : secret "metrics-server-cert" not found Feb 19 03:37:46.473063 master-0 kubenswrapper[33867]: I0219 03:37:46.472740 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69" event={"ID":"81f513e3-9d43-4ca5-a960-a057b6284bf8","Type":"ContainerStarted","Data":"d60fa9ee274dd5bc2a2ec9d6d245327356bb18d1d46174e9bfce1a3d1f8f6bda"} Feb 19 03:37:46.476563 master-0 kubenswrapper[33867]: I0219 03:37:46.476493 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk" event={"ID":"6776ed22-9e69-4556-b092-fc78542efe4a","Type":"ContainerStarted","Data":"71196c03dec726adc8c12a468611b78f64addd1a07c2463d9ffeb53f0e205173"} Feb 19 03:37:46.481427 master-0 kubenswrapper[33867]: I0219 03:37:46.481359 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw" event={"ID":"b3da7145-0056-4bed-8e77-5a257550f8da","Type":"ContainerStarted","Data":"ec90c492151b2ee23053314386a6d921a5291ee4850a978ab71f0d40db8f36e7"} Feb 19 03:37:46.483687 master-0 kubenswrapper[33867]: I0219 03:37:46.482663 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw" Feb 19 03:37:46.678158 master-0 kubenswrapper[33867]: I0219 03:37:46.678040 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw" podStartSLOduration=3.8630443960000003 podStartE2EDuration="23.67801659s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:26.117660721 +0000 UTC m=+851.414331332" lastFinishedPulling="2026-02-19 03:37:45.932632915 +0000 UTC m=+871.229303526" observedRunningTime="2026-02-19 03:37:46.659891397 +0000 UTC m=+871.956561998" watchObservedRunningTime="2026-02-19 03:37:46.67801659 +0000 UTC m=+871.974687201" Feb 19 03:37:47.492366 master-0 kubenswrapper[33867]: I0219 03:37:47.492282 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v" event={"ID":"4d354ad0-8588-4913-8189-ad94abd86af5","Type":"ContainerStarted","Data":"9383997b737fd0e73d055b66e62d9a27f0dbc0fa96b10ca7529495c8d9aa6373"} Feb 19 03:37:47.493050 master-0 kubenswrapper[33867]: I0219 03:37:47.492477 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v" Feb 19 03:37:47.494251 master-0 kubenswrapper[33867]: I0219 03:37:47.494194 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd" event={"ID":"8379917d-eee7-433f-a617-e845e9d59f16","Type":"ContainerStarted","Data":"f4fe4b4ce8492a06e79a09c49c418cd7ff88fb393d509494b9ceb7f0d8cb0148"} Feb 19 03:37:47.494367 master-0 kubenswrapper[33867]: I0219 03:37:47.494325 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd" Feb 19 03:37:47.496393 master-0 kubenswrapper[33867]: I0219 03:37:47.496071 33867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d" event={"ID":"aa4296cf-041c-4133-a2d9-8a0becd98502","Type":"ContainerStarted","Data":"9a445a238c0b4c35e24573415478474c5c061561f84f358a64356777c3ea0b35"} Feb 19 03:37:47.496479 master-0 kubenswrapper[33867]: I0219 03:37:47.496402 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d" Feb 19 03:37:47.497864 master-0 kubenswrapper[33867]: I0219 03:37:47.497799 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk" event={"ID":"af7e58f8-89c5-400f-b73c-5eb73727e8c7","Type":"ContainerStarted","Data":"40a28d643ca3eaa1dbdee6ca5f6b5847747316d5e20e298a783d6fd51a3a9f68"} Feb 19 03:37:47.497951 master-0 kubenswrapper[33867]: I0219 03:37:47.497910 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk" Feb 19 03:37:47.501039 master-0 kubenswrapper[33867]: I0219 03:37:47.500983 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs" event={"ID":"8e845974-687e-4f15-961b-edf71c7dc316","Type":"ContainerStarted","Data":"a8dfd9300cb8dad97a73a5a3ccde41ea24a61a602ef9fea53849a76b57b124b0"} Feb 19 03:37:47.501180 master-0 kubenswrapper[33867]: I0219 03:37:47.501149 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs" Feb 19 03:37:47.503078 master-0 kubenswrapper[33867]: I0219 03:37:47.503038 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm" event={"ID":"3412b3eb-21b6-4166-9a78-b7c73f91d708","Type":"ContainerStarted","Data":"f36f80c940cf25c28f62160ba3824005c7eb3b171dc66f78980bb92fa63df3b6"} Feb 19 03:37:47.503186 master-0 kubenswrapper[33867]: I0219 03:37:47.503161 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm" Feb 19 03:37:47.504760 master-0 kubenswrapper[33867]: I0219 03:37:47.504703 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-dxk94" event={"ID":"bd663e7b-0774-48b5-bf36-9b28f553c2f8","Type":"ContainerStarted","Data":"5f3bc84204946a7b301b9a771e685f84cdc67e5cc52b68505aacee0bd66bac26"} Feb 19 03:37:47.504854 master-0 kubenswrapper[33867]: I0219 03:37:47.504819 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-dxk94" Feb 19 03:37:47.510216 master-0 kubenswrapper[33867]: I0219 03:37:47.509885 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8" event={"ID":"e2e7ed89-284a-4147-bcad-ec2520b9c64c","Type":"ContainerStarted","Data":"209fe88cd44d7cf6465a08b3dde741d0967d2768faca8c52cbc2c120d8d97ee7"} Feb 19 03:37:47.510353 master-0 kubenswrapper[33867]: I0219 03:37:47.510233 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8" Feb 19 03:37:47.511885 master-0 kubenswrapper[33867]: I0219 03:37:47.511825 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h" event={"ID":"1dbd1105-8bb5-4010-9ec9-58c2dd1f35e9","Type":"ContainerStarted","Data":"73bd946821e244686b6e3ac18809283e760fc731a8020a8f999d6f2b77edb7a4"} Feb 19 03:37:47.512269 master-0 kubenswrapper[33867]: I0219 03:37:47.512209 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h" Feb 19 03:37:47.513413 master-0 kubenswrapper[33867]: I0219 03:37:47.513344 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2" event={"ID":"1f9c99f7-4fe4-4fdf-989d-f17588d7ffe3","Type":"ContainerStarted","Data":"fde2296a0964b12f1704f8c77321fde69f9f841e9d742528c0afd96844f3cfa6"} Feb 19 03:37:47.514303 master-0 kubenswrapper[33867]: I0219 03:37:47.514275 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2" Feb 19 03:37:47.517317 master-0 kubenswrapper[33867]: I0219 03:37:47.517246 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hqd26" event={"ID":"e9dacca4-e34c-4b78-97e3-c12b06b3738b","Type":"ContainerStarted","Data":"189e49d4859fee49934a06e38a7aaa1072527743f997159810063473a52ea855"} Feb 19 03:37:47.517527 master-0 kubenswrapper[33867]: I0219 03:37:47.517474 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hqd26" Feb 19 03:37:47.522186 master-0 kubenswrapper[33867]: I0219 03:37:47.522128 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m" event={"ID":"26bcada5-2616-4d6f-82d6-0659611454af","Type":"ContainerStarted","Data":"364383ff8b1f14dca44f5645d4147cbe0922f0f56806198f016a6a74468820f5"} Feb 19 03:37:47.522309 master-0 kubenswrapper[33867]: I0219 03:37:47.522273 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m" Feb 19 03:37:47.524272 master-0 kubenswrapper[33867]: I0219 03:37:47.524206 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz" event={"ID":"5e2af2a9-057f-42b0-aed1-5473728c4a6d","Type":"ContainerStarted","Data":"18b3c822c21df63dd0fd0a01bdbd8744f861110f176b594bb99a80413fc71773"} Feb 19 03:37:47.524417 master-0 kubenswrapper[33867]: I0219 03:37:47.524392 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz" Feb 19 03:37:47.525885 master-0 kubenswrapper[33867]: I0219 03:37:47.525830 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n" event={"ID":"eb5820e0-2241-4fc0-a7ae-e2eb51b08653","Type":"ContainerStarted","Data":"dbcb5ba62d2146ff5aa5f77abf4de92af8926f6facdbf3f42daff79d5d95079e"} Feb 19 03:37:47.527658 master-0 kubenswrapper[33867]: I0219 03:37:47.527603 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k" event={"ID":"3d0c427a-ffc4-4bea-a695-f1c50efb4c79","Type":"ContainerStarted","Data":"0efe1ec60e171a2387d54b98f96dcbf5243eaa35cef652669db8f3b12330b861"} Feb 19 03:37:47.527761 master-0 
kubenswrapper[33867]: I0219 03:37:47.527711 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k" Feb 19 03:37:47.532788 master-0 kubenswrapper[33867]: I0219 03:37:47.531835 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj" event={"ID":"146322c9-d8f1-4aa5-af40-313a3226f9f0","Type":"ContainerStarted","Data":"babcb7302ae63ea10b2b59f53803f4148efe2f506d9155b441e6f5e80b5e16fc"} Feb 19 03:37:47.532788 master-0 kubenswrapper[33867]: I0219 03:37:47.532240 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj" Feb 19 03:37:47.538093 master-0 kubenswrapper[33867]: I0219 03:37:47.537774 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g" event={"ID":"116294f4-67e6-4af1-a23f-29012eeb2090","Type":"ContainerStarted","Data":"eaa78867bd7a2a7d65028b337166595eea655b4f8d2838155fad34a9b05f6482"} Feb 19 03:37:47.539407 master-0 kubenswrapper[33867]: I0219 03:37:47.538717 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g" Feb 19 03:37:47.539407 master-0 kubenswrapper[33867]: I0219 03:37:47.538795 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk" Feb 19 03:37:47.539407 master-0 kubenswrapper[33867]: I0219 03:37:47.538987 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69" Feb 19 03:37:47.539609 master-0 kubenswrapper[33867]: I0219 03:37:47.539452 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v" podStartSLOduration=3.735563496 podStartE2EDuration="24.539430161s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:25.122820352 +0000 UTC m=+850.419490963" lastFinishedPulling="2026-02-19 03:37:45.926687017 +0000 UTC m=+871.223357628" observedRunningTime="2026-02-19 03:37:47.537494966 +0000 UTC m=+872.834165577" watchObservedRunningTime="2026-02-19 03:37:47.539430161 +0000 UTC m=+872.836100782" Feb 19 03:37:47.575162 master-0 kubenswrapper[33867]: I0219 03:37:47.570349 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm" podStartSLOduration=4.63101606 podStartE2EDuration="24.570320705s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:26.08444722 +0000 UTC m=+851.381117831" lastFinishedPulling="2026-02-19 03:37:46.023751855 +0000 UTC m=+871.320422476" observedRunningTime="2026-02-19 03:37:47.563708758 +0000 UTC m=+872.860379369" watchObservedRunningTime="2026-02-19 03:37:47.570320705 +0000 UTC m=+872.866991316" Feb 19 03:37:47.615491 master-0 kubenswrapper[33867]: I0219 03:37:47.615393 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj" podStartSLOduration=4.194251744 podStartE2EDuration="24.615367831s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:25.510028116 
+0000 UTC m=+850.806698727" lastFinishedPulling="2026-02-19 03:37:45.931144203 +0000 UTC m=+871.227814814" observedRunningTime="2026-02-19 03:37:47.611941194 +0000 UTC m=+872.908611805" watchObservedRunningTime="2026-02-19 03:37:47.615367831 +0000 UTC m=+872.912038452" Feb 19 03:37:47.675298 master-0 kubenswrapper[33867]: I0219 03:37:47.675185 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8" podStartSLOduration=4.837693762 podStartE2EDuration="24.675090152s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:26.102893532 +0000 UTC m=+851.399564143" lastFinishedPulling="2026-02-19 03:37:45.940289902 +0000 UTC m=+871.236960533" observedRunningTime="2026-02-19 03:37:47.668429763 +0000 UTC m=+872.965100394" watchObservedRunningTime="2026-02-19 03:37:47.675090152 +0000 UTC m=+872.971760773" Feb 19 03:37:47.714288 master-0 kubenswrapper[33867]: I0219 03:37:47.713354 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m" podStartSLOduration=4.602644372 podStartE2EDuration="25.713337025s" podCreationTimestamp="2026-02-19 03:37:22 +0000 UTC" firstStartedPulling="2026-02-19 03:37:24.820905393 +0000 UTC m=+850.117576004" lastFinishedPulling="2026-02-19 03:37:45.931598046 +0000 UTC m=+871.228268657" observedRunningTime="2026-02-19 03:37:47.707040497 +0000 UTC m=+873.003711098" watchObservedRunningTime="2026-02-19 03:37:47.713337025 +0000 UTC m=+873.010007636" Feb 19 03:37:47.765381 master-0 kubenswrapper[33867]: I0219 03:37:47.764993 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd" podStartSLOduration=4.338185689 podStartE2EDuration="24.764975177s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:25.507138264 +0000 UTC m=+850.803808875" lastFinishedPulling="2026-02-19 03:37:45.933927752 +0000 UTC m=+871.230598363" observedRunningTime="2026-02-19 03:37:47.762431535 +0000 UTC m=+873.059102146" watchObservedRunningTime="2026-02-19 03:37:47.764975177 +0000 UTC m=+873.061645788" Feb 19 03:37:47.808285 master-0 kubenswrapper[33867]: I0219 03:37:47.808123 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d" podStartSLOduration=3.9535764589999998 podStartE2EDuration="24.808093588s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:25.077159089 +0000 UTC m=+850.373829700" lastFinishedPulling="2026-02-19 03:37:45.931676208 +0000 UTC m=+871.228346829" observedRunningTime="2026-02-19 03:37:47.803838407 +0000 UTC m=+873.100509018" watchObservedRunningTime="2026-02-19 03:37:47.808093588 +0000 UTC m=+873.104764209" Feb 19 03:37:47.887064 master-0 kubenswrapper[33867]: I0219 03:37:47.882895 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz" podStartSLOduration=4.433009434 podStartE2EDuration="24.882865755s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:25.482299541 +0000 UTC m=+850.778970152" lastFinishedPulling="2026-02-19 03:37:45.932155862 +0000 UTC m=+871.228826473" observedRunningTime="2026-02-19 03:37:47.870042292 +0000 UTC m=+873.166712903" 
watchObservedRunningTime="2026-02-19 03:37:47.882865755 +0000 UTC m=+873.179536366" Feb 19 03:37:47.887064 master-0 kubenswrapper[33867]: I0219 03:37:47.884325 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-dxk94" podStartSLOduration=5.441294493 podStartE2EDuration="24.884318116s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:26.482409908 +0000 UTC m=+851.779080519" lastFinishedPulling="2026-02-19 03:37:45.925433531 +0000 UTC m=+871.222104142" observedRunningTime="2026-02-19 03:37:47.841928156 +0000 UTC m=+873.138598767" watchObservedRunningTime="2026-02-19 03:37:47.884318116 +0000 UTC m=+873.180988897" Feb 19 03:37:47.911234 master-0 kubenswrapper[33867]: I0219 03:37:47.911148 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k" podStartSLOduration=5.094684739 podStartE2EDuration="24.911126535s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:26.116788376 +0000 UTC m=+851.413458987" lastFinishedPulling="2026-02-19 03:37:45.933230172 +0000 UTC m=+871.229900783" observedRunningTime="2026-02-19 03:37:47.910101146 +0000 UTC m=+873.206771757" watchObservedRunningTime="2026-02-19 03:37:47.911126535 +0000 UTC m=+873.207797146" Feb 19 03:37:47.966471 master-0 kubenswrapper[33867]: I0219 03:37:47.962804 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n" podStartSLOduration=5.481670955 podStartE2EDuration="24.962776297s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:26.496895708 +0000 UTC m=+851.793566319" lastFinishedPulling="2026-02-19 03:37:45.97800106 +0000 UTC m=+871.274671661" observedRunningTime="2026-02-19 03:37:47.956349895 +0000 UTC m=+873.253020506" watchObservedRunningTime="2026-02-19 03:37:47.962776297 +0000 UTC m=+873.259446908" Feb 19 03:37:48.029606 master-0 kubenswrapper[33867]: I0219 03:37:48.029438 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs" podStartSLOduration=4.570004742 podStartE2EDuration="25.029421514s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:25.483980278 +0000 UTC m=+850.780650889" lastFinishedPulling="2026-02-19 03:37:45.94339704 +0000 UTC m=+871.240067661" observedRunningTime="2026-02-19 03:37:48.02784919 +0000 UTC m=+873.324519801" watchObservedRunningTime="2026-02-19 03:37:48.029421514 +0000 UTC m=+873.326092125" Feb 19 03:37:48.035316 master-0 kubenswrapper[33867]: I0219 03:37:48.035246 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2" podStartSLOduration=4.210306408 podStartE2EDuration="25.035231009s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:25.10049778 +0000 UTC m=+850.397168391" lastFinishedPulling="2026-02-19 03:37:45.925422371 +0000 UTC m=+871.222092992" observedRunningTime="2026-02-19 03:37:47.98052051 +0000 UTC m=+873.277191121" watchObservedRunningTime="2026-02-19 03:37:48.035231009 +0000 UTC m=+873.331901620" Feb 19 03:37:48.065345 master-0 kubenswrapper[33867]: I0219 03:37:48.065234 33867 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h" podStartSLOduration=4.255763265 podStartE2EDuration="25.065212818s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:25.117928494 +0000 UTC m=+850.414599105" lastFinishedPulling="2026-02-19 03:37:45.927378047 +0000 UTC m=+871.224048658" observedRunningTime="2026-02-19 03:37:48.055114442 +0000 UTC m=+873.351785053" watchObservedRunningTime="2026-02-19 03:37:48.065212818 +0000 UTC m=+873.361883429" Feb 19 03:37:48.141971 master-0 kubenswrapper[33867]: I0219 03:37:48.141882 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk" podStartSLOduration=6.25437768 podStartE2EDuration="26.141852348s" podCreationTimestamp="2026-02-19 03:37:22 +0000 UTC" firstStartedPulling="2026-02-19 03:37:24.68480035 +0000 UTC m=+849.981470961" lastFinishedPulling="2026-02-19 03:37:44.572275018 +0000 UTC m=+869.868945629" observedRunningTime="2026-02-19 03:37:48.110985084 +0000 UTC m=+873.407655685" watchObservedRunningTime="2026-02-19 03:37:48.141852348 +0000 UTC m=+873.438522959" Feb 19 03:37:48.152891 master-0 kubenswrapper[33867]: I0219 03:37:48.152800 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hqd26" podStartSLOduration=5.33207482 podStartE2EDuration="25.152774937s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:26.114292485 +0000 UTC m=+851.410963096" lastFinishedPulling="2026-02-19 03:37:45.934992512 +0000 UTC m=+871.231663213" observedRunningTime="2026-02-19 03:37:48.078492684 +0000 UTC m=+873.375163295" watchObservedRunningTime="2026-02-19 03:37:48.152774937 +0000 UTC m=+873.449445548" Feb 19 03:37:48.165046 master-0 kubenswrapper[33867]: I0219 03:37:48.164967 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk" podStartSLOduration=5.647728668 podStartE2EDuration="25.164935681s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:26.415812503 +0000 UTC m=+851.712483114" lastFinishedPulling="2026-02-19 03:37:45.933019476 +0000 UTC m=+871.229690127" observedRunningTime="2026-02-19 03:37:48.144235715 +0000 UTC m=+873.440906326" watchObservedRunningTime="2026-02-19 03:37:48.164935681 +0000 UTC m=+873.461606282" Feb 19 03:37:48.210011 master-0 kubenswrapper[33867]: I0219 03:37:48.209739 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69" podStartSLOduration=10.189710124 podStartE2EDuration="26.209715549s" podCreationTimestamp="2026-02-19 03:37:22 +0000 UTC" firstStartedPulling="2026-02-19 03:37:24.460806527 +0000 UTC m=+849.757477138" lastFinishedPulling="2026-02-19 03:37:40.480811952 +0000 UTC m=+865.777482563" observedRunningTime="2026-02-19 03:37:48.169882001 +0000 UTC m=+873.466552602" watchObservedRunningTime="2026-02-19 03:37:48.209715549 +0000 UTC m=+873.506386160" Feb 19 03:37:48.232272 master-0 kubenswrapper[33867]: I0219 03:37:48.232169 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g" podStartSLOduration=5.402942546 podStartE2EDuration="25.232141564s" podCreationTimestamp="2026-02-19 
03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:26.102426339 +0000 UTC m=+851.399096950" lastFinishedPulling="2026-02-19 03:37:45.931625327 +0000 UTC m=+871.228295968" observedRunningTime="2026-02-19 03:37:48.22562052 +0000 UTC m=+873.522291131" watchObservedRunningTime="2026-02-19 03:37:48.232141564 +0000 UTC m=+873.528812175" Feb 19 03:37:53.391996 master-0 kubenswrapper[33867]: I0219 03:37:53.391857 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69" Feb 19 03:37:53.423549 master-0 kubenswrapper[33867]: I0219 03:37:53.423484 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk" Feb 19 03:37:53.442949 master-0 kubenswrapper[33867]: I0219 03:37:53.442875 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m" Feb 19 03:37:53.520325 master-0 kubenswrapper[33867]: I0219 03:37:53.518819 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2" Feb 19 03:37:53.551751 master-0 kubenswrapper[33867]: I0219 03:37:53.551348 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v" Feb 19 03:37:53.819945 master-0 kubenswrapper[33867]: I0219 03:37:53.819810 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h" Feb 19 03:37:53.874287 master-0 kubenswrapper[33867]: I0219 03:37:53.874202 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d" Feb 19 03:37:53.950383 master-0 kubenswrapper[33867]: I0219 03:37:53.949993 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz" Feb 19 03:37:53.990773 master-0 kubenswrapper[33867]: I0219 03:37:53.989691 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj" Feb 19 03:37:54.021220 master-0 kubenswrapper[33867]: I0219 03:37:54.021163 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd" Feb 19 03:37:54.042311 master-0 kubenswrapper[33867]: I0219 03:37:54.042234 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs" Feb 19 03:37:54.204931 master-0 kubenswrapper[33867]: I0219 03:37:54.204846 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm" Feb 19 03:37:54.248287 master-0 kubenswrapper[33867]: I0219 03:37:54.248223 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw" Feb 19 03:37:54.358959 master-0 kubenswrapper[33867]: I0219 03:37:54.358904 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k" Feb 19 
03:37:54.514955 master-0 kubenswrapper[33867]: I0219 03:37:54.513322 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hqd26" Feb 19 03:37:54.553478 master-0 kubenswrapper[33867]: I0219 03:37:54.553397 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8" Feb 19 03:37:54.554339 master-0 kubenswrapper[33867]: I0219 03:37:54.554292 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g" Feb 19 03:37:54.583297 master-0 kubenswrapper[33867]: I0219 03:37:54.581201 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-dxk94" Feb 19 03:37:54.678236 master-0 kubenswrapper[33867]: I0219 03:37:54.678171 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk" Feb 19 03:37:55.366938 master-0 kubenswrapper[33867]: I0219 03:37:55.366848 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:55.370802 master-0 kubenswrapper[33867]: I0219 03:37:55.370755 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1554c3da-f309-402e-8d61-c12b1ef616bf-cert\") pod \"infra-operator-controller-manager-5f879c76b6-nzsnk\" (UID: \"1554c3da-f309-402e-8d61-c12b1ef616bf\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:55.664608 master-0 kubenswrapper[33867]: I0219 03:37:55.664139 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:55.979833 master-0 kubenswrapper[33867]: I0219 03:37:55.979728 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:55.985333 master-0 kubenswrapper[33867]: I0219 03:37:55.985287 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3c70606-b8cd-4216-98e7-d73c7d31b443-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx\" (UID: \"e3c70606-b8cd-4216-98e7-d73c7d31b443\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:56.115617 master-0 kubenswrapper[33867]: W0219 03:37:56.115565 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1554c3da_f309_402e_8d61_c12b1ef616bf.slice/crio-16c8e06c878b306ca1b9c87f2538a838ec5256a4372a21efad687cc24b9075b2 WatchSource:0}: Error finding container 16c8e06c878b306ca1b9c87f2538a838ec5256a4372a21efad687cc24b9075b2: Status 404 returned error can't find the container with id 16c8e06c878b306ca1b9c87f2538a838ec5256a4372a21efad687cc24b9075b2 Feb 19 03:37:56.116025 master-0 kubenswrapper[33867]: I0219 03:37:56.115998 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:56.116837 master-0 kubenswrapper[33867]: I0219 03:37:56.116770 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk"] Feb 19 03:37:56.390466 master-0 kubenswrapper[33867]: I0219 03:37:56.389932 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:56.390466 master-0 kubenswrapper[33867]: I0219 03:37:56.390002 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:56.393815 master-0 kubenswrapper[33867]: I0219 03:37:56.393744 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:56.395658 master-0 kubenswrapper[33867]: I0219 03:37:56.395600 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a046d5fd-383b-4769-9912-a8ed83bf66a7-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-kgvls\" (UID: \"a046d5fd-383b-4769-9912-a8ed83bf66a7\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:56.519379 master-0 kubenswrapper[33867]: I0219 03:37:56.519186 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:56.647404 master-0 kubenswrapper[33867]: I0219 03:37:56.646153 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx"] Feb 19 03:37:56.681301 master-0 kubenswrapper[33867]: W0219 03:37:56.675357 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3c70606_b8cd_4216_98e7_d73c7d31b443.slice/crio-dbe69873d322e07bef8a69d3ceb19fcf4eb2bdfbd47e150d202bdb14583472b9 WatchSource:0}: Error finding container dbe69873d322e07bef8a69d3ceb19fcf4eb2bdfbd47e150d202bdb14583472b9: Status 404 returned error can't find the container with id dbe69873d322e07bef8a69d3ceb19fcf4eb2bdfbd47e150d202bdb14583472b9 Feb 19 03:37:56.712291 master-0 kubenswrapper[33867]: I0219 03:37:56.709989 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" event={"ID":"1554c3da-f309-402e-8d61-c12b1ef616bf","Type":"ContainerStarted","Data":"16c8e06c878b306ca1b9c87f2538a838ec5256a4372a21efad687cc24b9075b2"} Feb 19 03:37:57.205135 master-0 kubenswrapper[33867]: I0219 03:37:57.205076 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls"] Feb 19 03:37:57.211039 master-0 kubenswrapper[33867]: W0219 03:37:57.210979 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda046d5fd_383b_4769_9912_a8ed83bf66a7.slice/crio-14a8b8c0bb9d4a99e799549b56b5399545eff2501ee674e7a9016769d54439a3 WatchSource:0}: Error finding container 14a8b8c0bb9d4a99e799549b56b5399545eff2501ee674e7a9016769d54439a3: Status 404 returned error can't find the container with id 14a8b8c0bb9d4a99e799549b56b5399545eff2501ee674e7a9016769d54439a3 Feb 19 03:37:57.722536 master-0 kubenswrapper[33867]: I0219 03:37:57.722452 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" event={"ID":"a046d5fd-383b-4769-9912-a8ed83bf66a7","Type":"ContainerStarted","Data":"e66f5be52b62a2e21ec895dbc4bc9bbe7359d14c34ce202217833b35fafd6002"} Feb 19 03:37:57.722536 master-0 kubenswrapper[33867]: I0219 03:37:57.722507 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" event={"ID":"a046d5fd-383b-4769-9912-a8ed83bf66a7","Type":"ContainerStarted","Data":"14a8b8c0bb9d4a99e799549b56b5399545eff2501ee674e7a9016769d54439a3"} Feb 19 03:37:57.723168 master-0 kubenswrapper[33867]: I0219 03:37:57.722558 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:37:57.724049 master-0 kubenswrapper[33867]: I0219 03:37:57.723985 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" event={"ID":"e3c70606-b8cd-4216-98e7-d73c7d31b443","Type":"ContainerStarted","Data":"dbe69873d322e07bef8a69d3ceb19fcf4eb2bdfbd47e150d202bdb14583472b9"} Feb 19 03:37:57.765472 master-0 kubenswrapper[33867]: I0219 03:37:57.765376 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" podStartSLOduration=34.765353379 podStartE2EDuration="34.765353379s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:37:57.754836901 +0000 UTC m=+883.051507532" watchObservedRunningTime="2026-02-19 03:37:57.765353379 +0000 UTC m=+883.062023990" Feb 19 03:37:59.749809 master-0 kubenswrapper[33867]: I0219 03:37:59.749727 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" event={"ID":"e3c70606-b8cd-4216-98e7-d73c7d31b443","Type":"ContainerStarted","Data":"824ecc129f6829a95c4b9a6075c87b26b087c6673fdfd333cfc28bb9d74a5dea"} Feb 19 03:37:59.750845 master-0 kubenswrapper[33867]: I0219 03:37:59.749824 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:37:59.752233 master-0 kubenswrapper[33867]: I0219 03:37:59.752145 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" event={"ID":"1554c3da-f309-402e-8d61-c12b1ef616bf","Type":"ContainerStarted","Data":"fd1676e9f6e5f3a1cac27326a12fd13942471496b3f1d9887bc901fc23e03fca"} Feb 19 03:37:59.752429 master-0 kubenswrapper[33867]: I0219 03:37:59.752242 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:37:59.790748 master-0 kubenswrapper[33867]: I0219 03:37:59.790648 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" podStartSLOduration=34.436860128 podStartE2EDuration="36.790627533s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:56.68924061 +0000 UTC m=+881.985911241" lastFinishedPulling="2026-02-19 03:37:59.043008035 +0000 UTC m=+884.339678646" observedRunningTime="2026-02-19 03:37:59.786625429 +0000 UTC m=+885.083296040" watchObservedRunningTime="2026-02-19 03:37:59.790627533 +0000 UTC m=+885.087298144" Feb 19 03:37:59.816399 master-0 kubenswrapper[33867]: I0219 03:37:59.816315 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" podStartSLOduration=33.907078266 podStartE2EDuration="36.816290239s" podCreationTimestamp="2026-02-19 03:37:23 +0000 UTC" firstStartedPulling="2026-02-19 03:37:56.128992156 +0000 UTC m=+881.425662767" lastFinishedPulling="2026-02-19 03:37:59.038204129 +0000 UTC m=+884.334874740" observedRunningTime="2026-02-19 03:37:59.805163654 +0000 UTC m=+885.101834275" watchObservedRunningTime="2026-02-19 03:37:59.816290239 +0000 UTC m=+885.112960850" Feb 19 03:38:05.672120 master-0 kubenswrapper[33867]: I0219 03:38:05.672044 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk" Feb 19 03:38:06.123952 master-0 kubenswrapper[33867]: I0219 03:38:06.123815 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx" Feb 19 03:38:06.528299 master-0 kubenswrapper[33867]: I0219 03:38:06.528188 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls" Feb 19 03:38:45.008809 master-0 kubenswrapper[33867]: I0219 03:38:45.007788 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-clxsg"] Feb 19 03:38:45.009993 master-0 kubenswrapper[33867]: I0219 03:38:45.009946 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" Feb 19 03:38:45.013497 master-0 kubenswrapper[33867]: I0219 03:38:45.013415 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 03:38:45.017276 master-0 kubenswrapper[33867]: I0219 03:38:45.013627 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 03:38:45.017276 master-0 kubenswrapper[33867]: I0219 03:38:45.013797 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 03:38:45.032620 master-0 kubenswrapper[33867]: I0219 03:38:45.031958 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-clxsg"] Feb 19 03:38:45.151288 master-0 kubenswrapper[33867]: I0219 03:38:45.120329 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9r9q\" (UniqueName: \"kubernetes.io/projected/2e353061-bb90-4da1-b260-2a16e7d06a93-kube-api-access-q9r9q\") pod \"dnsmasq-dns-5c7b6fb887-clxsg\" (UID: \"2e353061-bb90-4da1-b260-2a16e7d06a93\") " pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" Feb 19 03:38:45.151288 master-0 kubenswrapper[33867]: I0219 03:38:45.120611 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e353061-bb90-4da1-b260-2a16e7d06a93-config\") pod \"dnsmasq-dns-5c7b6fb887-clxsg\" (UID: \"2e353061-bb90-4da1-b260-2a16e7d06a93\") " pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" Feb 19 03:38:45.172134 master-0 kubenswrapper[33867]: I0219 03:38:45.172065 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d78499c-58qg9"] Feb 19 03:38:45.177555 master-0 kubenswrapper[33867]: I0219 03:38:45.177490 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:38:45.182826 master-0 kubenswrapper[33867]: I0219 03:38:45.182772 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 03:38:45.223289 master-0 kubenswrapper[33867]: I0219 03:38:45.222747 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-config\") pod \"dnsmasq-dns-7d78499c-58qg9\" (UID: \"28dc950c-b6dc-4720-bac2-555217e06bb3\") " pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:38:45.223289 master-0 kubenswrapper[33867]: I0219 03:38:45.222849 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-dns-svc\") pod \"dnsmasq-dns-7d78499c-58qg9\" (UID: \"28dc950c-b6dc-4720-bac2-555217e06bb3\") " pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:38:45.223289 master-0 kubenswrapper[33867]: I0219 03:38:45.222896 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e353061-bb90-4da1-b260-2a16e7d06a93-config\") pod \"dnsmasq-dns-5c7b6fb887-clxsg\" (UID: \"2e353061-bb90-4da1-b260-2a16e7d06a93\") " pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" Feb 19 03:38:45.223289 master-0 kubenswrapper[33867]: I0219 03:38:45.222947 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckp9\" (UniqueName: \"kubernetes.io/projected/28dc950c-b6dc-4720-bac2-555217e06bb3-kube-api-access-gckp9\") pod \"dnsmasq-dns-7d78499c-58qg9\" (UID: \"28dc950c-b6dc-4720-bac2-555217e06bb3\") " pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:38:45.223289 master-0 kubenswrapper[33867]: I0219 03:38:45.223073 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9r9q\" (UniqueName: \"kubernetes.io/projected/2e353061-bb90-4da1-b260-2a16e7d06a93-kube-api-access-q9r9q\") pod \"dnsmasq-dns-5c7b6fb887-clxsg\" (UID: \"2e353061-bb90-4da1-b260-2a16e7d06a93\") " pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" Feb 19 03:38:45.227288 master-0 kubenswrapper[33867]: I0219 03:38:45.224542 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e353061-bb90-4da1-b260-2a16e7d06a93-config\") pod \"dnsmasq-dns-5c7b6fb887-clxsg\" (UID: \"2e353061-bb90-4da1-b260-2a16e7d06a93\") " pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" Feb 19 03:38:45.227288 master-0 kubenswrapper[33867]: I0219 03:38:45.224600 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-58qg9"] Feb 19 03:38:45.250448 master-0 kubenswrapper[33867]: I0219 03:38:45.250401 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9r9q\" (UniqueName: \"kubernetes.io/projected/2e353061-bb90-4da1-b260-2a16e7d06a93-kube-api-access-q9r9q\") pod \"dnsmasq-dns-5c7b6fb887-clxsg\" (UID: \"2e353061-bb90-4da1-b260-2a16e7d06a93\") " pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" Feb 19 03:38:45.329559 master-0 kubenswrapper[33867]: I0219 03:38:45.328916 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-config\") pod \"dnsmasq-dns-7d78499c-58qg9\" (UID: 
\"28dc950c-b6dc-4720-bac2-555217e06bb3\") " pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:38:45.329559 master-0 kubenswrapper[33867]: I0219 03:38:45.329082 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-dns-svc\") pod \"dnsmasq-dns-7d78499c-58qg9\" (UID: \"28dc950c-b6dc-4720-bac2-555217e06bb3\") " pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:38:45.329559 master-0 kubenswrapper[33867]: I0219 03:38:45.329182 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gckp9\" (UniqueName: \"kubernetes.io/projected/28dc950c-b6dc-4720-bac2-555217e06bb3-kube-api-access-gckp9\") pod \"dnsmasq-dns-7d78499c-58qg9\" (UID: \"28dc950c-b6dc-4720-bac2-555217e06bb3\") " pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:38:45.330910 master-0 kubenswrapper[33867]: I0219 03:38:45.330539 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-config\") pod \"dnsmasq-dns-7d78499c-58qg9\" (UID: \"28dc950c-b6dc-4720-bac2-555217e06bb3\") " pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:38:45.331010 master-0 kubenswrapper[33867]: I0219 03:38:45.330931 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-dns-svc\") pod \"dnsmasq-dns-7d78499c-58qg9\" (UID: \"28dc950c-b6dc-4720-bac2-555217e06bb3\") " pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:38:45.349041 master-0 kubenswrapper[33867]: I0219 03:38:45.348980 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gckp9\" (UniqueName: \"kubernetes.io/projected/28dc950c-b6dc-4720-bac2-555217e06bb3-kube-api-access-gckp9\") pod \"dnsmasq-dns-7d78499c-58qg9\" (UID: \"28dc950c-b6dc-4720-bac2-555217e06bb3\") " pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:38:45.409476 master-0 kubenswrapper[33867]: I0219 03:38:45.409391 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" Feb 19 03:38:45.502429 master-0 kubenswrapper[33867]: I0219 03:38:45.502113 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:38:45.875240 master-0 kubenswrapper[33867]: I0219 03:38:45.875185 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-clxsg"] Feb 19 03:38:45.877187 master-0 kubenswrapper[33867]: W0219 03:38:45.877117 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e353061_bb90_4da1_b260_2a16e7d06a93.slice/crio-5ac2d13ee43d24d4d123baac270d4525bdcd1a08eb7eacaa55423d5e851484cd WatchSource:0}: Error finding container 5ac2d13ee43d24d4d123baac270d4525bdcd1a08eb7eacaa55423d5e851484cd: Status 404 returned error can't find the container with id 5ac2d13ee43d24d4d123baac270d4525bdcd1a08eb7eacaa55423d5e851484cd Feb 19 03:38:46.025841 master-0 kubenswrapper[33867]: I0219 03:38:46.025568 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-58qg9"] Feb 19 03:38:46.028786 master-0 kubenswrapper[33867]: W0219 03:38:46.028702 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28dc950c_b6dc_4720_bac2_555217e06bb3.slice/crio-851217c182751b4096b27b73b7ada47d387bcc65ff6c429969bb49cd0c92338e WatchSource:0}: Error finding container 851217c182751b4096b27b73b7ada47d387bcc65ff6c429969bb49cd0c92338e: Status 404 returned error can't find the container with id 851217c182751b4096b27b73b7ada47d387bcc65ff6c429969bb49cd0c92338e Feb 19 03:38:46.317982 master-0 kubenswrapper[33867]: I0219 03:38:46.317855 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" event={"ID":"2e353061-bb90-4da1-b260-2a16e7d06a93","Type":"ContainerStarted","Data":"5ac2d13ee43d24d4d123baac270d4525bdcd1a08eb7eacaa55423d5e851484cd"} Feb 19 03:38:46.319797 master-0 kubenswrapper[33867]: I0219 03:38:46.319743 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-58qg9" event={"ID":"28dc950c-b6dc-4720-bac2-555217e06bb3","Type":"ContainerStarted","Data":"851217c182751b4096b27b73b7ada47d387bcc65ff6c429969bb49cd0c92338e"} Feb 19 03:38:47.263869 master-0 kubenswrapper[33867]: I0219 03:38:47.262366 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-clxsg"] Feb 19 03:38:47.291661 master-0 kubenswrapper[33867]: I0219 03:38:47.291597 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-vxzzp"] Feb 19 03:38:47.295998 master-0 kubenswrapper[33867]: I0219 03:38:47.295943 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:38:47.327721 master-0 kubenswrapper[33867]: I0219 03:38:47.327658 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-vxzzp"] Feb 19 03:38:47.389104 master-0 kubenswrapper[33867]: I0219 03:38:47.389042 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j47r\" (UniqueName: \"kubernetes.io/projected/5688ca74-8693-4449-87e8-62145a078d1c-kube-api-access-6j47r\") pod \"dnsmasq-dns-5bcd98d69f-vxzzp\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:38:47.389365 master-0 kubenswrapper[33867]: I0219 03:38:47.389138 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-config\") pod \"dnsmasq-dns-5bcd98d69f-vxzzp\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:38:47.389365 master-0 kubenswrapper[33867]: I0219 03:38:47.389217 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-vxzzp\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:38:47.491819 master-0 kubenswrapper[33867]: I0219 03:38:47.491746 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j47r\" (UniqueName: \"kubernetes.io/projected/5688ca74-8693-4449-87e8-62145a078d1c-kube-api-access-6j47r\") pod \"dnsmasq-dns-5bcd98d69f-vxzzp\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:38:47.492054 master-0 kubenswrapper[33867]: I0219 03:38:47.491876 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-config\") pod \"dnsmasq-dns-5bcd98d69f-vxzzp\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:38:47.492054 master-0 kubenswrapper[33867]: I0219 03:38:47.492013 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-vxzzp\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:38:47.492981 master-0 kubenswrapper[33867]: I0219 03:38:47.492939 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-config\") pod \"dnsmasq-dns-5bcd98d69f-vxzzp\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:38:47.493097 master-0 kubenswrapper[33867]: I0219 03:38:47.493066 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-vxzzp\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:38:47.512856 master-0 kubenswrapper[33867]: I0219 03:38:47.512784 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j47r\" (UniqueName: \"kubernetes.io/projected/5688ca74-8693-4449-87e8-62145a078d1c-kube-api-access-6j47r\") pod \"dnsmasq-dns-5bcd98d69f-vxzzp\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:38:47.646818 master-0 kubenswrapper[33867]: I0219 03:38:47.646745 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:38:47.757439 master-0 kubenswrapper[33867]: I0219 03:38:47.757358 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-58qg9"] Feb 19 03:38:47.818377 master-0 kubenswrapper[33867]: I0219 03:38:47.818307 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-vwbwn"] Feb 19 03:38:47.821212 master-0 kubenswrapper[33867]: I0219 03:38:47.821170 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:38:47.829404 master-0 kubenswrapper[33867]: I0219 03:38:47.829358 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-vwbwn"] Feb 19 03:38:47.903587 master-0 kubenswrapper[33867]: I0219 03:38:47.903449 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-config\") pod \"dnsmasq-dns-6b98d7b55c-vwbwn\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:38:47.903587 master-0 kubenswrapper[33867]: I0219 03:38:47.903525 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvd6v\" (UniqueName: \"kubernetes.io/projected/b1659bdb-92e9-4f41-b10a-552e4a31af0b-kube-api-access-dvd6v\") pod \"dnsmasq-dns-6b98d7b55c-vwbwn\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:38:47.903882 master-0 kubenswrapper[33867]: I0219 03:38:47.903607 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-vwbwn\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:38:48.014280 master-0 kubenswrapper[33867]: I0219 03:38:48.013840 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-config\") pod \"dnsmasq-dns-6b98d7b55c-vwbwn\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:38:48.014280 master-0 kubenswrapper[33867]: I0219 03:38:48.013977 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvd6v\" (UniqueName: \"kubernetes.io/projected/b1659bdb-92e9-4f41-b10a-552e4a31af0b-kube-api-access-dvd6v\") pod \"dnsmasq-dns-6b98d7b55c-vwbwn\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:38:48.014280 master-0 kubenswrapper[33867]: I0219 03:38:48.014141 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-vwbwn\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:38:48.018590 master-0 kubenswrapper[33867]: I0219 03:38:48.017385 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-config\") pod \"dnsmasq-dns-6b98d7b55c-vwbwn\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:38:48.018590 master-0 kubenswrapper[33867]: I0219 03:38:48.017415 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-vwbwn\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:38:48.056160 master-0 kubenswrapper[33867]: I0219 03:38:48.056109 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvd6v\" (UniqueName: \"kubernetes.io/projected/b1659bdb-92e9-4f41-b10a-552e4a31af0b-kube-api-access-dvd6v\") pod \"dnsmasq-dns-6b98d7b55c-vwbwn\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:38:48.195826 master-0 kubenswrapper[33867]: I0219 03:38:48.195688 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:38:48.457453 master-0 kubenswrapper[33867]: I0219 03:38:48.457332 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-vxzzp"] Feb 19 03:38:48.814118 master-0 kubenswrapper[33867]: I0219 03:38:48.814037 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-vwbwn"] Feb 19 03:38:49.698354 master-0 kubenswrapper[33867]: I0219 03:38:49.370601 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" event={"ID":"b1659bdb-92e9-4f41-b10a-552e4a31af0b","Type":"ContainerStarted","Data":"3170c022299bf9920cb8ecba5643b3416ce846336b659ab6b7964df845a6f282"} Feb 19 03:38:49.698354 master-0 kubenswrapper[33867]: I0219 03:38:49.373271 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" event={"ID":"5688ca74-8693-4449-87e8-62145a078d1c","Type":"ContainerStarted","Data":"6238490e83b44e37563b3d17a6e6eb925eec30d4233c88c3a261bd0ad6e8c4a3"} Feb 19 03:38:51.496896 master-0 kubenswrapper[33867]: I0219 03:38:51.488104 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 03:38:51.496896 master-0 kubenswrapper[33867]: I0219 03:38:51.489873 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.496896 master-0 kubenswrapper[33867]: I0219 03:38:51.492883 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 03:38:51.496896 master-0 kubenswrapper[33867]: I0219 03:38:51.493192 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 03:38:51.496896 master-0 kubenswrapper[33867]: I0219 03:38:51.493583 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 03:38:51.496896 master-0 kubenswrapper[33867]: I0219 03:38:51.494051 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 03:38:51.496896 master-0 kubenswrapper[33867]: I0219 03:38:51.494234 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 03:38:51.496896 master-0 kubenswrapper[33867]: I0219 03:38:51.494405 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 03:38:51.519718 master-0 kubenswrapper[33867]: I0219 03:38:51.517912 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.519718 master-0 kubenswrapper[33867]: I0219 03:38:51.518032 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-config-data\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.519718 master-0 kubenswrapper[33867]: I0219 03:38:51.518077 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.519718 master-0 kubenswrapper[33867]: I0219 03:38:51.518116 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsxcz\" (UniqueName: \"kubernetes.io/projected/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-kube-api-access-wsxcz\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.519718 master-0 kubenswrapper[33867]: I0219 03:38:51.518157 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.519718 master-0 kubenswrapper[33867]: I0219 03:38:51.518184 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 
19 03:38:51.519718 master-0 kubenswrapper[33867]: I0219 03:38:51.518214 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.519718 master-0 kubenswrapper[33867]: I0219 03:38:51.518335 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.519718 master-0 kubenswrapper[33867]: I0219 03:38:51.518391 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.519718 master-0 kubenswrapper[33867]: I0219 03:38:51.518415 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.519718 master-0 kubenswrapper[33867]: I0219 03:38:51.518442 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8870522-d83b-40a5-be67-194c409af521\" (UniqueName: \"kubernetes.io/csi/topolvm.io^cf0bb94a-a88b-4684-a022-69d8879cd0eb\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.557027 master-0 kubenswrapper[33867]: I0219 03:38:51.547488 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 03:38:51.638877 master-0 kubenswrapper[33867]: I0219 03:38:51.638802 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-config-data\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641471 master-0 kubenswrapper[33867]: I0219 03:38:51.639112 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641471 master-0 kubenswrapper[33867]: I0219 03:38:51.639195 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsxcz\" (UniqueName: \"kubernetes.io/projected/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-kube-api-access-wsxcz\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641471 master-0 kubenswrapper[33867]: I0219 03:38:51.639236 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641471 master-0 kubenswrapper[33867]: I0219 03:38:51.639273 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641471 master-0 kubenswrapper[33867]: I0219 03:38:51.639557 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641471 master-0 kubenswrapper[33867]: I0219 03:38:51.639599 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641471 master-0 kubenswrapper[33867]: I0219 03:38:51.639651 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641471 master-0 kubenswrapper[33867]: I0219 03:38:51.639675 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641471 master-0 kubenswrapper[33867]: I0219 03:38:51.639695 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8870522-d83b-40a5-be67-194c409af521\" (UniqueName: \"kubernetes.io/csi/topolvm.io^cf0bb94a-a88b-4684-a022-69d8879cd0eb\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641471 master-0 kubenswrapper[33867]: I0219 03:38:51.639761 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641471 master-0 kubenswrapper[33867]: I0219 03:38:51.640138 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641884 master-0 kubenswrapper[33867]: I0219 03:38:51.641705 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.641884 master-0 kubenswrapper[33867]: I0219 03:38:51.641784 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.642594 master-0 kubenswrapper[33867]: I0219 03:38:51.642359 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.643782 master-0 kubenswrapper[33867]: I0219 03:38:51.643713 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-config-data\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.656519 master-0 kubenswrapper[33867]: I0219 03:38:51.646942 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.656519 master-0 kubenswrapper[33867]: I0219 03:38:51.647127 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.671953 master-0 kubenswrapper[33867]: I0219 03:38:51.662310 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 03:38:51.671953 master-0 kubenswrapper[33867]: I0219 03:38:51.662369 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8870522-d83b-40a5-be67-194c409af521\" (UniqueName: \"kubernetes.io/csi/topolvm.io^cf0bb94a-a88b-4684-a022-69d8879cd0eb\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/284a7cb67e4e6c1b388f28c1f5a8e8e8bd870054fd4f28589c8eef0d254f8780/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.672219 master-0 kubenswrapper[33867]: I0219 03:38:51.672158 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsxcz\" (UniqueName: \"kubernetes.io/projected/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-kube-api-access-wsxcz\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.699352 master-0 kubenswrapper[33867]: I0219 03:38:51.692369 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.715239 master-0 kubenswrapper[33867]: I0219 03:38:51.715189 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e764204-85e6-4bcf-bdd4-6c24e78d4e3b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:51.965817 master-0 kubenswrapper[33867]: I0219 03:38:51.965613 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 03:38:51.975281 master-0 kubenswrapper[33867]: I0219 03:38:51.969223 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 03:38:51.986190 master-0 kubenswrapper[33867]: I0219 03:38:51.976889 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 03:38:52.002729 master-0 kubenswrapper[33867]: I0219 03:38:52.001390 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 03:38:52.002729 master-0 kubenswrapper[33867]: I0219 03:38:52.001731 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 03:38:52.021609 master-0 kubenswrapper[33867]: I0219 03:38:52.021566 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 03:38:52.057910 master-0 kubenswrapper[33867]: I0219 03:38:52.057844 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgrj\" (UniqueName: \"kubernetes.io/projected/eb7d7589-8708-4f52-8e83-f9a47aeb438a-kube-api-access-rdgrj\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.058184 master-0 kubenswrapper[33867]: I0219 03:38:52.057992 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7d7589-8708-4f52-8e83-f9a47aeb438a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.058184 master-0 kubenswrapper[33867]: I0219 03:38:52.058105 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7d7589-8708-4f52-8e83-f9a47aeb438a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.058346 master-0 kubenswrapper[33867]: I0219 03:38:52.058193 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb7d7589-8708-4f52-8e83-f9a47aeb438a-kolla-config\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.058346 master-0 kubenswrapper[33867]: I0219 03:38:52.058225 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb7d7589-8708-4f52-8e83-f9a47aeb438a-config-data\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.160740 master-0 kubenswrapper[33867]: I0219 03:38:52.160684 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdgrj\" (UniqueName: \"kubernetes.io/projected/eb7d7589-8708-4f52-8e83-f9a47aeb438a-kube-api-access-rdgrj\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.160998 master-0 kubenswrapper[33867]: I0219 03:38:52.160818 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7d7589-8708-4f52-8e83-f9a47aeb438a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.160998 master-0 kubenswrapper[33867]: I0219 03:38:52.160914 33867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7d7589-8708-4f52-8e83-f9a47aeb438a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.161353 master-0 kubenswrapper[33867]: I0219 03:38:52.161284 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb7d7589-8708-4f52-8e83-f9a47aeb438a-kolla-config\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.161428 master-0 kubenswrapper[33867]: I0219 03:38:52.161376 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb7d7589-8708-4f52-8e83-f9a47aeb438a-config-data\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.162104 master-0 kubenswrapper[33867]: I0219 03:38:52.162050 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb7d7589-8708-4f52-8e83-f9a47aeb438a-kolla-config\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.162941 master-0 kubenswrapper[33867]: I0219 03:38:52.162894 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb7d7589-8708-4f52-8e83-f9a47aeb438a-config-data\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.179781 master-0 kubenswrapper[33867]: I0219 03:38:52.179730 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7d7589-8708-4f52-8e83-f9a47aeb438a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.180204 master-0 kubenswrapper[33867]: I0219 03:38:52.180163 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7d7589-8708-4f52-8e83-f9a47aeb438a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.189417 master-0 kubenswrapper[33867]: I0219 03:38:52.189316 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdgrj\" (UniqueName: \"kubernetes.io/projected/eb7d7589-8708-4f52-8e83-f9a47aeb438a-kube-api-access-rdgrj\") pod \"memcached-0\" (UID: \"eb7d7589-8708-4f52-8e83-f9a47aeb438a\") " pod="openstack/memcached-0" Feb 19 03:38:52.376614 master-0 kubenswrapper[33867]: I0219 03:38:52.376401 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 03:38:53.360532 master-0 kubenswrapper[33867]: I0219 03:38:53.360394 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 03:38:53.771041 master-0 kubenswrapper[33867]: I0219 03:38:53.770916 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8870522-d83b-40a5-be67-194c409af521\" (UniqueName: \"kubernetes.io/csi/topolvm.io^cf0bb94a-a88b-4684-a022-69d8879cd0eb\") pod \"rabbitmq-server-0\" (UID: \"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b\") " pod="openstack/rabbitmq-server-0" Feb 19 03:38:53.995987 master-0 kubenswrapper[33867]: I0219 03:38:53.981736 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 03:38:55.753232 master-0 kubenswrapper[33867]: I0219 03:38:55.745840 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 03:38:55.754525 master-0 kubenswrapper[33867]: I0219 03:38:55.754455 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:55.756915 master-0 kubenswrapper[33867]: I0219 03:38:55.756768 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 03:38:55.757162 master-0 kubenswrapper[33867]: I0219 03:38:55.757061 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 03:38:55.757586 master-0 kubenswrapper[33867]: I0219 03:38:55.757557 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 03:38:55.757791 master-0 kubenswrapper[33867]: I0219 03:38:55.757764 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 03:38:55.757956 master-0 kubenswrapper[33867]: I0219 03:38:55.757932 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 03:38:55.758361 master-0 kubenswrapper[33867]: I0219 03:38:55.758304 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 03:38:55.782297 master-0 kubenswrapper[33867]: I0219 03:38:55.782214 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 03:38:56.003359 master-0 kubenswrapper[33867]: I0219 03:38:55.998708 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f8f266ad-7296-44dc-b02c-cec2549d96ff\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c77a38f8-44f8-4233-90e8-a57846930ade\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.003359 master-0 kubenswrapper[33867]: I0219 03:38:55.998800 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d16fae78-0a83-4085-a9b5-896938c7d1b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.003359 master-0 kubenswrapper[33867]: I0219 03:38:55.998962 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/d16fae78-0a83-4085-a9b5-896938c7d1b3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.003359 master-0 kubenswrapper[33867]: I0219 03:38:56.000945 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d16fae78-0a83-4085-a9b5-896938c7d1b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.003359 master-0 kubenswrapper[33867]: I0219 03:38:56.001036 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.003359 master-0 kubenswrapper[33867]: I0219 03:38:56.001104 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d16fae78-0a83-4085-a9b5-896938c7d1b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.003359 master-0 kubenswrapper[33867]: I0219 03:38:56.001184 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.003359 master-0 kubenswrapper[33867]: I0219 03:38:56.001279 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.003359 master-0 kubenswrapper[33867]: I0219 03:38:56.001381 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57c2m\" (UniqueName: \"kubernetes.io/projected/d16fae78-0a83-4085-a9b5-896938c7d1b3-kube-api-access-57c2m\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.003359 master-0 kubenswrapper[33867]: I0219 03:38:56.001448 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d16fae78-0a83-4085-a9b5-896938c7d1b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.003359 master-0 kubenswrapper[33867]: I0219 03:38:56.001598 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.104516 master-0 
kubenswrapper[33867]: I0219 03:38:56.104429 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.104516 master-0 kubenswrapper[33867]: I0219 03:38:56.104522 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.104825 master-0 kubenswrapper[33867]: I0219 03:38:56.104574 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57c2m\" (UniqueName: \"kubernetes.io/projected/d16fae78-0a83-4085-a9b5-896938c7d1b3-kube-api-access-57c2m\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.104825 master-0 kubenswrapper[33867]: I0219 03:38:56.104603 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d16fae78-0a83-4085-a9b5-896938c7d1b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.104825 master-0 kubenswrapper[33867]: I0219 03:38:56.104723 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.104825 master-0 kubenswrapper[33867]: I0219 03:38:56.104775 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f8f266ad-7296-44dc-b02c-cec2549d96ff\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c77a38f8-44f8-4233-90e8-a57846930ade\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.104825 master-0 kubenswrapper[33867]: I0219 03:38:56.104804 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d16fae78-0a83-4085-a9b5-896938c7d1b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.105039 master-0 kubenswrapper[33867]: I0219 03:38:56.104879 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d16fae78-0a83-4085-a9b5-896938c7d1b3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.105039 master-0 kubenswrapper[33867]: I0219 03:38:56.104952 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d16fae78-0a83-4085-a9b5-896938c7d1b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 
03:38:56.105039 master-0 kubenswrapper[33867]: I0219 03:38:56.104993 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.105039 master-0 kubenswrapper[33867]: I0219 03:38:56.105019 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d16fae78-0a83-4085-a9b5-896938c7d1b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.105198 master-0 kubenswrapper[33867]: I0219 03:38:56.105077 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.105278 master-0 kubenswrapper[33867]: I0219 03:38:56.105199 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.106532 master-0 kubenswrapper[33867]: I0219 03:38:56.106487 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d16fae78-0a83-4085-a9b5-896938c7d1b3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.107025 master-0 kubenswrapper[33867]: I0219 03:38:56.106959 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d16fae78-0a83-4085-a9b5-896938c7d1b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.107089 master-0 kubenswrapper[33867]: I0219 03:38:56.107004 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d16fae78-0a83-4085-a9b5-896938c7d1b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.108967 master-0 kubenswrapper[33867]: I0219 03:38:56.108912 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 03:38:56.109055 master-0 kubenswrapper[33867]: I0219 03:38:56.108974 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f8f266ad-7296-44dc-b02c-cec2549d96ff\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c77a38f8-44f8-4233-90e8-a57846930ade\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1e849f9331f2a3983b7435eea2ca0a5b1129fac2097330451821a59cf9c887c7/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.110897 master-0 kubenswrapper[33867]: I0219 03:38:56.110831 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d16fae78-0a83-4085-a9b5-896938c7d1b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.112157 master-0 kubenswrapper[33867]: I0219 03:38:56.112112 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.113399 master-0 kubenswrapper[33867]: I0219 03:38:56.113352 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d16fae78-0a83-4085-a9b5-896938c7d1b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.113475 master-0 kubenswrapper[33867]: I0219 03:38:56.113448 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d16fae78-0a83-4085-a9b5-896938c7d1b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.265241 master-0 kubenswrapper[33867]: I0219 03:38:56.265124 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57c2m\" (UniqueName: \"kubernetes.io/projected/d16fae78-0a83-4085-a9b5-896938c7d1b3-kube-api-access-57c2m\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:56.342154 master-0 kubenswrapper[33867]: I0219 03:38:56.342081 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 03:38:56.344558 master-0 kubenswrapper[33867]: I0219 03:38:56.344522 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 03:38:56.347958 master-0 kubenswrapper[33867]: I0219 03:38:56.347908 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 03:38:56.350732 master-0 kubenswrapper[33867]: I0219 03:38:56.349679 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 03:38:56.350732 master-0 kubenswrapper[33867]: I0219 03:38:56.349954 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 03:38:56.355355 master-0 kubenswrapper[33867]: I0219 03:38:56.355166 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 03:38:56.512844 master-0 kubenswrapper[33867]: I0219 03:38:56.512757 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-34a8e5d4-881b-42a4-9872-a48d93f24687\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bee63d54-840a-4570-b0f3-8700b3a526a1\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.513113 master-0 kubenswrapper[33867]: I0219 03:38:56.512907 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.513113 master-0 kubenswrapper[33867]: I0219 03:38:56.512958 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.513113 master-0 kubenswrapper[33867]: I0219 03:38:56.512978 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.513113 master-0 kubenswrapper[33867]: I0219 03:38:56.512995 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.513312 master-0 kubenswrapper[33867]: I0219 03:38:56.513138 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.513373 master-0 kubenswrapper[33867]: I0219 03:38:56.513249 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.513413 master-0 kubenswrapper[33867]: I0219 03:38:56.513381 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwvd7\" (UniqueName: \"kubernetes.io/projected/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-kube-api-access-jwvd7\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.615185 master-0 kubenswrapper[33867]: I0219 03:38:56.615044 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.615185 master-0 kubenswrapper[33867]: I0219 03:38:56.615182 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.615528 master-0 kubenswrapper[33867]: I0219 03:38:56.615205 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.615528 master-0 kubenswrapper[33867]: I0219 03:38:56.615224 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.615528 master-0 kubenswrapper[33867]: I0219 03:38:56.615359 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.615690 master-0 kubenswrapper[33867]: I0219 03:38:56.615641 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.615730 master-0 kubenswrapper[33867]: I0219 03:38:56.615682 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.615792 master-0 kubenswrapper[33867]: I0219 03:38:56.615764 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwvd7\" (UniqueName: \"kubernetes.io/projected/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-kube-api-access-jwvd7\") pod \"openstack-galera-0\" (UID: 
\"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.616032 master-0 kubenswrapper[33867]: I0219 03:38:56.616005 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-34a8e5d4-881b-42a4-9872-a48d93f24687\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bee63d54-840a-4570-b0f3-8700b3a526a1\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.621814 master-0 kubenswrapper[33867]: I0219 03:38:56.616429 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.621814 master-0 kubenswrapper[33867]: I0219 03:38:56.616456 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.621814 master-0 kubenswrapper[33867]: I0219 03:38:56.617131 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.621814 master-0 kubenswrapper[33867]: I0219 03:38:56.618931 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 03:38:56.621814 master-0 kubenswrapper[33867]: I0219 03:38:56.618956 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-34a8e5d4-881b-42a4-9872-a48d93f24687\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bee63d54-840a-4570-b0f3-8700b3a526a1\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/bb7a9bdbc30c9b7789639b90b8807014a865f47dbc47f28099e16762d401e749/globalmount\"" pod="openstack/openstack-galera-0" Feb 19 03:38:56.622443 master-0 kubenswrapper[33867]: I0219 03:38:56.621896 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.637469 master-0 kubenswrapper[33867]: I0219 03:38:56.636936 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:56.637747 master-0 kubenswrapper[33867]: I0219 03:38:56.637550 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwvd7\" (UniqueName: \"kubernetes.io/projected/9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1-kube-api-access-jwvd7\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:38:57.647376 master-0 kubenswrapper[33867]: I0219 03:38:57.646852 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 03:38:57.649806 master-0 kubenswrapper[33867]: I0219 03:38:57.649743 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.655419 master-0 kubenswrapper[33867]: I0219 03:38:57.653417 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 03:38:57.658034 master-0 kubenswrapper[33867]: I0219 03:38:57.656603 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 03:38:57.658034 master-0 kubenswrapper[33867]: I0219 03:38:57.656828 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 03:38:57.695406 master-0 kubenswrapper[33867]: I0219 03:38:57.695337 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 03:38:57.795657 master-0 kubenswrapper[33867]: I0219 03:38:57.795567 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d2176305-52ee-4689-a5f6-1aea00a75d4f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.795657 master-0 kubenswrapper[33867]: I0219 03:38:57.795661 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2176305-52ee-4689-a5f6-1aea00a75d4f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.796124 master-0 kubenswrapper[33867]: I0219 03:38:57.796075 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2176305-52ee-4689-a5f6-1aea00a75d4f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.796201 master-0 kubenswrapper[33867]: I0219 03:38:57.796133 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2176305-52ee-4689-a5f6-1aea00a75d4f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.797440 master-0 kubenswrapper[33867]: I0219 03:38:57.797410 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d2176305-52ee-4689-a5f6-1aea00a75d4f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.797528 master-0 kubenswrapper[33867]: I0219 03:38:57.797451 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0976216d-ab11-467d-8e90-5a4d24ead25b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^87e66b24-f425-430b-84d1-524551539af4\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.797528 master-0 kubenswrapper[33867]: I0219 03:38:57.797477 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/d2176305-52ee-4689-a5f6-1aea00a75d4f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.797666 master-0 kubenswrapper[33867]: I0219 03:38:57.797641 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vmzl\" (UniqueName: \"kubernetes.io/projected/d2176305-52ee-4689-a5f6-1aea00a75d4f-kube-api-access-7vmzl\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.840012 master-0 kubenswrapper[33867]: W0219 03:38:57.839939 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb7d7589_8708_4f52_8e83_f9a47aeb438a.slice/crio-31520ceb277887d32e1f8f02e01cb35c4220485553afc957d37769163dd6e682 WatchSource:0}: Error finding container 31520ceb277887d32e1f8f02e01cb35c4220485553afc957d37769163dd6e682: Status 404 returned error can't find the container with id 31520ceb277887d32e1f8f02e01cb35c4220485553afc957d37769163dd6e682 Feb 19 03:38:57.900117 master-0 kubenswrapper[33867]: I0219 03:38:57.900003 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d2176305-52ee-4689-a5f6-1aea00a75d4f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.900117 master-0 kubenswrapper[33867]: I0219 03:38:57.900083 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2176305-52ee-4689-a5f6-1aea00a75d4f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.900378 master-0 kubenswrapper[33867]: I0219 03:38:57.900288 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2176305-52ee-4689-a5f6-1aea00a75d4f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.901922 master-0 kubenswrapper[33867]: I0219 03:38:57.900558 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2176305-52ee-4689-a5f6-1aea00a75d4f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.901922 master-0 kubenswrapper[33867]: I0219 03:38:57.900663 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d2176305-52ee-4689-a5f6-1aea00a75d4f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.901922 master-0 kubenswrapper[33867]: I0219 03:38:57.900703 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d2176305-52ee-4689-a5f6-1aea00a75d4f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " 
pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.901922 master-0 kubenswrapper[33867]: I0219 03:38:57.900737 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0976216d-ab11-467d-8e90-5a4d24ead25b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^87e66b24-f425-430b-84d1-524551539af4\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.901922 master-0 kubenswrapper[33867]: I0219 03:38:57.900764 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d2176305-52ee-4689-a5f6-1aea00a75d4f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.901922 master-0 kubenswrapper[33867]: I0219 03:38:57.900814 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vmzl\" (UniqueName: \"kubernetes.io/projected/d2176305-52ee-4689-a5f6-1aea00a75d4f-kube-api-access-7vmzl\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.901922 master-0 kubenswrapper[33867]: I0219 03:38:57.901702 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d2176305-52ee-4689-a5f6-1aea00a75d4f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.902557 master-0 kubenswrapper[33867]: I0219 03:38:57.902538 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2176305-52ee-4689-a5f6-1aea00a75d4f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.902842 master-0 kubenswrapper[33867]: I0219 03:38:57.902761 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 03:38:57.902842 master-0 kubenswrapper[33867]: I0219 03:38:57.902798 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0976216d-ab11-467d-8e90-5a4d24ead25b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^87e66b24-f425-430b-84d1-524551539af4\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/323ee6b9cff7cff688d19d3dee5e7906c6edc3cbfcba86808a73f91c8f4807a1/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.903547 master-0 kubenswrapper[33867]: I0219 03:38:57.903477 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d2176305-52ee-4689-a5f6-1aea00a75d4f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.904169 master-0 kubenswrapper[33867]: I0219 03:38:57.904089 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2176305-52ee-4689-a5f6-1aea00a75d4f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.905599 master-0 kubenswrapper[33867]: I0219 03:38:57.905539 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2176305-52ee-4689-a5f6-1aea00a75d4f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:57.924905 master-0 kubenswrapper[33867]: I0219 03:38:57.924749 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vmzl\" (UniqueName: \"kubernetes.io/projected/d2176305-52ee-4689-a5f6-1aea00a75d4f-kube-api-access-7vmzl\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:38:58.508681 master-0 kubenswrapper[33867]: I0219 03:38:58.508112 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb7d7589-8708-4f52-8e83-f9a47aeb438a","Type":"ContainerStarted","Data":"31520ceb277887d32e1f8f02e01cb35c4220485553afc957d37769163dd6e682"} Feb 19 03:38:58.670300 master-0 kubenswrapper[33867]: I0219 03:38:58.670227 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96jnp"] Feb 19 03:38:58.671842 master-0 kubenswrapper[33867]: I0219 03:38:58.671811 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.681007 master-0 kubenswrapper[33867]: I0219 03:38:58.680887 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 03:38:58.681385 master-0 kubenswrapper[33867]: I0219 03:38:58.681357 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 03:38:58.683336 master-0 kubenswrapper[33867]: I0219 03:38:58.682103 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f8f266ad-7296-44dc-b02c-cec2549d96ff\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c77a38f8-44f8-4233-90e8-a57846930ade\") pod \"rabbitmq-cell1-server-0\" (UID: \"d16fae78-0a83-4085-a9b5-896938c7d1b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:58.689948 master-0 kubenswrapper[33867]: I0219 03:38:58.689381 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96jnp"] Feb 19 03:38:58.732310 master-0 kubenswrapper[33867]: I0219 03:38:58.731122 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pfn5s"] Feb 19 03:38:58.744310 master-0 kubenswrapper[33867]: I0219 03:38:58.742663 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:58.761468 master-0 kubenswrapper[33867]: I0219 03:38:58.758486 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pfn5s"] Feb 19 03:38:58.786093 master-0 kubenswrapper[33867]: I0219 03:38:58.786045 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:38:58.824623 master-0 kubenswrapper[33867]: I0219 03:38:58.824567 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-scripts\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.826034 master-0 kubenswrapper[33867]: I0219 03:38:58.825977 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-combined-ca-bundle\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.826421 master-0 kubenswrapper[33867]: I0219 03:38:58.826393 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-var-run\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.826623 master-0 kubenswrapper[33867]: I0219 03:38:58.826596 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-var-run-ovn\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.827016 master-0 kubenswrapper[33867]: I0219 03:38:58.826991 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-var-log-ovn\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.827297 master-0 kubenswrapper[33867]: I0219 03:38:58.827278 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62k5x\" (UniqueName: \"kubernetes.io/projected/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-kube-api-access-62k5x\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.827628 master-0 kubenswrapper[33867]: I0219 03:38:58.827609 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-ovn-controller-tls-certs\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.930928 master-0 kubenswrapper[33867]: I0219 03:38:58.930839 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62k5x\" (UniqueName: \"kubernetes.io/projected/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-kube-api-access-62k5x\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.933235 master-0 kubenswrapper[33867]: I0219 03:38:58.932486 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-ovn-controller-tls-certs\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.933235 master-0 kubenswrapper[33867]: I0219 03:38:58.932642 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-var-log\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:58.933235 master-0 kubenswrapper[33867]: I0219 03:38:58.932755 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-var-run\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:58.933235 master-0 kubenswrapper[33867]: I0219 03:38:58.932803 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-scripts\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.933235 master-0 kubenswrapper[33867]: I0219 03:38:58.932830 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7799\" (UniqueName: \"kubernetes.io/projected/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-kube-api-access-x7799\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:58.933235 master-0 kubenswrapper[33867]: I0219 03:38:58.932926 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-scripts\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:58.933235 master-0 kubenswrapper[33867]: I0219 03:38:58.932966 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-combined-ca-bundle\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.935802 master-0 kubenswrapper[33867]: I0219 03:38:58.935681 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-var-run\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.935985 master-0 kubenswrapper[33867]: I0219 03:38:58.935799 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-var-run-ovn\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.935985 master-0 kubenswrapper[33867]: I0219 03:38:58.935966 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-etc-ovs\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:58.936180 master-0 kubenswrapper[33867]: I0219 03:38:58.936037 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-var-lib\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:58.936180 master-0 kubenswrapper[33867]: I0219 03:38:58.936126 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-var-log-ovn\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.936323 master-0 kubenswrapper[33867]: I0219 03:38:58.936239 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-var-run\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.936392 master-0 kubenswrapper[33867]: I0219 03:38:58.936305 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-var-run-ovn\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.937288 master-0 kubenswrapper[33867]: I0219 03:38:58.937032 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-var-log-ovn\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.937288 master-0 kubenswrapper[33867]: I0219 03:38:58.937186 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-scripts\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.939308 master-0 kubenswrapper[33867]: I0219 03:38:58.939239 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-combined-ca-bundle\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.944022 master-0 kubenswrapper[33867]: I0219 03:38:58.943955 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-ovn-controller-tls-certs\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:58.952280 master-0 kubenswrapper[33867]: I0219 03:38:58.952226 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62k5x\" (UniqueName: \"kubernetes.io/projected/c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd-kube-api-access-62k5x\") pod \"ovn-controller-96jnp\" (UID: \"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd\") " pod="openstack/ovn-controller-96jnp" Feb 19 03:38:59.044974 master-0 kubenswrapper[33867]: I0219 03:38:59.040715 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-var-lib\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.044974 master-0 kubenswrapper[33867]: I0219 03:38:59.041315 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-var-log\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.044974 master-0 kubenswrapper[33867]: I0219 03:38:59.041411 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-var-run\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.044974 master-0 kubenswrapper[33867]: I0219 03:38:59.041467 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7799\" (UniqueName: \"kubernetes.io/projected/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-kube-api-access-x7799\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.044974 master-0 kubenswrapper[33867]: I0219 03:38:59.041544 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-scripts\") pod 
\"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.044974 master-0 kubenswrapper[33867]: I0219 03:38:59.041758 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-etc-ovs\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.044974 master-0 kubenswrapper[33867]: I0219 03:38:59.042378 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-etc-ovs\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.044974 master-0 kubenswrapper[33867]: I0219 03:38:59.042558 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-var-lib\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.044974 master-0 kubenswrapper[33867]: I0219 03:38:59.042674 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-var-log\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.044974 master-0 kubenswrapper[33867]: I0219 03:38:59.042728 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-var-run\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.046887 master-0 kubenswrapper[33867]: I0219 03:38:59.046846 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96jnp" Feb 19 03:38:59.063974 master-0 kubenswrapper[33867]: I0219 03:38:59.063890 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-scripts\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.078588 master-0 kubenswrapper[33867]: I0219 03:38:59.078279 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7799\" (UniqueName: \"kubernetes.io/projected/8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0-kube-api-access-x7799\") pod \"ovn-controller-ovs-pfn5s\" (UID: \"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0\") " pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:38:59.087349 master-0 kubenswrapper[33867]: I0219 03:38:59.086969 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:39:00.307741 master-0 kubenswrapper[33867]: I0219 03:39:00.307655 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-34a8e5d4-881b-42a4-9872-a48d93f24687\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bee63d54-840a-4570-b0f3-8700b3a526a1\") pod \"openstack-galera-0\" (UID: \"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1\") " pod="openstack/openstack-galera-0" Feb 19 03:39:00.568937 master-0 kubenswrapper[33867]: I0219 03:39:00.568810 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 03:39:01.116544 master-0 kubenswrapper[33867]: I0219 03:39:01.116481 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 03:39:01.118971 master-0 kubenswrapper[33867]: I0219 03:39:01.118922 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.125506 master-0 kubenswrapper[33867]: I0219 03:39:01.125457 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 03:39:01.125668 master-0 kubenswrapper[33867]: I0219 03:39:01.125637 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 03:39:01.125750 master-0 kubenswrapper[33867]: I0219 03:39:01.125710 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 03:39:01.125829 master-0 kubenswrapper[33867]: I0219 03:39:01.125473 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 03:39:01.135618 master-0 kubenswrapper[33867]: I0219 03:39:01.131172 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 03:39:01.188850 master-0 kubenswrapper[33867]: I0219 03:39:01.188619 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79fcdad5-1265-4636-af92-ede5356e0f6a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.188850 master-0 kubenswrapper[33867]: I0219 03:39:01.188706 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79fcdad5-1265-4636-af92-ede5356e0f6a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.189210 master-0 kubenswrapper[33867]: I0219 03:39:01.188871 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8rc\" (UniqueName: \"kubernetes.io/projected/79fcdad5-1265-4636-af92-ede5356e0f6a-kube-api-access-kc8rc\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.189210 master-0 kubenswrapper[33867]: I0219 03:39:01.189161 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fcdad5-1265-4636-af92-ede5356e0f6a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.189329 master-0 
kubenswrapper[33867]: I0219 03:39:01.189313 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79fcdad5-1265-4636-af92-ede5356e0f6a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.189585 master-0 kubenswrapper[33867]: I0219 03:39:01.189546 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79fcdad5-1265-4636-af92-ede5356e0f6a-config\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.189687 master-0 kubenswrapper[33867]: I0219 03:39:01.189653 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79fcdad5-1265-4636-af92-ede5356e0f6a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.189751 master-0 kubenswrapper[33867]: I0219 03:39:01.189722 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e76c2b2b-38ea-4454-a5bc-f5eba7f7822e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a039e5dc-bad8-4145-9f78-91fd73381e35\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.292407 master-0 kubenswrapper[33867]: I0219 03:39:01.292323 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79fcdad5-1265-4636-af92-ede5356e0f6a-config\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.292720 master-0 kubenswrapper[33867]: I0219 03:39:01.292446 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79fcdad5-1265-4636-af92-ede5356e0f6a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.292720 master-0 kubenswrapper[33867]: I0219 03:39:01.292519 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e76c2b2b-38ea-4454-a5bc-f5eba7f7822e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a039e5dc-bad8-4145-9f78-91fd73381e35\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.292720 master-0 kubenswrapper[33867]: I0219 03:39:01.292594 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79fcdad5-1265-4636-af92-ede5356e0f6a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.292720 master-0 kubenswrapper[33867]: I0219 03:39:01.292627 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79fcdad5-1265-4636-af92-ede5356e0f6a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.293100 master-0 kubenswrapper[33867]: I0219 03:39:01.293010 33867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc8rc\" (UniqueName: \"kubernetes.io/projected/79fcdad5-1265-4636-af92-ede5356e0f6a-kube-api-access-kc8rc\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.293233 master-0 kubenswrapper[33867]: I0219 03:39:01.293211 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fcdad5-1265-4636-af92-ede5356e0f6a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.293498 master-0 kubenswrapper[33867]: I0219 03:39:01.293466 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79fcdad5-1265-4636-af92-ede5356e0f6a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.293591 master-0 kubenswrapper[33867]: I0219 03:39:01.293564 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79fcdad5-1265-4636-af92-ede5356e0f6a-config\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.293768 master-0 kubenswrapper[33867]: I0219 03:39:01.293693 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79fcdad5-1265-4636-af92-ede5356e0f6a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.294101 master-0 kubenswrapper[33867]: I0219 03:39:01.294067 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79fcdad5-1265-4636-af92-ede5356e0f6a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.295883 master-0 kubenswrapper[33867]: I0219 03:39:01.295854 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 03:39:01.295959 master-0 kubenswrapper[33867]: I0219 03:39:01.295888 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e76c2b2b-38ea-4454-a5bc-f5eba7f7822e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a039e5dc-bad8-4145-9f78-91fd73381e35\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a16b3d4a807b76c211143e30348bc6eb65bbc8ab233074499556d67a3e018f7e/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.298115 master-0 kubenswrapper[33867]: I0219 03:39:01.298055 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79fcdad5-1265-4636-af92-ede5356e0f6a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.298275 master-0 kubenswrapper[33867]: I0219 03:39:01.298129 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fcdad5-1265-4636-af92-ede5356e0f6a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.310846 master-0 kubenswrapper[33867]: I0219 03:39:01.310771 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79fcdad5-1265-4636-af92-ede5356e0f6a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.311687 master-0 kubenswrapper[33867]: I0219 03:39:01.311643 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc8rc\" (UniqueName: \"kubernetes.io/projected/79fcdad5-1265-4636-af92-ede5356e0f6a-kube-api-access-kc8rc\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:01.405229 master-0 kubenswrapper[33867]: I0219 03:39:01.405127 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0976216d-ab11-467d-8e90-5a4d24ead25b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^87e66b24-f425-430b-84d1-524551539af4\") pod \"openstack-cell1-galera-0\" (UID: \"d2176305-52ee-4689-a5f6-1aea00a75d4f\") " pod="openstack/openstack-cell1-galera-0" Feb 19 03:39:01.634246 master-0 kubenswrapper[33867]: I0219 03:39:01.634148 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 03:39:02.526655 master-0 kubenswrapper[33867]: I0219 03:39:02.526545 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 03:39:02.528436 master-0 kubenswrapper[33867]: I0219 03:39:02.528400 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.534582 master-0 kubenswrapper[33867]: I0219 03:39:02.534535 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 03:39:02.534904 master-0 kubenswrapper[33867]: I0219 03:39:02.534796 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 03:39:02.535916 master-0 kubenswrapper[33867]: I0219 03:39:02.535898 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 03:39:02.620154 master-0 kubenswrapper[33867]: I0219 03:39:02.620009 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 03:39:02.743007 master-0 kubenswrapper[33867]: I0219 03:39:02.742925 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e76c2b2b-38ea-4454-a5bc-f5eba7f7822e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a039e5dc-bad8-4145-9f78-91fd73381e35\") pod \"ovsdbserver-nb-0\" (UID: \"79fcdad5-1265-4636-af92-ede5356e0f6a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:02.829602 master-0 kubenswrapper[33867]: I0219 03:39:02.829441 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4dea9bec-6b7f-4852-8aa2-13c0f5a5c45c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b8e43cb7-aa38-4649-90b5-012a8b1554e4\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.829602 master-0 kubenswrapper[33867]: I0219 03:39:02.829509 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467115dc-5bd5-496c-87cb-a0c278e45a72-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.829602 master-0 kubenswrapper[33867]: I0219 03:39:02.829562 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/467115dc-5bd5-496c-87cb-a0c278e45a72-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.829602 master-0 kubenswrapper[33867]: I0219 03:39:02.829611 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/467115dc-5bd5-496c-87cb-a0c278e45a72-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.830031 master-0 kubenswrapper[33867]: I0219 03:39:02.829631 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/467115dc-5bd5-496c-87cb-a0c278e45a72-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.830031 master-0 kubenswrapper[33867]: I0219 03:39:02.829660 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/467115dc-5bd5-496c-87cb-a0c278e45a72-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") 
" pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.830031 master-0 kubenswrapper[33867]: I0219 03:39:02.829708 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/467115dc-5bd5-496c-87cb-a0c278e45a72-config\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.830031 master-0 kubenswrapper[33867]: I0219 03:39:02.829756 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48v79\" (UniqueName: \"kubernetes.io/projected/467115dc-5bd5-496c-87cb-a0c278e45a72-kube-api-access-48v79\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.931831 master-0 kubenswrapper[33867]: I0219 03:39:02.931757 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467115dc-5bd5-496c-87cb-a0c278e45a72-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.931831 master-0 kubenswrapper[33867]: I0219 03:39:02.931846 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/467115dc-5bd5-496c-87cb-a0c278e45a72-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.932321 master-0 kubenswrapper[33867]: I0219 03:39:02.931892 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/467115dc-5bd5-496c-87cb-a0c278e45a72-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.932321 master-0 kubenswrapper[33867]: I0219 03:39:02.931914 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/467115dc-5bd5-496c-87cb-a0c278e45a72-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.932321 master-0 kubenswrapper[33867]: I0219 03:39:02.931939 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/467115dc-5bd5-496c-87cb-a0c278e45a72-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.932321 master-0 kubenswrapper[33867]: I0219 03:39:02.931977 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/467115dc-5bd5-496c-87cb-a0c278e45a72-config\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.932321 master-0 kubenswrapper[33867]: I0219 03:39:02.932013 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48v79\" (UniqueName: \"kubernetes.io/projected/467115dc-5bd5-496c-87cb-a0c278e45a72-kube-api-access-48v79\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.933772 master-0 kubenswrapper[33867]: 
I0219 03:39:02.933711 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/467115dc-5bd5-496c-87cb-a0c278e45a72-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.935650 master-0 kubenswrapper[33867]: I0219 03:39:02.934881 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/467115dc-5bd5-496c-87cb-a0c278e45a72-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.935650 master-0 kubenswrapper[33867]: I0219 03:39:02.935146 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/467115dc-5bd5-496c-87cb-a0c278e45a72-config\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.936243 master-0 kubenswrapper[33867]: I0219 03:39:02.936208 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467115dc-5bd5-496c-87cb-a0c278e45a72-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.936413 master-0 kubenswrapper[33867]: I0219 03:39:02.936378 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/467115dc-5bd5-496c-87cb-a0c278e45a72-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.936834 master-0 kubenswrapper[33867]: I0219 03:39:02.936803 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/467115dc-5bd5-496c-87cb-a0c278e45a72-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:02.972151 master-0 kubenswrapper[33867]: I0219 03:39:02.972092 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:03.239053 master-0 kubenswrapper[33867]: I0219 03:39:03.238922 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4dea9bec-6b7f-4852-8aa2-13c0f5a5c45c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b8e43cb7-aa38-4649-90b5-012a8b1554e4\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:03.240948 master-0 kubenswrapper[33867]: I0219 03:39:03.240905 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 03:39:03.240948 master-0 kubenswrapper[33867]: I0219 03:39:03.240942 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4dea9bec-6b7f-4852-8aa2-13c0f5a5c45c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b8e43cb7-aa38-4649-90b5-012a8b1554e4\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/06dc97518a08c9af8dc032f9abe5f7fed6440f11817a75151fdc1eb2569527b7/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:03.349477 master-0 kubenswrapper[33867]: I0219 03:39:03.349407 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48v79\" (UniqueName: \"kubernetes.io/projected/467115dc-5bd5-496c-87cb-a0c278e45a72-kube-api-access-48v79\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:04.633371 master-0 kubenswrapper[33867]: I0219 03:39:04.632098 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4dea9bec-6b7f-4852-8aa2-13c0f5a5c45c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b8e43cb7-aa38-4649-90b5-012a8b1554e4\") pod \"ovsdbserver-sb-0\" (UID: \"467115dc-5bd5-496c-87cb-a0c278e45a72\") " pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:04.971438 master-0 kubenswrapper[33867]: I0219 03:39:04.971320 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:08.609883 master-0 kubenswrapper[33867]: I0219 03:39:08.609716 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96jnp"] Feb 19 03:39:09.675910 master-0 kubenswrapper[33867]: I0219 03:39:09.675827 33867 generic.go:334] "Generic (PLEG): container finished" podID="b1659bdb-92e9-4f41-b10a-552e4a31af0b" containerID="a11e1ae39fd27f1daa3cc654075b0a863b10b3acbfd8f5961b70ee27d775355a" exitCode=0 Feb 19 03:39:09.675910 master-0 kubenswrapper[33867]: I0219 03:39:09.675901 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" event={"ID":"b1659bdb-92e9-4f41-b10a-552e4a31af0b","Type":"ContainerDied","Data":"a11e1ae39fd27f1daa3cc654075b0a863b10b3acbfd8f5961b70ee27d775355a"} Feb 19 03:39:09.679933 master-0 kubenswrapper[33867]: I0219 03:39:09.679605 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb7d7589-8708-4f52-8e83-f9a47aeb438a","Type":"ContainerStarted","Data":"fc5639e024a838c3631308af2b1aba0a96119d17c4c15a1b2a9386ff1cf936c3"} Feb 19 03:39:09.679933 master-0 kubenswrapper[33867]: I0219 03:39:09.679722 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 03:39:09.681914 master-0 kubenswrapper[33867]: I0219 03:39:09.681864 33867 generic.go:334] "Generic (PLEG): container finished" podID="5688ca74-8693-4449-87e8-62145a078d1c" containerID="5d0d4eaec42215a33572c4802f22e3c1023a43be9e0b1f3661067aa36eee47c6" exitCode=0 Feb 19 03:39:09.682308 master-0 kubenswrapper[33867]: I0219 03:39:09.681956 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" event={"ID":"5688ca74-8693-4449-87e8-62145a078d1c","Type":"ContainerDied","Data":"5d0d4eaec42215a33572c4802f22e3c1023a43be9e0b1f3661067aa36eee47c6"} Feb 19 03:39:09.689690 master-0 kubenswrapper[33867]: I0219 03:39:09.685171 33867 generic.go:334] "Generic (PLEG): container finished" podID="28dc950c-b6dc-4720-bac2-555217e06bb3" 
containerID="d0d9f385ac5afcc4494f0497e446c4bce06654f36a2740172e50f8dfc862494b" exitCode=0 Feb 19 03:39:09.689690 master-0 kubenswrapper[33867]: I0219 03:39:09.685308 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-58qg9" event={"ID":"28dc950c-b6dc-4720-bac2-555217e06bb3","Type":"ContainerDied","Data":"d0d9f385ac5afcc4494f0497e446c4bce06654f36a2740172e50f8dfc862494b"} Feb 19 03:39:09.695660 master-0 kubenswrapper[33867]: I0219 03:39:09.695443 33867 generic.go:334] "Generic (PLEG): container finished" podID="2e353061-bb90-4da1-b260-2a16e7d06a93" containerID="27fcb8231a1f342315f201539b7e04b940eaf9e6ea040c80551f72dd30495a90" exitCode=0 Feb 19 03:39:09.695660 master-0 kubenswrapper[33867]: I0219 03:39:09.695582 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" event={"ID":"2e353061-bb90-4da1-b260-2a16e7d06a93","Type":"ContainerDied","Data":"27fcb8231a1f342315f201539b7e04b940eaf9e6ea040c80551f72dd30495a90"} Feb 19 03:39:09.700957 master-0 kubenswrapper[33867]: I0219 03:39:09.700850 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96jnp" event={"ID":"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd","Type":"ContainerStarted","Data":"901eb061666b7889bca49a0d16c8a47c2e90730895af42cbe8eb63c06aba8590"} Feb 19 03:39:09.871787 master-0 kubenswrapper[33867]: I0219 03:39:09.854384 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 03:39:09.871787 master-0 kubenswrapper[33867]: I0219 03:39:09.871470 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 03:39:09.894239 master-0 kubenswrapper[33867]: I0219 03:39:09.892809 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=7.670931123 podStartE2EDuration="18.892788471s" podCreationTimestamp="2026-02-19 03:38:51 +0000 UTC" firstStartedPulling="2026-02-19 03:38:57.847457917 +0000 UTC m=+943.144128528" lastFinishedPulling="2026-02-19 03:39:09.069315265 +0000 UTC m=+954.365985876" observedRunningTime="2026-02-19 03:39:09.832536015 +0000 UTC m=+955.129206626" watchObservedRunningTime="2026-02-19 03:39:09.892788471 +0000 UTC m=+955.189459082" Feb 19 03:39:09.977856 master-0 kubenswrapper[33867]: E0219 03:39:09.977800 33867 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 19 03:39:09.977856 master-0 kubenswrapper[33867]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5688ca74-8693-4449-87e8-62145a078d1c/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 03:39:09.977856 master-0 kubenswrapper[33867]: > podSandboxID="6238490e83b44e37563b3d17a6e6eb925eec30d4233c88c3a261bd0ad6e8c4a3" Feb 19 03:39:09.978018 master-0 kubenswrapper[33867]: E0219 03:39:09.977997 33867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 03:39:09.978018 master-0 kubenswrapper[33867]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbchf8h696h5ffh5cdh585hc5hbfh597h58dhfh554h67bh9bh5c9hfch7dh5fbhbbh567h78h669hf8h65dh55dh588h5ddh88h694h669h95h8q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6j47r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5bcd98d69f-vxzzp_openstack(5688ca74-8693-4449-87e8-62145a078d1c): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5688ca74-8693-4449-87e8-62145a078d1c/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 03:39:09.978018 master-0 kubenswrapper[33867]: > logger="UnhandledError" Feb 19 03:39:09.979356 master-0 kubenswrapper[33867]: E0219 03:39:09.979307 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5688ca74-8693-4449-87e8-62145a078d1c/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" podUID="5688ca74-8693-4449-87e8-62145a078d1c" Feb 19 03:39:10.117684 master-0 kubenswrapper[33867]: W0219 03:39:10.117626 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cbb39c7_0295_4f1e_99d1_d9bea8ea45a0.slice/crio-7e430344ae76fda938db1373355ad7b3c3395cb665f575b067f5ad50e760dab3 WatchSource:0}: Error finding container 7e430344ae76fda938db1373355ad7b3c3395cb665f575b067f5ad50e760dab3: Status 404 returned error can't find the container with 
id 7e430344ae76fda938db1373355ad7b3c3395cb665f575b067f5ad50e760dab3 Feb 19 03:39:10.142640 master-0 kubenswrapper[33867]: I0219 03:39:10.142544 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pfn5s"] Feb 19 03:39:10.333368 master-0 kubenswrapper[33867]: I0219 03:39:10.333286 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 03:39:10.528314 master-0 kubenswrapper[33867]: I0219 03:39:10.528233 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 03:39:10.574586 master-0 kubenswrapper[33867]: I0219 03:39:10.574474 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 03:39:10.626383 master-0 kubenswrapper[33867]: I0219 03:39:10.626334 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 03:39:10.663726 master-0 kubenswrapper[33867]: I0219 03:39:10.663667 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" Feb 19 03:39:10.678777 master-0 kubenswrapper[33867]: I0219 03:39:10.678708 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:39:10.755281 master-0 kubenswrapper[33867]: I0219 03:39:10.755191 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1","Type":"ContainerStarted","Data":"c0a0ded4c2a5f179f1001828ba02cd2054d6fae1e5710da4070969c8252ac0bc"} Feb 19 03:39:10.757051 master-0 kubenswrapper[33867]: I0219 03:39:10.757023 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" event={"ID":"2e353061-bb90-4da1-b260-2a16e7d06a93","Type":"ContainerDied","Data":"5ac2d13ee43d24d4d123baac270d4525bdcd1a08eb7eacaa55423d5e851484cd"} Feb 19 03:39:10.757126 master-0 kubenswrapper[33867]: I0219 03:39:10.757065 33867 scope.go:117] "RemoveContainer" containerID="27fcb8231a1f342315f201539b7e04b940eaf9e6ea040c80551f72dd30495a90" Feb 19 03:39:10.757216 master-0 kubenswrapper[33867]: I0219 03:39:10.757183 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-clxsg" Feb 19 03:39:10.761612 master-0 kubenswrapper[33867]: I0219 03:39:10.761562 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"467115dc-5bd5-496c-87cb-a0c278e45a72","Type":"ContainerStarted","Data":"8a7f2320d2b0768fc602a031e4be6d6e7708dae1329777286c3e96a596915d49"} Feb 19 03:39:10.763887 master-0 kubenswrapper[33867]: I0219 03:39:10.763801 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfn5s" event={"ID":"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0","Type":"ContainerStarted","Data":"7e430344ae76fda938db1373355ad7b3c3395cb665f575b067f5ad50e760dab3"} Feb 19 03:39:10.767289 master-0 kubenswrapper[33867]: I0219 03:39:10.767211 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" event={"ID":"b1659bdb-92e9-4f41-b10a-552e4a31af0b","Type":"ContainerStarted","Data":"f9ffa9944e9cd1e270175a10b48f17edbdd5d250201c85025190a56fb1817218"} Feb 19 03:39:10.767472 master-0 kubenswrapper[33867]: I0219 03:39:10.767404 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:39:10.771544 master-0 kubenswrapper[33867]: I0219 03:39:10.771465 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d16fae78-0a83-4085-a9b5-896938c7d1b3","Type":"ContainerStarted","Data":"fba57d8524f224b323508714faea9eb1eac6994e35758d703d1f7a3d481fdf93"} Feb 19 03:39:10.777027 master-0 kubenswrapper[33867]: I0219 03:39:10.776970 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-58qg9" Feb 19 03:39:10.778215 master-0 kubenswrapper[33867]: I0219 03:39:10.777745 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-58qg9" event={"ID":"28dc950c-b6dc-4720-bac2-555217e06bb3","Type":"ContainerDied","Data":"851217c182751b4096b27b73b7ada47d387bcc65ff6c429969bb49cd0c92338e"} Feb 19 03:39:10.778473 master-0 kubenswrapper[33867]: I0219 03:39:10.778284 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-dns-svc\") pod \"28dc950c-b6dc-4720-bac2-555217e06bb3\" (UID: \"28dc950c-b6dc-4720-bac2-555217e06bb3\") " Feb 19 03:39:10.778473 master-0 kubenswrapper[33867]: I0219 03:39:10.778445 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9r9q\" (UniqueName: \"kubernetes.io/projected/2e353061-bb90-4da1-b260-2a16e7d06a93-kube-api-access-q9r9q\") pod \"2e353061-bb90-4da1-b260-2a16e7d06a93\" (UID: \"2e353061-bb90-4da1-b260-2a16e7d06a93\") " Feb 19 03:39:10.778565 master-0 kubenswrapper[33867]: I0219 03:39:10.778539 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-config\") pod \"28dc950c-b6dc-4720-bac2-555217e06bb3\" (UID: \"28dc950c-b6dc-4720-bac2-555217e06bb3\") " Feb 19 03:39:10.778612 master-0 kubenswrapper[33867]: I0219 03:39:10.778588 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gckp9\" (UniqueName: \"kubernetes.io/projected/28dc950c-b6dc-4720-bac2-555217e06bb3-kube-api-access-gckp9\") pod \"28dc950c-b6dc-4720-bac2-555217e06bb3\" (UID: \"28dc950c-b6dc-4720-bac2-555217e06bb3\") " 
Feb 19 03:39:10.778658 master-0 kubenswrapper[33867]: I0219 03:39:10.778622 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e353061-bb90-4da1-b260-2a16e7d06a93-config\") pod \"2e353061-bb90-4da1-b260-2a16e7d06a93\" (UID: \"2e353061-bb90-4da1-b260-2a16e7d06a93\") " Feb 19 03:39:10.783772 master-0 kubenswrapper[33867]: I0219 03:39:10.783709 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b","Type":"ContainerStarted","Data":"2adc74e2856726f1f7fcd22c4881400e7c6a83d322190d412e9ab81b2c3f8b38"} Feb 19 03:39:10.784854 master-0 kubenswrapper[33867]: I0219 03:39:10.784790 33867 scope.go:117] "RemoveContainer" containerID="d0d9f385ac5afcc4494f0497e446c4bce06654f36a2740172e50f8dfc862494b" Feb 19 03:39:10.786144 master-0 kubenswrapper[33867]: I0219 03:39:10.786005 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"79fcdad5-1265-4636-af92-ede5356e0f6a","Type":"ContainerStarted","Data":"a2236bb0e182f8b99d4c59bffc85bed5b8d2013d3deed3ab1171f55497bcfa94"} Feb 19 03:39:10.789729 master-0 kubenswrapper[33867]: I0219 03:39:10.789651 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28dc950c-b6dc-4720-bac2-555217e06bb3-kube-api-access-gckp9" (OuterVolumeSpecName: "kube-api-access-gckp9") pod "28dc950c-b6dc-4720-bac2-555217e06bb3" (UID: "28dc950c-b6dc-4720-bac2-555217e06bb3"). InnerVolumeSpecName "kube-api-access-gckp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:10.797397 master-0 kubenswrapper[33867]: I0219 03:39:10.796669 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e353061-bb90-4da1-b260-2a16e7d06a93-kube-api-access-q9r9q" (OuterVolumeSpecName: "kube-api-access-q9r9q") pod "2e353061-bb90-4da1-b260-2a16e7d06a93" (UID: "2e353061-bb90-4da1-b260-2a16e7d06a93"). InnerVolumeSpecName "kube-api-access-q9r9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:10.797397 master-0 kubenswrapper[33867]: I0219 03:39:10.796860 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d2176305-52ee-4689-a5f6-1aea00a75d4f","Type":"ContainerStarted","Data":"754e49997a225b1d0c359ddf2ce1f7555399754c3fe61faaa62ac9a8f2fdac29"} Feb 19 03:39:10.802880 master-0 kubenswrapper[33867]: I0219 03:39:10.802796 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" podStartSLOduration=3.378326025 podStartE2EDuration="23.802770226s" podCreationTimestamp="2026-02-19 03:38:47 +0000 UTC" firstStartedPulling="2026-02-19 03:38:48.786052041 +0000 UTC m=+934.082722652" lastFinishedPulling="2026-02-19 03:39:09.210496242 +0000 UTC m=+954.507166853" observedRunningTime="2026-02-19 03:39:10.797527698 +0000 UTC m=+956.094198309" watchObservedRunningTime="2026-02-19 03:39:10.802770226 +0000 UTC m=+956.099440837" Feb 19 03:39:10.815887 master-0 kubenswrapper[33867]: I0219 03:39:10.815834 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-config" (OuterVolumeSpecName: "config") pod "28dc950c-b6dc-4720-bac2-555217e06bb3" (UID: "28dc950c-b6dc-4720-bac2-555217e06bb3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:10.822959 master-0 kubenswrapper[33867]: I0219 03:39:10.822904 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e353061-bb90-4da1-b260-2a16e7d06a93-config" (OuterVolumeSpecName: "config") pod "2e353061-bb90-4da1-b260-2a16e7d06a93" (UID: "2e353061-bb90-4da1-b260-2a16e7d06a93"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:10.831529 master-0 kubenswrapper[33867]: I0219 03:39:10.831364 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28dc950c-b6dc-4720-bac2-555217e06bb3" (UID: "28dc950c-b6dc-4720-bac2-555217e06bb3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:10.883236 master-0 kubenswrapper[33867]: I0219 03:39:10.883180 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:10.883236 master-0 kubenswrapper[33867]: I0219 03:39:10.883231 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gckp9\" (UniqueName: \"kubernetes.io/projected/28dc950c-b6dc-4720-bac2-555217e06bb3-kube-api-access-gckp9\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:10.883236 master-0 kubenswrapper[33867]: I0219 03:39:10.883247 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e353061-bb90-4da1-b260-2a16e7d06a93-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:10.883236 master-0 kubenswrapper[33867]: I0219 03:39:10.883276 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28dc950c-b6dc-4720-bac2-555217e06bb3-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:10.883236 master-0 kubenswrapper[33867]: I0219 03:39:10.883290 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9r9q\" (UniqueName: \"kubernetes.io/projected/2e353061-bb90-4da1-b260-2a16e7d06a93-kube-api-access-q9r9q\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:11.013399 master-0 kubenswrapper[33867]: I0219 03:39:11.010761 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ghz27"] Feb 19 03:39:11.013399 master-0 kubenswrapper[33867]: E0219 03:39:11.011370 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28dc950c-b6dc-4720-bac2-555217e06bb3" containerName="init" Feb 19 03:39:11.013399 master-0 kubenswrapper[33867]: I0219 03:39:11.011387 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="28dc950c-b6dc-4720-bac2-555217e06bb3" containerName="init" Feb 19 03:39:11.013399 master-0 kubenswrapper[33867]: E0219 03:39:11.011414 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e353061-bb90-4da1-b260-2a16e7d06a93" containerName="init" Feb 19 03:39:11.013399 master-0 kubenswrapper[33867]: I0219 03:39:11.011422 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e353061-bb90-4da1-b260-2a16e7d06a93" containerName="init" Feb 19 03:39:11.013399 master-0 kubenswrapper[33867]: I0219 03:39:11.011727 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e353061-bb90-4da1-b260-2a16e7d06a93" containerName="init" Feb 19 03:39:11.013399 
master-0 kubenswrapper[33867]: I0219 03:39:11.011756 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="28dc950c-b6dc-4720-bac2-555217e06bb3" containerName="init" Feb 19 03:39:11.013399 master-0 kubenswrapper[33867]: I0219 03:39:11.012705 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.019271 master-0 kubenswrapper[33867]: I0219 03:39:11.019204 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 03:39:11.041759 master-0 kubenswrapper[33867]: I0219 03:39:11.041689 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ghz27"] Feb 19 03:39:11.098695 master-0 kubenswrapper[33867]: I0219 03:39:11.095819 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-ovs-rundir\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.098695 master-0 kubenswrapper[33867]: I0219 03:39:11.095933 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-config\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.098695 master-0 kubenswrapper[33867]: I0219 03:39:11.096021 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-combined-ca-bundle\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.098695 master-0 kubenswrapper[33867]: I0219 03:39:11.096230 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24s7\" (UniqueName: \"kubernetes.io/projected/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-kube-api-access-m24s7\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.098695 master-0 kubenswrapper[33867]: I0219 03:39:11.096354 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-ovn-rundir\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.098695 master-0 kubenswrapper[33867]: I0219 03:39:11.096593 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.199383 master-0 kubenswrapper[33867]: I0219 03:39:11.198423 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m24s7\" (UniqueName: 
\"kubernetes.io/projected/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-kube-api-access-m24s7\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.199383 master-0 kubenswrapper[33867]: I0219 03:39:11.198520 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-ovn-rundir\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.210739 master-0 kubenswrapper[33867]: I0219 03:39:11.199244 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-ovn-rundir\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.210739 master-0 kubenswrapper[33867]: I0219 03:39:11.209250 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.210739 master-0 kubenswrapper[33867]: I0219 03:39:11.209574 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-ovs-rundir\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.210739 master-0 kubenswrapper[33867]: I0219 03:39:11.209633 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-config\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.210739 master-0 kubenswrapper[33867]: I0219 03:39:11.209698 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-combined-ca-bundle\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.211304 master-0 kubenswrapper[33867]: I0219 03:39:11.211233 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-ovs-rundir\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.228613 master-0 kubenswrapper[33867]: I0219 03:39:11.212211 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-config\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.255139 master-0 kubenswrapper[33867]: I0219 03:39:11.247106 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-combined-ca-bundle\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.269408 master-0 kubenswrapper[33867]: I0219 03:39:11.269023 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.270123 master-0 kubenswrapper[33867]: I0219 03:39:11.270083 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m24s7\" (UniqueName: \"kubernetes.io/projected/bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e-kube-api-access-m24s7\") pod \"ovn-controller-metrics-ghz27\" (UID: \"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e\") " pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.293646 master-0 kubenswrapper[33867]: I0219 03:39:11.291724 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-vxzzp"] Feb 19 03:39:11.323923 master-0 kubenswrapper[33867]: I0219 03:39:11.322944 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-clxsg"] Feb 19 03:39:11.384014 master-0 kubenswrapper[33867]: I0219 03:39:11.383852 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-clxsg"] Feb 19 03:39:11.431691 master-0 kubenswrapper[33867]: I0219 03:39:11.431393 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-tkr48"] Feb 19 03:39:11.434361 master-0 kubenswrapper[33867]: I0219 03:39:11.434319 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.454348 master-0 kubenswrapper[33867]: I0219 03:39:11.454250 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 03:39:11.462573 master-0 kubenswrapper[33867]: I0219 03:39:11.462531 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-ghz27" Feb 19 03:39:11.467598 master-0 kubenswrapper[33867]: I0219 03:39:11.467531 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-58qg9"] Feb 19 03:39:11.476930 master-0 kubenswrapper[33867]: I0219 03:39:11.476881 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-58qg9"] Feb 19 03:39:11.488571 master-0 kubenswrapper[33867]: I0219 03:39:11.488484 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-tkr48"] Feb 19 03:39:11.523008 master-0 kubenswrapper[33867]: I0219 03:39:11.503749 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-vwbwn"] Feb 19 03:39:11.535977 master-0 kubenswrapper[33867]: I0219 03:39:11.535892 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgq5x\" (UniqueName: \"kubernetes.io/projected/f979b596-ca78-48f5-9293-10a51736d202-kube-api-access-wgq5x\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.536219 master-0 kubenswrapper[33867]: I0219 03:39:11.536030 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.536219 master-0 kubenswrapper[33867]: I0219 03:39:11.536094 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-config\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.536219 master-0 kubenswrapper[33867]: I0219 03:39:11.536183 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.562287 master-0 kubenswrapper[33867]: I0219 03:39:11.562040 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-xt4j5"] Feb 19 03:39:11.564609 master-0 kubenswrapper[33867]: I0219 03:39:11.564562 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.574273 master-0 kubenswrapper[33867]: I0219 03:39:11.572807 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 03:39:11.594402 master-0 kubenswrapper[33867]: I0219 03:39:11.594275 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-xt4j5"] Feb 19 03:39:11.641114 master-0 kubenswrapper[33867]: I0219 03:39:11.638401 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.641114 master-0 kubenswrapper[33867]: I0219 03:39:11.638939 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.641114 master-0 kubenswrapper[33867]: I0219 03:39:11.639021 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr2js\" (UniqueName: \"kubernetes.io/projected/501bdb41-a315-4ed1-a41d-e51831b35ce0-kube-api-access-fr2js\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.641114 master-0 kubenswrapper[33867]: I0219 03:39:11.639430 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgq5x\" (UniqueName: \"kubernetes.io/projected/f979b596-ca78-48f5-9293-10a51736d202-kube-api-access-wgq5x\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.641114 master-0 kubenswrapper[33867]: I0219 03:39:11.639611 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-config\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.641114 master-0 kubenswrapper[33867]: I0219 03:39:11.639673 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.641114 master-0 kubenswrapper[33867]: I0219 03:39:11.639707 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.641114 master-0 kubenswrapper[33867]: I0219 03:39:11.639738 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-config\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.641114 master-0 kubenswrapper[33867]: I0219 03:39:11.639802 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.641114 master-0 kubenswrapper[33867]: I0219 03:39:11.641043 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.642918 master-0 kubenswrapper[33867]: I0219 03:39:11.642854 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.645803 master-0 kubenswrapper[33867]: I0219 03:39:11.645737 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-config\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.698702 master-0 kubenswrapper[33867]: I0219 03:39:11.698249 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgq5x\" (UniqueName: \"kubernetes.io/projected/f979b596-ca78-48f5-9293-10a51736d202-kube-api-access-wgq5x\") pod \"dnsmasq-dns-7c8cfc46bf-tkr48\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.743594 master-0 kubenswrapper[33867]: I0219 03:39:11.741903 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.743594 master-0 kubenswrapper[33867]: I0219 03:39:11.742062 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.743594 master-0 kubenswrapper[33867]: I0219 03:39:11.742124 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr2js\" (UniqueName: \"kubernetes.io/projected/501bdb41-a315-4ed1-a41d-e51831b35ce0-kube-api-access-fr2js\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.743594 master-0 kubenswrapper[33867]: I0219 03:39:11.742185 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-config\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.743594 master-0 kubenswrapper[33867]: I0219 03:39:11.742224 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.743594 master-0 kubenswrapper[33867]: I0219 03:39:11.743521 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.744046 master-0 kubenswrapper[33867]: I0219 03:39:11.743836 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.745994 master-0 kubenswrapper[33867]: I0219 03:39:11.745906 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.746480 master-0 kubenswrapper[33867]: I0219 03:39:11.746221 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-config\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.764419 master-0 kubenswrapper[33867]: I0219 03:39:11.764355 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr2js\" (UniqueName: \"kubernetes.io/projected/501bdb41-a315-4ed1-a41d-e51831b35ce0-kube-api-access-fr2js\") pod \"dnsmasq-dns-7b9694dd79-xt4j5\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:11.782293 master-0 kubenswrapper[33867]: I0219 03:39:11.781626 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:11.826884 master-0 kubenswrapper[33867]: I0219 03:39:11.826826 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" event={"ID":"5688ca74-8693-4449-87e8-62145a078d1c","Type":"ContainerStarted","Data":"056d94afc3f5182dc1cf9d83c45f45a8a5bd3a1543cf41e0761a58f3fa527f5c"} Feb 19 03:39:11.827101 master-0 kubenswrapper[33867]: I0219 03:39:11.827000 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" podUID="5688ca74-8693-4449-87e8-62145a078d1c" containerName="dnsmasq-dns" containerID="cri-o://056d94afc3f5182dc1cf9d83c45f45a8a5bd3a1543cf41e0761a58f3fa527f5c" gracePeriod=10 Feb 19 03:39:11.827150 master-0 kubenswrapper[33867]: I0219 03:39:11.827097 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:39:11.861987 master-0 kubenswrapper[33867]: I0219 03:39:11.861573 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" podStartSLOduration=4.273052148 podStartE2EDuration="24.861552424s" podCreationTimestamp="2026-02-19 03:38:47 +0000 UTC" firstStartedPulling="2026-02-19 03:38:48.444567982 +0000 UTC m=+933.741238593" lastFinishedPulling="2026-02-19 03:39:09.033068258 +0000 UTC m=+954.329738869" observedRunningTime="2026-02-19 03:39:11.85187429 +0000 UTC m=+957.148544911" watchObservedRunningTime="2026-02-19 03:39:11.861552424 +0000 UTC m=+957.158223035" Feb 19 03:39:11.905321 master-0 kubenswrapper[33867]: I0219 03:39:11.905131 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:12.098292 master-0 kubenswrapper[33867]: W0219 03:39:12.076956 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbab2bac_a2eb_4080_b1d7_bf9eb49dde8e.slice/crio-9265c4f1e7276d8f2ca3ce76cf06794440bf92d5cb6cfbd0da52a17859da6736 WatchSource:0}: Error finding container 9265c4f1e7276d8f2ca3ce76cf06794440bf92d5cb6cfbd0da52a17859da6736: Status 404 returned error can't find the container with id 9265c4f1e7276d8f2ca3ce76cf06794440bf92d5cb6cfbd0da52a17859da6736 Feb 19 03:39:12.098292 master-0 kubenswrapper[33867]: I0219 03:39:12.079759 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ghz27"] Feb 19 03:39:12.498768 master-0 kubenswrapper[33867]: I0219 03:39:12.498636 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-tkr48"] Feb 19 03:39:12.777739 master-0 kubenswrapper[33867]: I0219 03:39:12.777672 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-xt4j5"] Feb 19 03:39:12.882758 master-0 kubenswrapper[33867]: I0219 03:39:12.882701 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ghz27" event={"ID":"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e","Type":"ContainerStarted","Data":"9265c4f1e7276d8f2ca3ce76cf06794440bf92d5cb6cfbd0da52a17859da6736"} Feb 19 03:39:12.886564 master-0 kubenswrapper[33867]: I0219 03:39:12.886502 33867 generic.go:334] "Generic (PLEG): container finished" podID="5688ca74-8693-4449-87e8-62145a078d1c" containerID="056d94afc3f5182dc1cf9d83c45f45a8a5bd3a1543cf41e0761a58f3fa527f5c" exitCode=0 Feb 19 03:39:12.886564 master-0 kubenswrapper[33867]: I0219 
03:39:12.886578 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" event={"ID":"5688ca74-8693-4449-87e8-62145a078d1c","Type":"ContainerDied","Data":"056d94afc3f5182dc1cf9d83c45f45a8a5bd3a1543cf41e0761a58f3fa527f5c"} Feb 19 03:39:12.887402 master-0 kubenswrapper[33867]: I0219 03:39:12.887086 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" podUID="b1659bdb-92e9-4f41-b10a-552e4a31af0b" containerName="dnsmasq-dns" containerID="cri-o://f9ffa9944e9cd1e270175a10b48f17edbdd5d250201c85025190a56fb1817218" gracePeriod=10 Feb 19 03:39:12.971633 master-0 kubenswrapper[33867]: I0219 03:39:12.971562 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28dc950c-b6dc-4720-bac2-555217e06bb3" path="/var/lib/kubelet/pods/28dc950c-b6dc-4720-bac2-555217e06bb3/volumes" Feb 19 03:39:12.972229 master-0 kubenswrapper[33867]: I0219 03:39:12.972195 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e353061-bb90-4da1-b260-2a16e7d06a93" path="/var/lib/kubelet/pods/2e353061-bb90-4da1-b260-2a16e7d06a93/volumes" Feb 19 03:39:13.905858 master-0 kubenswrapper[33867]: I0219 03:39:13.905792 33867 generic.go:334] "Generic (PLEG): container finished" podID="b1659bdb-92e9-4f41-b10a-552e4a31af0b" containerID="f9ffa9944e9cd1e270175a10b48f17edbdd5d250201c85025190a56fb1817218" exitCode=0 Feb 19 03:39:13.905858 master-0 kubenswrapper[33867]: I0219 03:39:13.905860 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" event={"ID":"b1659bdb-92e9-4f41-b10a-552e4a31af0b","Type":"ContainerDied","Data":"f9ffa9944e9cd1e270175a10b48f17edbdd5d250201c85025190a56fb1817218"} Feb 19 03:39:14.413414 master-0 kubenswrapper[33867]: I0219 03:39:14.413302 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:39:14.571293 master-0 kubenswrapper[33867]: I0219 03:39:14.570518 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-dns-svc\") pod \"5688ca74-8693-4449-87e8-62145a078d1c\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " Feb 19 03:39:14.571293 master-0 kubenswrapper[33867]: I0219 03:39:14.570610 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j47r\" (UniqueName: \"kubernetes.io/projected/5688ca74-8693-4449-87e8-62145a078d1c-kube-api-access-6j47r\") pod \"5688ca74-8693-4449-87e8-62145a078d1c\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " Feb 19 03:39:14.571293 master-0 kubenswrapper[33867]: I0219 03:39:14.570706 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-config\") pod \"5688ca74-8693-4449-87e8-62145a078d1c\" (UID: \"5688ca74-8693-4449-87e8-62145a078d1c\") " Feb 19 03:39:14.588810 master-0 kubenswrapper[33867]: I0219 03:39:14.588393 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5688ca74-8693-4449-87e8-62145a078d1c-kube-api-access-6j47r" (OuterVolumeSpecName: "kube-api-access-6j47r") pod "5688ca74-8693-4449-87e8-62145a078d1c" (UID: "5688ca74-8693-4449-87e8-62145a078d1c"). InnerVolumeSpecName "kube-api-access-6j47r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:14.647555 master-0 kubenswrapper[33867]: I0219 03:39:14.644433 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5688ca74-8693-4449-87e8-62145a078d1c" (UID: "5688ca74-8693-4449-87e8-62145a078d1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:14.672750 master-0 kubenswrapper[33867]: I0219 03:39:14.672687 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:14.672750 master-0 kubenswrapper[33867]: I0219 03:39:14.672738 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j47r\" (UniqueName: \"kubernetes.io/projected/5688ca74-8693-4449-87e8-62145a078d1c-kube-api-access-6j47r\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:14.686018 master-0 kubenswrapper[33867]: I0219 03:39:14.685627 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-config" (OuterVolumeSpecName: "config") pod "5688ca74-8693-4449-87e8-62145a078d1c" (UID: "5688ca74-8693-4449-87e8-62145a078d1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:14.759714 master-0 kubenswrapper[33867]: W0219 03:39:14.759432 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod501bdb41_a315_4ed1_a41d_e51831b35ce0.slice/crio-18ccc9c6628735832f436f4c079bf45c1e7ce86cdf9663ba070cf8c2ecd0725e WatchSource:0}: Error finding container 18ccc9c6628735832f436f4c079bf45c1e7ce86cdf9663ba070cf8c2ecd0725e: Status 404 returned error can't find the container with id 18ccc9c6628735832f436f4c079bf45c1e7ce86cdf9663ba070cf8c2ecd0725e Feb 19 03:39:14.774345 master-0 kubenswrapper[33867]: I0219 03:39:14.774286 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5688ca74-8693-4449-87e8-62145a078d1c-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:14.921636 master-0 kubenswrapper[33867]: I0219 03:39:14.921574 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" event={"ID":"501bdb41-a315-4ed1-a41d-e51831b35ce0","Type":"ContainerStarted","Data":"18ccc9c6628735832f436f4c079bf45c1e7ce86cdf9663ba070cf8c2ecd0725e"} Feb 19 03:39:14.924136 master-0 kubenswrapper[33867]: I0219 03:39:14.924100 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" event={"ID":"5688ca74-8693-4449-87e8-62145a078d1c","Type":"ContainerDied","Data":"6238490e83b44e37563b3d17a6e6eb925eec30d4233c88c3a261bd0ad6e8c4a3"} Feb 19 03:39:14.924210 master-0 kubenswrapper[33867]: I0219 03:39:14.924148 33867 scope.go:117] "RemoveContainer" containerID="056d94afc3f5182dc1cf9d83c45f45a8a5bd3a1543cf41e0761a58f3fa527f5c" Feb 19 03:39:14.924247 master-0 kubenswrapper[33867]: I0219 03:39:14.924199 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-vxzzp" Feb 19 03:39:14.994385 master-0 kubenswrapper[33867]: I0219 03:39:14.994118 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-vxzzp"] Feb 19 03:39:15.016447 master-0 kubenswrapper[33867]: I0219 03:39:15.016371 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-vxzzp"] Feb 19 03:39:15.875353 master-0 kubenswrapper[33867]: W0219 03:39:15.867828 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf979b596_ca78_48f5_9293_10a51736d202.slice/crio-242f80d572aa94e5f9f58233683d1bf3fc3cbbaadc1c02913c93801538671002 WatchSource:0}: Error finding container 242f80d572aa94e5f9f58233683d1bf3fc3cbbaadc1c02913c93801538671002: Status 404 returned error can't find the container with id 242f80d572aa94e5f9f58233683d1bf3fc3cbbaadc1c02913c93801538671002 Feb 19 03:39:15.903488 master-0 kubenswrapper[33867]: I0219 03:39:15.903451 33867 scope.go:117] "RemoveContainer" containerID="5d0d4eaec42215a33572c4802f22e3c1023a43be9e0b1f3661067aa36eee47c6" Feb 19 03:39:15.935332 master-0 kubenswrapper[33867]: I0219 03:39:15.935273 33867 trace.go:236] Trace[1777251246]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (19-Feb-2026 03:39:14.933) (total time: 1001ms): Feb 19 03:39:15.935332 master-0 kubenswrapper[33867]: Trace[1777251246]: [1.001540558s] [1.001540558s] END Feb 19 03:39:15.936903 master-0 kubenswrapper[33867]: I0219 03:39:15.936865 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" event={"ID":"f979b596-ca78-48f5-9293-10a51736d202","Type":"ContainerStarted","Data":"242f80d572aa94e5f9f58233683d1bf3fc3cbbaadc1c02913c93801538671002"} Feb 19 03:39:16.056729 master-0 kubenswrapper[33867]: I0219 03:39:16.056691 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:39:16.110732 master-0 kubenswrapper[33867]: I0219 03:39:16.110621 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-dns-svc\") pod \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " Feb 19 03:39:16.212611 master-0 kubenswrapper[33867]: I0219 03:39:16.212539 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-config\") pod \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " Feb 19 03:39:16.212814 master-0 kubenswrapper[33867]: I0219 03:39:16.212628 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvd6v\" (UniqueName: \"kubernetes.io/projected/b1659bdb-92e9-4f41-b10a-552e4a31af0b-kube-api-access-dvd6v\") pod \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\" (UID: \"b1659bdb-92e9-4f41-b10a-552e4a31af0b\") " Feb 19 03:39:16.214722 master-0 kubenswrapper[33867]: I0219 03:39:16.214679 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1659bdb-92e9-4f41-b10a-552e4a31af0b" (UID: "b1659bdb-92e9-4f41-b10a-552e4a31af0b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:16.218273 master-0 kubenswrapper[33867]: I0219 03:39:16.218219 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1659bdb-92e9-4f41-b10a-552e4a31af0b-kube-api-access-dvd6v" (OuterVolumeSpecName: "kube-api-access-dvd6v") pod "b1659bdb-92e9-4f41-b10a-552e4a31af0b" (UID: "b1659bdb-92e9-4f41-b10a-552e4a31af0b"). InnerVolumeSpecName "kube-api-access-dvd6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:16.272336 master-0 kubenswrapper[33867]: I0219 03:39:16.271731 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-config" (OuterVolumeSpecName: "config") pod "b1659bdb-92e9-4f41-b10a-552e4a31af0b" (UID: "b1659bdb-92e9-4f41-b10a-552e4a31af0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:16.316033 master-0 kubenswrapper[33867]: I0219 03:39:16.315977 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:16.316033 master-0 kubenswrapper[33867]: I0219 03:39:16.316035 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvd6v\" (UniqueName: \"kubernetes.io/projected/b1659bdb-92e9-4f41-b10a-552e4a31af0b-kube-api-access-dvd6v\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:16.316314 master-0 kubenswrapper[33867]: I0219 03:39:16.316050 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1659bdb-92e9-4f41-b10a-552e4a31af0b-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:16.949587 master-0 kubenswrapper[33867]: I0219 03:39:16.949512 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" event={"ID":"b1659bdb-92e9-4f41-b10a-552e4a31af0b","Type":"ContainerDied","Data":"3170c022299bf9920cb8ecba5643b3416ce846336b659ab6b7964df845a6f282"} Feb 19 03:39:16.950141 master-0 kubenswrapper[33867]: I0219 03:39:16.949632 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-vwbwn" Feb 19 03:39:16.985578 master-0 kubenswrapper[33867]: I0219 03:39:16.985510 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5688ca74-8693-4449-87e8-62145a078d1c" path="/var/lib/kubelet/pods/5688ca74-8693-4449-87e8-62145a078d1c/volumes" Feb 19 03:39:17.008859 master-0 kubenswrapper[33867]: I0219 03:39:17.008761 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-vwbwn"] Feb 19 03:39:17.017917 master-0 kubenswrapper[33867]: I0219 03:39:17.016929 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-vwbwn"] Feb 19 03:39:17.378086 master-0 kubenswrapper[33867]: I0219 03:39:17.378006 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 03:39:18.974001 master-0 kubenswrapper[33867]: I0219 03:39:18.973932 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1659bdb-92e9-4f41-b10a-552e4a31af0b" path="/var/lib/kubelet/pods/b1659bdb-92e9-4f41-b10a-552e4a31af0b/volumes" Feb 19 03:39:19.099648 master-0 kubenswrapper[33867]: I0219 03:39:19.098198 33867 scope.go:117] "RemoveContainer" containerID="f9ffa9944e9cd1e270175a10b48f17edbdd5d250201c85025190a56fb1817218" Feb 19 03:39:19.383681 master-0 kubenswrapper[33867]: I0219 03:39:19.383607 33867 scope.go:117] "RemoveContainer" containerID="a11e1ae39fd27f1daa3cc654075b0a863b10b3acbfd8f5961b70ee27d775355a" Feb 19 03:39:21.046279 master-0 kubenswrapper[33867]: I0219 03:39:21.045932 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96jnp" event={"ID":"c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd","Type":"ContainerStarted","Data":"ff85acc01774f1361f4b1a76d4fa5bcd941260d9e5b91c212d55afa04b84cfa2"} Feb 19 03:39:21.050272 master-0 kubenswrapper[33867]: I0219 03:39:21.047072 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-96jnp" Feb 19 03:39:21.054276 master-0 kubenswrapper[33867]: I0219 03:39:21.051943 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"467115dc-5bd5-496c-87cb-a0c278e45a72","Type":"ContainerStarted","Data":"09dd0b2cfd71cee7d0363a87585e6075826fc544b9a5c270e21dc9354ff8ce4b"} Feb 19 03:39:21.062299 master-0 kubenswrapper[33867]: I0219 03:39:21.060584 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfn5s" event={"ID":"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0","Type":"ContainerStarted","Data":"d8ce76dcc961e496f77780ef0bdcfafd821099fa0361d000f57ea4c6dabfe652"} Feb 19 03:39:21.083285 master-0 kubenswrapper[33867]: I0219 03:39:21.082501 33867 generic.go:334] "Generic (PLEG): container finished" podID="501bdb41-a315-4ed1-a41d-e51831b35ce0" containerID="526f3f0b7a44f2bc6ee50a735df32bf5bd3d72608d860923db01423e1f35e2a6" exitCode=0 Feb 19 03:39:21.083285 master-0 kubenswrapper[33867]: I0219 03:39:21.082582 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" event={"ID":"501bdb41-a315-4ed1-a41d-e51831b35ce0","Type":"ContainerDied","Data":"526f3f0b7a44f2bc6ee50a735df32bf5bd3d72608d860923db01423e1f35e2a6"} Feb 19 03:39:21.087276 master-0 kubenswrapper[33867]: I0219 03:39:21.083716 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"79fcdad5-1265-4636-af92-ede5356e0f6a","Type":"ContainerStarted","Data":"d285c0ecba16f0a853e18632b75bb9d8497fd5c74d7b28e692e90334498ca2da"} Feb 19 03:39:21.087276 master-0 kubenswrapper[33867]: I0219 03:39:21.084607 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ghz27" event={"ID":"bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e","Type":"ContainerStarted","Data":"52314a8ecd7e9b71ea8975231c3632f47d8d51453ffc0b29950c06204125bbae"} Feb 19 03:39:21.087276 master-0 kubenswrapper[33867]: I0219 03:39:21.086368 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d2176305-52ee-4689-a5f6-1aea00a75d4f","Type":"ContainerStarted","Data":"1920db3f91c0c37e17df35f2b7e99cc42cc07daa75c4701514d60902eba0c850"} Feb 19 03:39:21.106583 master-0 kubenswrapper[33867]: I0219 03:39:21.100660 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1","Type":"ContainerStarted","Data":"df754d1cd91b0c7bab78a64e9838441144b25de53d14f5af35cd64f56ff5727f"} Feb 19 03:39:21.140283 master-0 kubenswrapper[33867]: I0219 03:39:21.136470 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-96jnp" podStartSLOduration=13.571780901 podStartE2EDuration="23.136441746s" podCreationTimestamp="2026-02-19 03:38:58 +0000 UTC" firstStartedPulling="2026-02-19 03:39:08.963513609 +0000 UTC m=+954.260184220" lastFinishedPulling="2026-02-19 03:39:18.528174454 +0000 UTC m=+963.824845065" observedRunningTime="2026-02-19 03:39:21.081721247 +0000 UTC m=+966.378391858" watchObservedRunningTime="2026-02-19 03:39:21.136441746 +0000 UTC m=+966.433112357" Feb 19 03:39:21.140283 master-0 kubenswrapper[33867]: I0219 03:39:21.137981 33867 generic.go:334] "Generic (PLEG): container finished" podID="f979b596-ca78-48f5-9293-10a51736d202" containerID="db9cf904ef6356560db88317346b31d4f4d03ef5382d3929751348d92d784cf5" exitCode=0 Feb 19 03:39:21.140283 master-0 kubenswrapper[33867]: I0219 03:39:21.138014 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" event={"ID":"f979b596-ca78-48f5-9293-10a51736d202","Type":"ContainerDied","Data":"db9cf904ef6356560db88317346b31d4f4d03ef5382d3929751348d92d784cf5"} Feb 19 03:39:21.236573 master-0 kubenswrapper[33867]: I0219 03:39:21.236390 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ghz27" podStartSLOduration=3.826017498 podStartE2EDuration="11.236370476s" podCreationTimestamp="2026-02-19 03:39:10 +0000 UTC" firstStartedPulling="2026-02-19 03:39:12.089577381 +0000 UTC m=+957.386247992" lastFinishedPulling="2026-02-19 03:39:19.499930359 +0000 UTC m=+964.796600970" observedRunningTime="2026-02-19 03:39:21.235763018 +0000 UTC m=+966.532433619" watchObservedRunningTime="2026-02-19 03:39:21.236370476 +0000 UTC m=+966.533041087" Feb 19 03:39:22.154551 master-0 kubenswrapper[33867]: I0219 03:39:22.154486 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b","Type":"ContainerStarted","Data":"14d227c1daa3a5ad4bc81da11b350b9f4b380df91f4aeea0a0511804b126705b"} Feb 19 03:39:22.157441 master-0 kubenswrapper[33867]: I0219 03:39:22.157379 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"79fcdad5-1265-4636-af92-ede5356e0f6a","Type":"ContainerStarted","Data":"2f5fd61701c2a758379b5e6ebc8b9c80383d462e4f56f5bd47a1f3043d2db2af"} Feb 19 03:39:22.160521 master-0 kubenswrapper[33867]: I0219 03:39:22.160458 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"467115dc-5bd5-496c-87cb-a0c278e45a72","Type":"ContainerStarted","Data":"60a0b91c2aa343ea722d8b91736d8fd21a6d511c1d2f03d88e568ac1872b9fda"} Feb 19 03:39:22.163742 master-0 kubenswrapper[33867]: I0219 03:39:22.163716 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" event={"ID":"f979b596-ca78-48f5-9293-10a51736d202","Type":"ContainerStarted","Data":"a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6"} Feb 19 03:39:22.163890 master-0 kubenswrapper[33867]: I0219 03:39:22.163868 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:22.165933 master-0 kubenswrapper[33867]: I0219 03:39:22.165869 33867 generic.go:334] "Generic (PLEG): container finished" podID="8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0" containerID="d8ce76dcc961e496f77780ef0bdcfafd821099fa0361d000f57ea4c6dabfe652" exitCode=0 Feb 19 03:39:22.166314 master-0 kubenswrapper[33867]: I0219 03:39:22.165953 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfn5s" event={"ID":"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0","Type":"ContainerDied","Data":"d8ce76dcc961e496f77780ef0bdcfafd821099fa0361d000f57ea4c6dabfe652"} Feb 19 03:39:22.168475 master-0 kubenswrapper[33867]: I0219 03:39:22.168344 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d16fae78-0a83-4085-a9b5-896938c7d1b3","Type":"ContainerStarted","Data":"a244d1d6a2373213aaa4b7248f5173ce1d827aa0cde8130c6cf34da780377cb5"} Feb 19 03:39:22.177043 master-0 kubenswrapper[33867]: I0219 03:39:22.176979 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" event={"ID":"501bdb41-a315-4ed1-a41d-e51831b35ce0","Type":"ContainerStarted","Data":"257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8"} Feb 19 03:39:22.233291 master-0 kubenswrapper[33867]: I0219 03:39:22.231527 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" podStartSLOduration=11.231511431 podStartE2EDuration="11.231511431s" podCreationTimestamp="2026-02-19 03:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:39:22.230173814 +0000 UTC m=+967.526844425" watchObservedRunningTime="2026-02-19 03:39:22.231511431 +0000 UTC m=+967.528182042" Feb 19 03:39:22.307233 master-0 kubenswrapper[33867]: I0219 03:39:22.307101 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" podStartSLOduration=11.307067821 podStartE2EDuration="11.307067821s" podCreationTimestamp="2026-02-19 03:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:39:22.300590847 +0000 UTC m=+967.597261488" watchObservedRunningTime="2026-02-19 03:39:22.307067821 +0000 UTC m=+967.603738432" Feb 19 03:39:22.360297 master-0 kubenswrapper[33867]: I0219 03:39:22.360118 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.67070473 podStartE2EDuration="22.360089832s" podCreationTimestamp="2026-02-19 03:39:00 +0000 UTC" firstStartedPulling="2026-02-19 03:39:10.650135295 +0000 UTC m=+955.946805896" lastFinishedPulling="2026-02-19 03:39:19.339520387 +0000 UTC m=+964.636190998" observedRunningTime="2026-02-19 03:39:22.352923059 +0000 UTC m=+967.649593670" watchObservedRunningTime="2026-02-19 03:39:22.360089832 +0000 UTC m=+967.656760443" Feb 19 03:39:22.399078 master-0 kubenswrapper[33867]: I0219 03:39:22.398747 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.629861904 podStartE2EDuration="25.398708656s" podCreationTimestamp="2026-02-19 03:38:57 +0000 UTC" firstStartedPulling="2026-02-19 03:39:10.61746983 +0000 UTC m=+955.914140441" lastFinishedPulling="2026-02-19 03:39:19.386316562 +0000 UTC m=+964.682987193" observedRunningTime="2026-02-19 03:39:22.398011926 +0000 UTC m=+967.694682547" watchObservedRunningTime="2026-02-19 03:39:22.398708656 +0000 UTC m=+967.695379267" Feb 19 03:39:22.972637 master-0 kubenswrapper[33867]: I0219 03:39:22.972544 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:22.972637 master-0 kubenswrapper[33867]: I0219 03:39:22.972619 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:23.199842 master-0 kubenswrapper[33867]: I0219 03:39:23.199759 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfn5s" event={"ID":"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0","Type":"ContainerStarted","Data":"003a98163782d74ae89e0363beeffffcda6e763c2aea02d17d5a4fb832445fd9"} Feb 19 03:39:23.199842 master-0 kubenswrapper[33867]: I0219 03:39:23.199830 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfn5s" event={"ID":"8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0","Type":"ContainerStarted","Data":"d1799af6b4eeb0b6eec63aac700ee57f0c223806c7386350760198e584ccf3a4"} Feb 19 03:39:23.201286 master-0 kubenswrapper[33867]: I0219 03:39:23.201217 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:23.314579 master-0 kubenswrapper[33867]: I0219 03:39:23.314461 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-pfn5s" podStartSLOduration=16.334895156 podStartE2EDuration="25.314432814s" podCreationTimestamp="2026-02-19 03:38:58 +0000 UTC" firstStartedPulling="2026-02-19 03:39:10.120240691 +0000 UTC m=+955.416911302" lastFinishedPulling="2026-02-19 03:39:19.099778349 +0000 UTC m=+964.396448960" observedRunningTime="2026-02-19 03:39:23.31253333 +0000 UTC m=+968.609203961" watchObservedRunningTime="2026-02-19 03:39:23.314432814 +0000 UTC m=+968.611103425" Feb 19 03:39:23.972683 master-0 kubenswrapper[33867]: I0219 03:39:23.972582 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:24.026922 master-0 kubenswrapper[33867]: I0219 03:39:24.026863 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:24.088879 master-0 kubenswrapper[33867]: I0219 03:39:24.088778 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:39:24.089321 master-0 kubenswrapper[33867]: 
I0219 03:39:24.088897 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:39:24.985705 master-0 kubenswrapper[33867]: I0219 03:39:24.984688 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:25.110011 master-0 kubenswrapper[33867]: I0219 03:39:25.079901 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-xt4j5"] Feb 19 03:39:25.144838 master-0 kubenswrapper[33867]: I0219 03:39:25.144767 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-7zmsl"] Feb 19 03:39:25.145399 master-0 kubenswrapper[33867]: E0219 03:39:25.145362 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1659bdb-92e9-4f41-b10a-552e4a31af0b" containerName="init" Feb 19 03:39:25.145399 master-0 kubenswrapper[33867]: I0219 03:39:25.145388 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1659bdb-92e9-4f41-b10a-552e4a31af0b" containerName="init" Feb 19 03:39:25.145517 master-0 kubenswrapper[33867]: E0219 03:39:25.145417 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1659bdb-92e9-4f41-b10a-552e4a31af0b" containerName="dnsmasq-dns" Feb 19 03:39:25.145517 master-0 kubenswrapper[33867]: I0219 03:39:25.145425 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1659bdb-92e9-4f41-b10a-552e4a31af0b" containerName="dnsmasq-dns" Feb 19 03:39:25.145517 master-0 kubenswrapper[33867]: E0219 03:39:25.145445 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5688ca74-8693-4449-87e8-62145a078d1c" containerName="dnsmasq-dns" Feb 19 03:39:25.145517 master-0 kubenswrapper[33867]: I0219 03:39:25.145454 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5688ca74-8693-4449-87e8-62145a078d1c" containerName="dnsmasq-dns" Feb 19 03:39:25.145517 master-0 kubenswrapper[33867]: E0219 03:39:25.145487 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5688ca74-8693-4449-87e8-62145a078d1c" containerName="init" Feb 19 03:39:25.145517 master-0 kubenswrapper[33867]: I0219 03:39:25.145494 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5688ca74-8693-4449-87e8-62145a078d1c" containerName="init" Feb 19 03:39:25.145816 master-0 kubenswrapper[33867]: I0219 03:39:25.145750 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1659bdb-92e9-4f41-b10a-552e4a31af0b" containerName="dnsmasq-dns" Feb 19 03:39:25.145816 master-0 kubenswrapper[33867]: I0219 03:39:25.145774 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5688ca74-8693-4449-87e8-62145a078d1c" containerName="dnsmasq-dns" Feb 19 03:39:25.147129 master-0 kubenswrapper[33867]: I0219 03:39:25.147095 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.160535 master-0 kubenswrapper[33867]: I0219 03:39:25.160111 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-7zmsl"] Feb 19 03:39:25.250196 master-0 kubenswrapper[33867]: I0219 03:39:25.245064 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" podUID="501bdb41-a315-4ed1-a41d-e51831b35ce0" containerName="dnsmasq-dns" containerID="cri-o://257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8" gracePeriod=10 Feb 19 03:39:25.303998 master-0 kubenswrapper[33867]: I0219 03:39:25.303924 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wxm\" (UniqueName: \"kubernetes.io/projected/bd02c363-1edd-4046-b242-331863944386-kube-api-access-v6wxm\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.304229 master-0 kubenswrapper[33867]: I0219 03:39:25.304096 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-config\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.304229 master-0 kubenswrapper[33867]: I0219 03:39:25.304137 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-dns-svc\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.304229 master-0 kubenswrapper[33867]: I0219 03:39:25.304197 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.304377 master-0 kubenswrapper[33867]: I0219 03:39:25.304300 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.322764 master-0 kubenswrapper[33867]: I0219 03:39:25.322509 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 03:39:25.419638 master-0 kubenswrapper[33867]: I0219 03:39:25.419595 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.419790 master-0 kubenswrapper[33867]: I0219 03:39:25.419776 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.419875 master-0 kubenswrapper[33867]: I0219 03:39:25.419861 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6wxm\" (UniqueName: \"kubernetes.io/projected/bd02c363-1edd-4046-b242-331863944386-kube-api-access-v6wxm\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.420920 master-0 kubenswrapper[33867]: I0219 03:39:25.420855 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.421202 master-0 kubenswrapper[33867]: I0219 03:39:25.421147 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-config\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.422304 master-0 kubenswrapper[33867]: I0219 03:39:25.422236 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.423875 master-0 kubenswrapper[33867]: I0219 03:39:25.423692 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-config\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.423875 master-0 kubenswrapper[33867]: I0219 03:39:25.423741 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-dns-svc\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.426541 master-0 kubenswrapper[33867]: I0219 03:39:25.425848 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-dns-svc\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.456355 master-0 kubenswrapper[33867]: I0219 03:39:25.443853 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6wxm\" (UniqueName: \"kubernetes.io/projected/bd02c363-1edd-4046-b242-331863944386-kube-api-access-v6wxm\") pod \"dnsmasq-dns-6fd49994df-7zmsl\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:25.487296 master-0 kubenswrapper[33867]: I0219 03:39:25.486709 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:26.026925 master-0 kubenswrapper[33867]: I0219 03:39:26.026846 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:26.082604 master-0 kubenswrapper[33867]: I0219 03:39:26.082557 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:26.129644 master-0 kubenswrapper[33867]: I0219 03:39:26.127105 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-7zmsl"] Feb 19 03:39:26.174851 master-0 kubenswrapper[33867]: I0219 03:39:26.174796 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-nb\") pod \"501bdb41-a315-4ed1-a41d-e51831b35ce0\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " Feb 19 03:39:26.175096 master-0 kubenswrapper[33867]: I0219 03:39:26.174873 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-config\") pod \"501bdb41-a315-4ed1-a41d-e51831b35ce0\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " Feb 19 03:39:26.175096 master-0 kubenswrapper[33867]: I0219 03:39:26.174937 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-dns-svc\") pod \"501bdb41-a315-4ed1-a41d-e51831b35ce0\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " Feb 19 03:39:26.175096 master-0 kubenswrapper[33867]: I0219 03:39:26.175005 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr2js\" (UniqueName: \"kubernetes.io/projected/501bdb41-a315-4ed1-a41d-e51831b35ce0-kube-api-access-fr2js\") pod \"501bdb41-a315-4ed1-a41d-e51831b35ce0\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " Feb 19 03:39:26.175096 master-0 kubenswrapper[33867]: I0219 03:39:26.175060 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-sb\") pod \"501bdb41-a315-4ed1-a41d-e51831b35ce0\" (UID: \"501bdb41-a315-4ed1-a41d-e51831b35ce0\") " Feb 19 03:39:26.178875 master-0 kubenswrapper[33867]: I0219 03:39:26.178818 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/501bdb41-a315-4ed1-a41d-e51831b35ce0-kube-api-access-fr2js" (OuterVolumeSpecName: "kube-api-access-fr2js") pod "501bdb41-a315-4ed1-a41d-e51831b35ce0" (UID: "501bdb41-a315-4ed1-a41d-e51831b35ce0"). InnerVolumeSpecName "kube-api-access-fr2js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:26.245960 master-0 kubenswrapper[33867]: I0219 03:39:26.245890 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "501bdb41-a315-4ed1-a41d-e51831b35ce0" (UID: "501bdb41-a315-4ed1-a41d-e51831b35ce0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:26.265433 master-0 kubenswrapper[33867]: I0219 03:39:26.262863 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "501bdb41-a315-4ed1-a41d-e51831b35ce0" (UID: "501bdb41-a315-4ed1-a41d-e51831b35ce0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:26.270516 master-0 kubenswrapper[33867]: I0219 03:39:26.270418 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "501bdb41-a315-4ed1-a41d-e51831b35ce0" (UID: "501bdb41-a315-4ed1-a41d-e51831b35ce0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:26.271515 master-0 kubenswrapper[33867]: I0219 03:39:26.270938 33867 generic.go:334] "Generic (PLEG): container finished" podID="501bdb41-a315-4ed1-a41d-e51831b35ce0" containerID="257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8" exitCode=0 Feb 19 03:39:26.271515 master-0 kubenswrapper[33867]: I0219 03:39:26.271021 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" event={"ID":"501bdb41-a315-4ed1-a41d-e51831b35ce0","Type":"ContainerDied","Data":"257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8"} Feb 19 03:39:26.271515 master-0 kubenswrapper[33867]: I0219 03:39:26.271053 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" event={"ID":"501bdb41-a315-4ed1-a41d-e51831b35ce0","Type":"ContainerDied","Data":"18ccc9c6628735832f436f4c079bf45c1e7ce86cdf9663ba070cf8c2ecd0725e"} Feb 19 03:39:26.271515 master-0 kubenswrapper[33867]: I0219 03:39:26.271071 33867 scope.go:117] "RemoveContainer" containerID="257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8" Feb 19 03:39:26.271515 master-0 kubenswrapper[33867]: I0219 03:39:26.271224 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-xt4j5" Feb 19 03:39:26.277533 master-0 kubenswrapper[33867]: I0219 03:39:26.276936 33867 generic.go:334] "Generic (PLEG): container finished" podID="9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1" containerID="df754d1cd91b0c7bab78a64e9838441144b25de53d14f5af35cd64f56ff5727f" exitCode=0 Feb 19 03:39:26.277533 master-0 kubenswrapper[33867]: I0219 03:39:26.277041 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1","Type":"ContainerDied","Data":"df754d1cd91b0c7bab78a64e9838441144b25de53d14f5af35cd64f56ff5727f"} Feb 19 03:39:26.279505 master-0 kubenswrapper[33867]: I0219 03:39:26.279430 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:26.279505 master-0 kubenswrapper[33867]: I0219 03:39:26.279490 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:26.279880 master-0 kubenswrapper[33867]: I0219 03:39:26.279812 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" event={"ID":"bd02c363-1edd-4046-b242-331863944386","Type":"ContainerStarted","Data":"5d0c9e58262f93022ba33320d7d7cd4426dcb2649bc45a772080e006417c33a7"} Feb 19 03:39:26.279995 master-0 kubenswrapper[33867]: I0219 03:39:26.279898 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:26.280084 master-0 kubenswrapper[33867]: I0219 03:39:26.279999 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr2js\" (UniqueName: \"kubernetes.io/projected/501bdb41-a315-4ed1-a41d-e51831b35ce0-kube-api-access-fr2js\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:26.283396 master-0 kubenswrapper[33867]: I0219 03:39:26.283358 33867 generic.go:334] "Generic (PLEG): container finished" podID="d2176305-52ee-4689-a5f6-1aea00a75d4f" containerID="1920db3f91c0c37e17df35f2b7e99cc42cc07daa75c4701514d60902eba0c850" exitCode=0 Feb 19 03:39:26.283927 master-0 kubenswrapper[33867]: I0219 03:39:26.283596 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d2176305-52ee-4689-a5f6-1aea00a75d4f","Type":"ContainerDied","Data":"1920db3f91c0c37e17df35f2b7e99cc42cc07daa75c4701514d60902eba0c850"} Feb 19 03:39:26.298149 master-0 kubenswrapper[33867]: I0219 03:39:26.298067 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-config" (OuterVolumeSpecName: "config") pod "501bdb41-a315-4ed1-a41d-e51831b35ce0" (UID: "501bdb41-a315-4ed1-a41d-e51831b35ce0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:26.302923 master-0 kubenswrapper[33867]: I0219 03:39:26.302873 33867 scope.go:117] "RemoveContainer" containerID="526f3f0b7a44f2bc6ee50a735df32bf5bd3d72608d860923db01423e1f35e2a6" Feb 19 03:39:26.358523 master-0 kubenswrapper[33867]: I0219 03:39:26.358472 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 03:39:26.359313 master-0 kubenswrapper[33867]: I0219 03:39:26.359282 33867 scope.go:117] "RemoveContainer" containerID="257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8" Feb 19 03:39:26.359800 master-0 kubenswrapper[33867]: E0219 03:39:26.359763 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8\": container with ID starting with 257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8 not found: ID does not exist" containerID="257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8" Feb 19 03:39:26.359850 master-0 kubenswrapper[33867]: I0219 03:39:26.359803 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8"} err="failed to get container status \"257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8\": rpc error: code = NotFound desc = could not find container \"257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8\": container with ID starting with 257bb3fe54e61e1a07202384eb01725834bb95884e41975903135396d23eadd8 not found: ID does not exist" Feb 19 03:39:26.359850 master-0 kubenswrapper[33867]: I0219 03:39:26.359828 33867 scope.go:117] "RemoveContainer" containerID="526f3f0b7a44f2bc6ee50a735df32bf5bd3d72608d860923db01423e1f35e2a6" Feb 19 03:39:26.360969 master-0 kubenswrapper[33867]: E0219 03:39:26.360917 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"526f3f0b7a44f2bc6ee50a735df32bf5bd3d72608d860923db01423e1f35e2a6\": container with ID starting with 526f3f0b7a44f2bc6ee50a735df32bf5bd3d72608d860923db01423e1f35e2a6 not found: ID does not exist" containerID="526f3f0b7a44f2bc6ee50a735df32bf5bd3d72608d860923db01423e1f35e2a6" Feb 19 03:39:26.361006 master-0 kubenswrapper[33867]: I0219 03:39:26.360963 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"526f3f0b7a44f2bc6ee50a735df32bf5bd3d72608d860923db01423e1f35e2a6"} err="failed to get container status \"526f3f0b7a44f2bc6ee50a735df32bf5bd3d72608d860923db01423e1f35e2a6\": rpc error: code = NotFound desc = could not find container \"526f3f0b7a44f2bc6ee50a735df32bf5bd3d72608d860923db01423e1f35e2a6\": container with ID starting with 526f3f0b7a44f2bc6ee50a735df32bf5bd3d72608d860923db01423e1f35e2a6 not found: ID does not exist" Feb 19 03:39:26.386975 master-0 kubenswrapper[33867]: I0219 03:39:26.386914 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501bdb41-a315-4ed1-a41d-e51831b35ce0-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:26.579102 master-0 kubenswrapper[33867]: I0219 03:39:26.578962 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 03:39:26.579986 master-0 kubenswrapper[33867]: E0219 03:39:26.579945 33867 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="501bdb41-a315-4ed1-a41d-e51831b35ce0" containerName="dnsmasq-dns" Feb 19 03:39:26.580110 master-0 kubenswrapper[33867]: I0219 03:39:26.579987 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="501bdb41-a315-4ed1-a41d-e51831b35ce0" containerName="dnsmasq-dns" Feb 19 03:39:26.580110 master-0 kubenswrapper[33867]: E0219 03:39:26.580021 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501bdb41-a315-4ed1-a41d-e51831b35ce0" containerName="init" Feb 19 03:39:26.580110 master-0 kubenswrapper[33867]: I0219 03:39:26.580031 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="501bdb41-a315-4ed1-a41d-e51831b35ce0" containerName="init" Feb 19 03:39:26.580793 master-0 kubenswrapper[33867]: I0219 03:39:26.580748 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="501bdb41-a315-4ed1-a41d-e51831b35ce0" containerName="dnsmasq-dns" Feb 19 03:39:26.584912 master-0 kubenswrapper[33867]: I0219 03:39:26.584826 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 03:39:26.594988 master-0 kubenswrapper[33867]: I0219 03:39:26.589590 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 03:39:26.594988 master-0 kubenswrapper[33867]: I0219 03:39:26.589924 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 03:39:26.594988 master-0 kubenswrapper[33867]: I0219 03:39:26.590186 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 03:39:26.602122 master-0 kubenswrapper[33867]: I0219 03:39:26.600922 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 03:39:26.694158 master-0 kubenswrapper[33867]: I0219 03:39:26.694012 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-xt4j5"] Feb 19 03:39:26.698351 master-0 kubenswrapper[33867]: I0219 03:39:26.697459 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad8bcfb7-310e-45ca-96a7-e12671866348-scripts\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.698351 master-0 kubenswrapper[33867]: I0219 03:39:26.697528 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpp5\" (UniqueName: \"kubernetes.io/projected/ad8bcfb7-310e-45ca-96a7-e12671866348-kube-api-access-hbpp5\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.698351 master-0 kubenswrapper[33867]: I0219 03:39:26.697611 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8bcfb7-310e-45ca-96a7-e12671866348-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.698351 master-0 kubenswrapper[33867]: I0219 03:39:26.697654 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8bcfb7-310e-45ca-96a7-e12671866348-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 
03:39:26.698351 master-0 kubenswrapper[33867]: I0219 03:39:26.697687 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8bcfb7-310e-45ca-96a7-e12671866348-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.698351 master-0 kubenswrapper[33867]: I0219 03:39:26.697715 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad8bcfb7-310e-45ca-96a7-e12671866348-config\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.698351 master-0 kubenswrapper[33867]: I0219 03:39:26.697794 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ad8bcfb7-310e-45ca-96a7-e12671866348-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.703609 master-0 kubenswrapper[33867]: I0219 03:39:26.703487 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-xt4j5"] Feb 19 03:39:26.786383 master-0 kubenswrapper[33867]: I0219 03:39:26.784934 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:26.801243 master-0 kubenswrapper[33867]: I0219 03:39:26.800513 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8bcfb7-310e-45ca-96a7-e12671866348-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.801486 master-0 kubenswrapper[33867]: I0219 03:39:26.801425 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8bcfb7-310e-45ca-96a7-e12671866348-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.801530 master-0 kubenswrapper[33867]: I0219 03:39:26.801496 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad8bcfb7-310e-45ca-96a7-e12671866348-config\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.801835 master-0 kubenswrapper[33867]: I0219 03:39:26.801806 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ad8bcfb7-310e-45ca-96a7-e12671866348-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.801879 master-0 kubenswrapper[33867]: I0219 03:39:26.801862 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad8bcfb7-310e-45ca-96a7-e12671866348-scripts\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.801920 master-0 kubenswrapper[33867]: I0219 03:39:26.801899 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbpp5\" (UniqueName: 
\"kubernetes.io/projected/ad8bcfb7-310e-45ca-96a7-e12671866348-kube-api-access-hbpp5\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.802030 master-0 kubenswrapper[33867]: I0219 03:39:26.802007 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8bcfb7-310e-45ca-96a7-e12671866348-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.803134 master-0 kubenswrapper[33867]: I0219 03:39:26.803043 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ad8bcfb7-310e-45ca-96a7-e12671866348-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.803863 master-0 kubenswrapper[33867]: I0219 03:39:26.803811 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad8bcfb7-310e-45ca-96a7-e12671866348-scripts\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.804655 master-0 kubenswrapper[33867]: I0219 03:39:26.804584 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8bcfb7-310e-45ca-96a7-e12671866348-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.804938 master-0 kubenswrapper[33867]: I0219 03:39:26.804896 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad8bcfb7-310e-45ca-96a7-e12671866348-config\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.805677 master-0 kubenswrapper[33867]: I0219 03:39:26.805642 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8bcfb7-310e-45ca-96a7-e12671866348-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.806139 master-0 kubenswrapper[33867]: I0219 03:39:26.806108 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8bcfb7-310e-45ca-96a7-e12671866348-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.828431 master-0 kubenswrapper[33867]: I0219 03:39:26.828343 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbpp5\" (UniqueName: \"kubernetes.io/projected/ad8bcfb7-310e-45ca-96a7-e12671866348-kube-api-access-hbpp5\") pod \"ovn-northd-0\" (UID: \"ad8bcfb7-310e-45ca-96a7-e12671866348\") " pod="openstack/ovn-northd-0" Feb 19 03:39:26.970477 master-0 kubenswrapper[33867]: I0219 03:39:26.970390 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 03:39:26.980035 master-0 kubenswrapper[33867]: I0219 03:39:26.979560 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="501bdb41-a315-4ed1-a41d-e51831b35ce0" path="/var/lib/kubelet/pods/501bdb41-a315-4ed1-a41d-e51831b35ce0/volumes" Feb 19 03:39:27.150245 master-0 kubenswrapper[33867]: I0219 03:39:27.150158 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 03:39:27.168516 master-0 kubenswrapper[33867]: I0219 03:39:27.168107 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 03:39:27.179489 master-0 kubenswrapper[33867]: I0219 03:39:27.178139 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 03:39:27.179489 master-0 kubenswrapper[33867]: I0219 03:39:27.178472 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 03:39:27.182547 master-0 kubenswrapper[33867]: I0219 03:39:27.178162 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 03:39:27.193154 master-0 kubenswrapper[33867]: I0219 03:39:27.193069 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 03:39:27.298083 master-0 kubenswrapper[33867]: I0219 03:39:27.296749 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1","Type":"ContainerStarted","Data":"9736a74a55e74e31cf2c0627ef0d5ceab95361e80d7b35e211ad65a5ed054511"} Feb 19 03:39:27.301123 master-0 kubenswrapper[33867]: I0219 03:39:27.300943 33867 generic.go:334] "Generic (PLEG): container finished" podID="bd02c363-1edd-4046-b242-331863944386" containerID="d756b0f46b6c19317b65eefd780b8236cdc5886ea324749e1c60f8bb385a1144" exitCode=0 Feb 19 03:39:27.301123 master-0 kubenswrapper[33867]: I0219 03:39:27.301018 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" event={"ID":"bd02c363-1edd-4046-b242-331863944386","Type":"ContainerDied","Data":"d756b0f46b6c19317b65eefd780b8236cdc5886ea324749e1c60f8bb385a1144"} Feb 19 03:39:27.313183 master-0 kubenswrapper[33867]: I0219 03:39:27.312231 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d2176305-52ee-4689-a5f6-1aea00a75d4f","Type":"ContainerStarted","Data":"f9ff0c0a0aa77a7d7fbecf16b931a40bdca80664fe7cf2c0382fe61709a64e91"} Feb 19 03:39:27.324189 master-0 kubenswrapper[33867]: I0219 03:39:27.324123 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e44cdab-bf23-4c54-9a2b-560c54e2f301\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5f5a813f-a196-4184-a137-d113b682d8f4\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.324470 master-0 kubenswrapper[33867]: I0219 03:39:27.324216 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnc9h\" (UniqueName: \"kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-kube-api-access-nnc9h\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.324470 master-0 kubenswrapper[33867]: I0219 03:39:27.324371 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/aea865d8-841e-4326-9833-ee28b81c18e1-lock\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.324687 master-0 kubenswrapper[33867]: I0219 03:39:27.324653 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.324789 master-0 kubenswrapper[33867]: I0219 03:39:27.324693 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea865d8-841e-4326-9833-ee28b81c18e1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.324789 master-0 kubenswrapper[33867]: I0219 03:39:27.324771 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/aea865d8-841e-4326-9833-ee28b81c18e1-cache\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.325954 master-0 kubenswrapper[33867]: I0219 03:39:27.325875 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.327657548 podStartE2EDuration="38.325843604s" podCreationTimestamp="2026-02-19 03:38:49 +0000 UTC" firstStartedPulling="2026-02-19 03:39:10.438296787 +0000 UTC m=+955.734967398" lastFinishedPulling="2026-02-19 03:39:19.436482843 +0000 UTC m=+964.733153454" observedRunningTime="2026-02-19 03:39:27.325197756 +0000 UTC m=+972.621868377" watchObservedRunningTime="2026-02-19 03:39:27.325843604 +0000 UTC m=+972.622514215" Feb 19 03:39:27.376351 master-0 kubenswrapper[33867]: I0219 03:39:27.376184 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.582662738 podStartE2EDuration="36.376138718s" podCreationTimestamp="2026-02-19 03:38:51 +0000 UTC" firstStartedPulling="2026-02-19 03:39:10.5468543 +0000 UTC m=+955.843524901" lastFinishedPulling="2026-02-19 03:39:19.34033027 +0000 UTC m=+964.637000881" observedRunningTime="2026-02-19 03:39:27.373882624 +0000 UTC m=+972.670553235" watchObservedRunningTime="2026-02-19 03:39:27.376138718 +0000 UTC m=+972.672809329" Feb 19 03:39:27.427069 master-0 kubenswrapper[33867]: I0219 03:39:27.426994 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.427069 master-0 kubenswrapper[33867]: E0219 03:39:27.427173 33867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 03:39:27.427069 master-0 kubenswrapper[33867]: E0219 03:39:27.427193 33867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 03:39:27.427069 master-0 kubenswrapper[33867]: E0219 03:39:27.427245 
33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift podName:aea865d8-841e-4326-9833-ee28b81c18e1 nodeName:}" failed. No retries permitted until 2026-02-19 03:39:27.927225334 +0000 UTC m=+973.223895945 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift") pod "swift-storage-0" (UID: "aea865d8-841e-4326-9833-ee28b81c18e1") : configmap "swift-ring-files" not found Feb 19 03:39:27.427894 master-0 kubenswrapper[33867]: I0219 03:39:27.427567 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea865d8-841e-4326-9833-ee28b81c18e1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.427894 master-0 kubenswrapper[33867]: I0219 03:39:27.427731 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/aea865d8-841e-4326-9833-ee28b81c18e1-cache\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.428112 master-0 kubenswrapper[33867]: I0219 03:39:27.428070 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e44cdab-bf23-4c54-9a2b-560c54e2f301\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5f5a813f-a196-4184-a137-d113b682d8f4\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.428201 master-0 kubenswrapper[33867]: I0219 03:39:27.428178 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnc9h\" (UniqueName: \"kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-kube-api-access-nnc9h\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.428326 master-0 kubenswrapper[33867]: I0219 03:39:27.428294 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/aea865d8-841e-4326-9833-ee28b81c18e1-lock\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.428793 master-0 kubenswrapper[33867]: I0219 03:39:27.428759 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/aea865d8-841e-4326-9833-ee28b81c18e1-cache\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.430221 master-0 kubenswrapper[33867]: I0219 03:39:27.430149 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/aea865d8-841e-4326-9833-ee28b81c18e1-lock\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.433031 master-0 kubenswrapper[33867]: I0219 03:39:27.432995 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 03:39:27.433136 master-0 kubenswrapper[33867]: I0219 03:39:27.433049 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e44cdab-bf23-4c54-9a2b-560c54e2f301\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5f5a813f-a196-4184-a137-d113b682d8f4\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/38bb7b6e44f968c7755145571058811f7225fbcc945f16a2ca323afad38d6c3f/globalmount\"" pod="openstack/swift-storage-0" Feb 19 03:39:27.433136 master-0 kubenswrapper[33867]: I0219 03:39:27.431568 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea865d8-841e-4326-9833-ee28b81c18e1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.449960 master-0 kubenswrapper[33867]: I0219 03:39:27.449074 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnc9h\" (UniqueName: \"kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-kube-api-access-nnc9h\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.478285 master-0 kubenswrapper[33867]: I0219 03:39:27.477509 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 03:39:27.484765 master-0 kubenswrapper[33867]: W0219 03:39:27.484661 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad8bcfb7_310e_45ca_96a7_e12671866348.slice/crio-ebbcc3baf601199018f34ce0369b3359e04df6881dc523aef6cb6e9d038f51bc WatchSource:0}: Error finding container ebbcc3baf601199018f34ce0369b3359e04df6881dc523aef6cb6e9d038f51bc: Status 404 returned error can't find the container with id ebbcc3baf601199018f34ce0369b3359e04df6881dc523aef6cb6e9d038f51bc Feb 19 03:39:27.947093 master-0 kubenswrapper[33867]: I0219 03:39:27.947028 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:27.947396 master-0 kubenswrapper[33867]: E0219 03:39:27.947356 33867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 03:39:27.947396 master-0 kubenswrapper[33867]: E0219 03:39:27.947399 33867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 03:39:27.947531 master-0 kubenswrapper[33867]: E0219 03:39:27.947469 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift podName:aea865d8-841e-4326-9833-ee28b81c18e1 nodeName:}" failed. No retries permitted until 2026-02-19 03:39:28.947446724 +0000 UTC m=+974.244117335 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift") pod "swift-storage-0" (UID: "aea865d8-841e-4326-9833-ee28b81c18e1") : configmap "swift-ring-files" not found Feb 19 03:39:28.124169 master-0 kubenswrapper[33867]: I0219 03:39:28.124070 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-xnwxz"] Feb 19 03:39:28.126434 master-0 kubenswrapper[33867]: I0219 03:39:28.126391 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.132355 master-0 kubenswrapper[33867]: I0219 03:39:28.130179 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 03:39:28.132355 master-0 kubenswrapper[33867]: I0219 03:39:28.130799 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 03:39:28.132355 master-0 kubenswrapper[33867]: I0219 03:39:28.130802 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 03:39:28.136412 master-0 kubenswrapper[33867]: I0219 03:39:28.136017 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-xnwxz"] Feb 19 03:39:28.259272 master-0 kubenswrapper[33867]: I0219 03:39:28.259092 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6bdc624f-2b02-4f65-93e7-49b26b1da384-etc-swift\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.259813 master-0 kubenswrapper[33867]: I0219 03:39:28.259482 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjdp\" (UniqueName: \"kubernetes.io/projected/6bdc624f-2b02-4f65-93e7-49b26b1da384-kube-api-access-dzjdp\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.259813 master-0 kubenswrapper[33867]: I0219 03:39:28.259536 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-ring-data-devices\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.259813 master-0 kubenswrapper[33867]: I0219 03:39:28.259795 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-combined-ca-bundle\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.259946 master-0 kubenswrapper[33867]: I0219 03:39:28.259830 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-dispersionconf\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.260619 master-0 kubenswrapper[33867]: I0219 03:39:28.260533 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-swiftconf\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.260791 master-0 kubenswrapper[33867]: I0219 03:39:28.260747 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-scripts\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.331164 master-0 kubenswrapper[33867]: I0219 03:39:28.331032 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" event={"ID":"bd02c363-1edd-4046-b242-331863944386","Type":"ContainerStarted","Data":"747861d5eb3cc4c6bd54d5a5145842ab4d375cd25b552340595eec0d2be13ebc"} Feb 19 03:39:28.333198 master-0 kubenswrapper[33867]: I0219 03:39:28.333137 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:28.336043 master-0 kubenswrapper[33867]: I0219 03:39:28.335900 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ad8bcfb7-310e-45ca-96a7-e12671866348","Type":"ContainerStarted","Data":"ebbcc3baf601199018f34ce0369b3359e04df6881dc523aef6cb6e9d038f51bc"} Feb 19 03:39:28.363297 master-0 kubenswrapper[33867]: I0219 03:39:28.363203 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-dispersionconf\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.363738 master-0 kubenswrapper[33867]: I0219 03:39:28.363340 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-swiftconf\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.363738 master-0 kubenswrapper[33867]: I0219 03:39:28.363408 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-scripts\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.363738 master-0 kubenswrapper[33867]: I0219 03:39:28.363442 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6bdc624f-2b02-4f65-93e7-49b26b1da384-etc-swift\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.363738 master-0 kubenswrapper[33867]: I0219 03:39:28.363528 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzjdp\" (UniqueName: \"kubernetes.io/projected/6bdc624f-2b02-4f65-93e7-49b26b1da384-kube-api-access-dzjdp\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.363738 master-0 
kubenswrapper[33867]: I0219 03:39:28.363558 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-ring-data-devices\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.363738 master-0 kubenswrapper[33867]: I0219 03:39:28.363600 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-combined-ca-bundle\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.366930 master-0 kubenswrapper[33867]: I0219 03:39:28.366885 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6bdc624f-2b02-4f65-93e7-49b26b1da384-etc-swift\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.367352 master-0 kubenswrapper[33867]: I0219 03:39:28.367321 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-combined-ca-bundle\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.367461 master-0 kubenswrapper[33867]: I0219 03:39:28.367450 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-scripts\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.367988 master-0 kubenswrapper[33867]: I0219 03:39:28.367947 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-ring-data-devices\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.369531 master-0 kubenswrapper[33867]: I0219 03:39:28.369494 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-swiftconf\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.387363 master-0 kubenswrapper[33867]: I0219 03:39:28.372646 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-dispersionconf\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.387363 master-0 kubenswrapper[33867]: I0219 03:39:28.376813 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" podStartSLOduration=3.376793881 podStartE2EDuration="3.376793881s" podCreationTimestamp="2026-02-19 03:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
03:39:28.366496729 +0000 UTC m=+973.663167340" watchObservedRunningTime="2026-02-19 03:39:28.376793881 +0000 UTC m=+973.673464482" Feb 19 03:39:28.409889 master-0 kubenswrapper[33867]: I0219 03:39:28.409833 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzjdp\" (UniqueName: \"kubernetes.io/projected/6bdc624f-2b02-4f65-93e7-49b26b1da384-kube-api-access-dzjdp\") pod \"swift-ring-rebalance-xnwxz\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.469666 master-0 kubenswrapper[33867]: I0219 03:39:28.469598 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:28.977970 master-0 kubenswrapper[33867]: I0219 03:39:28.977899 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:28.978360 master-0 kubenswrapper[33867]: E0219 03:39:28.978325 33867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 03:39:28.978360 master-0 kubenswrapper[33867]: E0219 03:39:28.978355 33867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 03:39:28.978449 master-0 kubenswrapper[33867]: E0219 03:39:28.978425 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift podName:aea865d8-841e-4326-9833-ee28b81c18e1 nodeName:}" failed. No retries permitted until 2026-02-19 03:39:30.978404914 +0000 UTC m=+976.275075525 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift") pod "swift-storage-0" (UID: "aea865d8-841e-4326-9833-ee28b81c18e1") : configmap "swift-ring-files" not found Feb 19 03:39:29.033563 master-0 kubenswrapper[33867]: I0219 03:39:29.033422 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e44cdab-bf23-4c54-9a2b-560c54e2f301\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5f5a813f-a196-4184-a137-d113b682d8f4\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:29.263483 master-0 kubenswrapper[33867]: W0219 03:39:29.263418 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bdc624f_2b02_4f65_93e7_49b26b1da384.slice/crio-40439491c5940482fb2a09a1a7235f1cad2357b8ea5d69c704a726a68d8d806f WatchSource:0}: Error finding container 40439491c5940482fb2a09a1a7235f1cad2357b8ea5d69c704a726a68d8d806f: Status 404 returned error can't find the container with id 40439491c5940482fb2a09a1a7235f1cad2357b8ea5d69c704a726a68d8d806f Feb 19 03:39:29.265090 master-0 kubenswrapper[33867]: I0219 03:39:29.265022 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-xnwxz"] Feb 19 03:39:29.349855 master-0 kubenswrapper[33867]: I0219 03:39:29.349780 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xnwxz" event={"ID":"6bdc624f-2b02-4f65-93e7-49b26b1da384","Type":"ContainerStarted","Data":"40439491c5940482fb2a09a1a7235f1cad2357b8ea5d69c704a726a68d8d806f"} Feb 19 03:39:29.353159 master-0 kubenswrapper[33867]: I0219 03:39:29.352848 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ad8bcfb7-310e-45ca-96a7-e12671866348","Type":"ContainerStarted","Data":"a59155f79593d89a3104f8d1e94213e100963d018194d1b45de35cb88b5bf503"} Feb 19 03:39:29.353159 master-0 kubenswrapper[33867]: I0219 03:39:29.352936 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ad8bcfb7-310e-45ca-96a7-e12671866348","Type":"ContainerStarted","Data":"fcae7e92a8bfb8b1113a4e389e7fe54c9a6ac5db867d42e7dfcaf1c701bfdbee"} Feb 19 03:39:29.385313 master-0 kubenswrapper[33867]: I0219 03:39:29.381912 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.123228271 podStartE2EDuration="3.381886079s" podCreationTimestamp="2026-02-19 03:39:26 +0000 UTC" firstStartedPulling="2026-02-19 03:39:27.489075076 +0000 UTC m=+972.785745687" lastFinishedPulling="2026-02-19 03:39:28.747732874 +0000 UTC m=+974.044403495" observedRunningTime="2026-02-19 03:39:29.380639913 +0000 UTC m=+974.677310524" watchObservedRunningTime="2026-02-19 03:39:29.381886079 +0000 UTC m=+974.678556690" Feb 19 03:39:30.362697 master-0 kubenswrapper[33867]: I0219 03:39:30.362654 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 03:39:30.569738 master-0 kubenswrapper[33867]: I0219 03:39:30.569604 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 03:39:30.570108 master-0 kubenswrapper[33867]: I0219 03:39:30.569841 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 03:39:31.032761 master-0 kubenswrapper[33867]: I0219 
03:39:31.032679 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:31.033081 master-0 kubenswrapper[33867]: E0219 03:39:31.033001 33867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 03:39:31.033081 master-0 kubenswrapper[33867]: E0219 03:39:31.033079 33867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 03:39:31.033210 master-0 kubenswrapper[33867]: E0219 03:39:31.033182 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift podName:aea865d8-841e-4326-9833-ee28b81c18e1 nodeName:}" failed. No retries permitted until 2026-02-19 03:39:35.033150623 +0000 UTC m=+980.329821254 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift") pod "swift-storage-0" (UID: "aea865d8-841e-4326-9833-ee28b81c18e1") : configmap "swift-ring-files" not found Feb 19 03:39:31.635289 master-0 kubenswrapper[33867]: I0219 03:39:31.635203 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 03:39:31.636357 master-0 kubenswrapper[33867]: I0219 03:39:31.635339 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 03:39:33.049687 master-0 kubenswrapper[33867]: I0219 03:39:33.049621 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 03:39:33.145384 master-0 kubenswrapper[33867]: I0219 03:39:33.145288 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 03:39:33.402857 master-0 kubenswrapper[33867]: I0219 03:39:33.401826 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xnwxz" event={"ID":"6bdc624f-2b02-4f65-93e7-49b26b1da384","Type":"ContainerStarted","Data":"0f56be12ab8653d1efd235eddcbfd8386d076dcc1423e6c7277149d5c9adf3b2"} Feb 19 03:39:33.572805 master-0 kubenswrapper[33867]: I0219 03:39:33.572699 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-xnwxz" podStartSLOduration=2.003699615 podStartE2EDuration="5.572679737s" podCreationTimestamp="2026-02-19 03:39:28 +0000 UTC" firstStartedPulling="2026-02-19 03:39:29.26858359 +0000 UTC m=+974.565254191" lastFinishedPulling="2026-02-19 03:39:32.837563692 +0000 UTC m=+978.134234313" observedRunningTime="2026-02-19 03:39:33.570807344 +0000 UTC m=+978.867477955" watchObservedRunningTime="2026-02-19 03:39:33.572679737 +0000 UTC m=+978.869350348" Feb 19 03:39:33.937972 master-0 kubenswrapper[33867]: I0219 03:39:33.937886 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 03:39:34.017783 master-0 kubenswrapper[33867]: I0219 03:39:34.017695 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fdbk4"] Feb 19 03:39:34.022165 master-0 kubenswrapper[33867]: I0219 03:39:34.022099 33867 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/keystone-db-create-fdbk4" Feb 19 03:39:34.078376 master-0 kubenswrapper[33867]: I0219 03:39:34.069818 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 03:39:34.080381 master-0 kubenswrapper[33867]: I0219 03:39:34.080320 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3d8b-account-create-update-h4wh9"] Feb 19 03:39:34.085371 master-0 kubenswrapper[33867]: I0219 03:39:34.085352 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3d8b-account-create-update-h4wh9" Feb 19 03:39:34.085590 master-0 kubenswrapper[33867]: I0219 03:39:34.085519 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fdbk4"] Feb 19 03:39:34.097514 master-0 kubenswrapper[33867]: I0219 03:39:34.093722 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 03:39:34.131174 master-0 kubenswrapper[33867]: I0219 03:39:34.130532 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3d8b-account-create-update-h4wh9"] Feb 19 03:39:34.161644 master-0 kubenswrapper[33867]: I0219 03:39:34.161399 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b938784f-b544-4020-a421-1d886966170c-operator-scripts\") pod \"keystone-3d8b-account-create-update-h4wh9\" (UID: \"b938784f-b544-4020-a421-1d886966170c\") " pod="openstack/keystone-3d8b-account-create-update-h4wh9" Feb 19 03:39:34.161644 master-0 kubenswrapper[33867]: I0219 03:39:34.161469 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28d69938-9e32-4f94-afcd-db24ad9fde34-operator-scripts\") pod \"keystone-db-create-fdbk4\" (UID: \"28d69938-9e32-4f94-afcd-db24ad9fde34\") " pod="openstack/keystone-db-create-fdbk4" Feb 19 03:39:34.161644 master-0 kubenswrapper[33867]: I0219 03:39:34.161565 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9c7b\" (UniqueName: \"kubernetes.io/projected/28d69938-9e32-4f94-afcd-db24ad9fde34-kube-api-access-h9c7b\") pod \"keystone-db-create-fdbk4\" (UID: \"28d69938-9e32-4f94-afcd-db24ad9fde34\") " pod="openstack/keystone-db-create-fdbk4" Feb 19 03:39:34.161644 master-0 kubenswrapper[33867]: I0219 03:39:34.161611 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xlsm\" (UniqueName: \"kubernetes.io/projected/b938784f-b544-4020-a421-1d886966170c-kube-api-access-4xlsm\") pod \"keystone-3d8b-account-create-update-h4wh9\" (UID: \"b938784f-b544-4020-a421-1d886966170c\") " pod="openstack/keystone-3d8b-account-create-update-h4wh9" Feb 19 03:39:34.231586 master-0 kubenswrapper[33867]: I0219 03:39:34.231482 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8a6d-account-create-update-2gsvr"] Feb 19 03:39:34.234953 master-0 kubenswrapper[33867]: I0219 03:39:34.234838 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8a6d-account-create-update-2gsvr" Feb 19 03:39:34.239437 master-0 kubenswrapper[33867]: I0219 03:39:34.239392 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 03:39:34.256539 master-0 kubenswrapper[33867]: I0219 03:39:34.256148 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8a6d-account-create-update-2gsvr"] Feb 19 03:39:34.274945 master-0 kubenswrapper[33867]: I0219 03:39:34.274834 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-j9b2d"] Feb 19 03:39:34.276344 master-0 kubenswrapper[33867]: I0219 03:39:34.276308 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j9b2d"] Feb 19 03:39:34.276423 master-0 kubenswrapper[33867]: I0219 03:39:34.276397 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j9b2d" Feb 19 03:39:34.318177 master-0 kubenswrapper[33867]: I0219 03:39:34.318089 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xlsm\" (UniqueName: \"kubernetes.io/projected/b938784f-b544-4020-a421-1d886966170c-kube-api-access-4xlsm\") pod \"keystone-3d8b-account-create-update-h4wh9\" (UID: \"b938784f-b544-4020-a421-1d886966170c\") " pod="openstack/keystone-3d8b-account-create-update-h4wh9" Feb 19 03:39:34.318437 master-0 kubenswrapper[33867]: I0219 03:39:34.318276 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b938784f-b544-4020-a421-1d886966170c-operator-scripts\") pod \"keystone-3d8b-account-create-update-h4wh9\" (UID: \"b938784f-b544-4020-a421-1d886966170c\") " pod="openstack/keystone-3d8b-account-create-update-h4wh9" Feb 19 03:39:34.318437 master-0 kubenswrapper[33867]: I0219 03:39:34.318336 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28d69938-9e32-4f94-afcd-db24ad9fde34-operator-scripts\") pod \"keystone-db-create-fdbk4\" (UID: \"28d69938-9e32-4f94-afcd-db24ad9fde34\") " pod="openstack/keystone-db-create-fdbk4" Feb 19 03:39:34.318551 master-0 kubenswrapper[33867]: I0219 03:39:34.318524 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9c7b\" (UniqueName: \"kubernetes.io/projected/28d69938-9e32-4f94-afcd-db24ad9fde34-kube-api-access-h9c7b\") pod \"keystone-db-create-fdbk4\" (UID: \"28d69938-9e32-4f94-afcd-db24ad9fde34\") " pod="openstack/keystone-db-create-fdbk4" Feb 19 03:39:34.319861 master-0 kubenswrapper[33867]: I0219 03:39:34.319827 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b938784f-b544-4020-a421-1d886966170c-operator-scripts\") pod \"keystone-3d8b-account-create-update-h4wh9\" (UID: \"b938784f-b544-4020-a421-1d886966170c\") " pod="openstack/keystone-3d8b-account-create-update-h4wh9" Feb 19 03:39:34.323520 master-0 kubenswrapper[33867]: I0219 03:39:34.322785 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28d69938-9e32-4f94-afcd-db24ad9fde34-operator-scripts\") pod \"keystone-db-create-fdbk4\" (UID: \"28d69938-9e32-4f94-afcd-db24ad9fde34\") " pod="openstack/keystone-db-create-fdbk4" Feb 19 03:39:34.347309 master-0 kubenswrapper[33867]: 
I0219 03:39:34.345519 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9c7b\" (UniqueName: \"kubernetes.io/projected/28d69938-9e32-4f94-afcd-db24ad9fde34-kube-api-access-h9c7b\") pod \"keystone-db-create-fdbk4\" (UID: \"28d69938-9e32-4f94-afcd-db24ad9fde34\") " pod="openstack/keystone-db-create-fdbk4" Feb 19 03:39:34.347309 master-0 kubenswrapper[33867]: I0219 03:39:34.346755 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xlsm\" (UniqueName: \"kubernetes.io/projected/b938784f-b544-4020-a421-1d886966170c-kube-api-access-4xlsm\") pod \"keystone-3d8b-account-create-update-h4wh9\" (UID: \"b938784f-b544-4020-a421-1d886966170c\") " pod="openstack/keystone-3d8b-account-create-update-h4wh9" Feb 19 03:39:34.365522 master-0 kubenswrapper[33867]: I0219 03:39:34.365445 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fdbk4" Feb 19 03:39:34.420826 master-0 kubenswrapper[33867]: I0219 03:39:34.420596 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe3535d-e926-4941-ac29-a9af927e1fd9-operator-scripts\") pod \"placement-db-create-j9b2d\" (UID: \"4fe3535d-e926-4941-ac29-a9af927e1fd9\") " pod="openstack/placement-db-create-j9b2d" Feb 19 03:39:34.420826 master-0 kubenswrapper[33867]: I0219 03:39:34.420680 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqpcb\" (UniqueName: \"kubernetes.io/projected/68e3386c-4280-492d-b87c-f6d9ae925f35-kube-api-access-xqpcb\") pod \"placement-8a6d-account-create-update-2gsvr\" (UID: \"68e3386c-4280-492d-b87c-f6d9ae925f35\") " pod="openstack/placement-8a6d-account-create-update-2gsvr" Feb 19 03:39:34.420826 master-0 kubenswrapper[33867]: I0219 03:39:34.420782 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45zs7\" (UniqueName: \"kubernetes.io/projected/4fe3535d-e926-4941-ac29-a9af927e1fd9-kube-api-access-45zs7\") pod \"placement-db-create-j9b2d\" (UID: \"4fe3535d-e926-4941-ac29-a9af927e1fd9\") " pod="openstack/placement-db-create-j9b2d" Feb 19 03:39:34.421188 master-0 kubenswrapper[33867]: I0219 03:39:34.421065 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e3386c-4280-492d-b87c-f6d9ae925f35-operator-scripts\") pod \"placement-8a6d-account-create-update-2gsvr\" (UID: \"68e3386c-4280-492d-b87c-f6d9ae925f35\") " pod="openstack/placement-8a6d-account-create-update-2gsvr" Feb 19 03:39:34.455722 master-0 kubenswrapper[33867]: I0219 03:39:34.455611 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3d8b-account-create-update-h4wh9" Feb 19 03:39:34.532294 master-0 kubenswrapper[33867]: I0219 03:39:34.528616 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45zs7\" (UniqueName: \"kubernetes.io/projected/4fe3535d-e926-4941-ac29-a9af927e1fd9-kube-api-access-45zs7\") pod \"placement-db-create-j9b2d\" (UID: \"4fe3535d-e926-4941-ac29-a9af927e1fd9\") " pod="openstack/placement-db-create-j9b2d" Feb 19 03:39:34.532294 master-0 kubenswrapper[33867]: I0219 03:39:34.528737 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e3386c-4280-492d-b87c-f6d9ae925f35-operator-scripts\") pod \"placement-8a6d-account-create-update-2gsvr\" (UID: \"68e3386c-4280-492d-b87c-f6d9ae925f35\") " pod="openstack/placement-8a6d-account-create-update-2gsvr" Feb 19 03:39:34.532294 master-0 kubenswrapper[33867]: I0219 03:39:34.529820 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e3386c-4280-492d-b87c-f6d9ae925f35-operator-scripts\") pod \"placement-8a6d-account-create-update-2gsvr\" (UID: \"68e3386c-4280-492d-b87c-f6d9ae925f35\") " pod="openstack/placement-8a6d-account-create-update-2gsvr" Feb 19 03:39:34.539074 master-0 kubenswrapper[33867]: I0219 03:39:34.537169 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe3535d-e926-4941-ac29-a9af927e1fd9-operator-scripts\") pod \"placement-db-create-j9b2d\" (UID: \"4fe3535d-e926-4941-ac29-a9af927e1fd9\") " pod="openstack/placement-db-create-j9b2d" Feb 19 03:39:34.539074 master-0 kubenswrapper[33867]: I0219 03:39:34.537229 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqpcb\" (UniqueName: \"kubernetes.io/projected/68e3386c-4280-492d-b87c-f6d9ae925f35-kube-api-access-xqpcb\") pod \"placement-8a6d-account-create-update-2gsvr\" (UID: \"68e3386c-4280-492d-b87c-f6d9ae925f35\") " pod="openstack/placement-8a6d-account-create-update-2gsvr" Feb 19 03:39:34.540383 master-0 kubenswrapper[33867]: I0219 03:39:34.540334 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe3535d-e926-4941-ac29-a9af927e1fd9-operator-scripts\") pod \"placement-db-create-j9b2d\" (UID: \"4fe3535d-e926-4941-ac29-a9af927e1fd9\") " pod="openstack/placement-db-create-j9b2d" Feb 19 03:39:34.566438 master-0 kubenswrapper[33867]: I0219 03:39:34.566381 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqpcb\" (UniqueName: \"kubernetes.io/projected/68e3386c-4280-492d-b87c-f6d9ae925f35-kube-api-access-xqpcb\") pod \"placement-8a6d-account-create-update-2gsvr\" (UID: \"68e3386c-4280-492d-b87c-f6d9ae925f35\") " pod="openstack/placement-8a6d-account-create-update-2gsvr" Feb 19 03:39:34.575473 master-0 kubenswrapper[33867]: I0219 03:39:34.575212 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45zs7\" (UniqueName: \"kubernetes.io/projected/4fe3535d-e926-4941-ac29-a9af927e1fd9-kube-api-access-45zs7\") pod \"placement-db-create-j9b2d\" (UID: \"4fe3535d-e926-4941-ac29-a9af927e1fd9\") " pod="openstack/placement-db-create-j9b2d" Feb 19 03:39:34.577089 master-0 kubenswrapper[33867]: I0219 03:39:34.577040 33867 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/placement-8a6d-account-create-update-2gsvr" Feb 19 03:39:34.587008 master-0 kubenswrapper[33867]: I0219 03:39:34.586941 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j9b2d" Feb 19 03:39:34.879289 master-0 kubenswrapper[33867]: I0219 03:39:34.875932 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fdbk4"] Feb 19 03:39:35.061629 master-0 kubenswrapper[33867]: I0219 03:39:35.057008 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:35.061629 master-0 kubenswrapper[33867]: E0219 03:39:35.057432 33867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 03:39:35.061629 master-0 kubenswrapper[33867]: E0219 03:39:35.057451 33867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 03:39:35.061629 master-0 kubenswrapper[33867]: E0219 03:39:35.057502 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift podName:aea865d8-841e-4326-9833-ee28b81c18e1 nodeName:}" failed. No retries permitted until 2026-02-19 03:39:43.057483838 +0000 UTC m=+988.354154449 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift") pod "swift-storage-0" (UID: "aea865d8-841e-4326-9833-ee28b81c18e1") : configmap "swift-ring-files" not found Feb 19 03:39:35.098554 master-0 kubenswrapper[33867]: I0219 03:39:35.098474 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3d8b-account-create-update-h4wh9"] Feb 19 03:39:35.100383 master-0 kubenswrapper[33867]: W0219 03:39:35.099968 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb938784f_b544_4020_a421_1d886966170c.slice/crio-54cc78a0413e140d82c199abcb66efa6826be9969fd6352018a9fd7e5b014519 WatchSource:0}: Error finding container 54cc78a0413e140d82c199abcb66efa6826be9969fd6352018a9fd7e5b014519: Status 404 returned error can't find the container with id 54cc78a0413e140d82c199abcb66efa6826be9969fd6352018a9fd7e5b014519 Feb 19 03:39:35.202580 master-0 kubenswrapper[33867]: I0219 03:39:35.202445 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8a6d-account-create-update-2gsvr"] Feb 19 03:39:35.223437 master-0 kubenswrapper[33867]: I0219 03:39:35.222401 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j9b2d"] Feb 19 03:39:35.252828 master-0 kubenswrapper[33867]: W0219 03:39:35.252763 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fe3535d_e926_4941_ac29_a9af927e1fd9.slice/crio-0709dbc3fd4f679f85e2e0245dc75623ef8d5c11759d452cd225f7f8db68f9d7 WatchSource:0}: Error finding container 0709dbc3fd4f679f85e2e0245dc75623ef8d5c11759d452cd225f7f8db68f9d7: Status 404 returned error can't find the container with id 0709dbc3fd4f679f85e2e0245dc75623ef8d5c11759d452cd225f7f8db68f9d7 
Feb 19 03:39:35.432624 master-0 kubenswrapper[33867]: I0219 03:39:35.432569 33867 generic.go:334] "Generic (PLEG): container finished" podID="28d69938-9e32-4f94-afcd-db24ad9fde34" containerID="eb9f0acfaeaed9258806140fb6ca98f4342cb34f558dcb95cd790bccb2aa1683" exitCode=0 Feb 19 03:39:35.432866 master-0 kubenswrapper[33867]: I0219 03:39:35.432628 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fdbk4" event={"ID":"28d69938-9e32-4f94-afcd-db24ad9fde34","Type":"ContainerDied","Data":"eb9f0acfaeaed9258806140fb6ca98f4342cb34f558dcb95cd790bccb2aa1683"} Feb 19 03:39:35.432866 master-0 kubenswrapper[33867]: I0219 03:39:35.432832 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fdbk4" event={"ID":"28d69938-9e32-4f94-afcd-db24ad9fde34","Type":"ContainerStarted","Data":"ab44a9af7e0685f709414c114fafdb0738a78f02fcc4c310666f27f1fe9885ef"} Feb 19 03:39:35.435102 master-0 kubenswrapper[33867]: I0219 03:39:35.434986 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j9b2d" event={"ID":"4fe3535d-e926-4941-ac29-a9af927e1fd9","Type":"ContainerStarted","Data":"0709dbc3fd4f679f85e2e0245dc75623ef8d5c11759d452cd225f7f8db68f9d7"} Feb 19 03:39:35.437420 master-0 kubenswrapper[33867]: I0219 03:39:35.437344 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3d8b-account-create-update-h4wh9" event={"ID":"b938784f-b544-4020-a421-1d886966170c","Type":"ContainerStarted","Data":"69ded7c6d4e31baa8649c379a1704c0f8d302f46777238996c09c9500fb0c94f"} Feb 19 03:39:35.437523 master-0 kubenswrapper[33867]: I0219 03:39:35.437492 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3d8b-account-create-update-h4wh9" event={"ID":"b938784f-b544-4020-a421-1d886966170c","Type":"ContainerStarted","Data":"54cc78a0413e140d82c199abcb66efa6826be9969fd6352018a9fd7e5b014519"} Feb 19 03:39:35.438894 master-0 kubenswrapper[33867]: I0219 03:39:35.438860 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8a6d-account-create-update-2gsvr" event={"ID":"68e3386c-4280-492d-b87c-f6d9ae925f35","Type":"ContainerStarted","Data":"8124598d94621f2a22cf08cadf6d8b33fd179c30045becc87d95a15fd6cf8771"} Feb 19 03:39:35.477982 master-0 kubenswrapper[33867]: I0219 03:39:35.477855 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-3d8b-account-create-update-h4wh9" podStartSLOduration=2.47782672 podStartE2EDuration="2.47782672s" podCreationTimestamp="2026-02-19 03:39:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:39:35.474338251 +0000 UTC m=+980.771008862" watchObservedRunningTime="2026-02-19 03:39:35.47782672 +0000 UTC m=+980.774497331" Feb 19 03:39:35.488660 master-0 kubenswrapper[33867]: I0219 03:39:35.488493 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:39:35.643689 master-0 kubenswrapper[33867]: I0219 03:39:35.643544 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-tkr48"] Feb 19 03:39:35.643914 master-0 kubenswrapper[33867]: I0219 03:39:35.643797 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" podUID="f979b596-ca78-48f5-9293-10a51736d202" containerName="dnsmasq-dns" 
containerID="cri-o://a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6" gracePeriod=10 Feb 19 03:39:36.411806 master-0 kubenswrapper[33867]: I0219 03:39:36.411731 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:36.481359 master-0 kubenswrapper[33867]: I0219 03:39:36.479930 33867 generic.go:334] "Generic (PLEG): container finished" podID="4fe3535d-e926-4941-ac29-a9af927e1fd9" containerID="603c2ca1b1f7567e4c614f3f791f9fac4b5b6b3ae5745160f6c0a3f7fc2fb736" exitCode=0 Feb 19 03:39:36.481359 master-0 kubenswrapper[33867]: I0219 03:39:36.479997 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j9b2d" event={"ID":"4fe3535d-e926-4941-ac29-a9af927e1fd9","Type":"ContainerDied","Data":"603c2ca1b1f7567e4c614f3f791f9fac4b5b6b3ae5745160f6c0a3f7fc2fb736"} Feb 19 03:39:36.484384 master-0 kubenswrapper[33867]: I0219 03:39:36.483344 33867 generic.go:334] "Generic (PLEG): container finished" podID="b938784f-b544-4020-a421-1d886966170c" containerID="69ded7c6d4e31baa8649c379a1704c0f8d302f46777238996c09c9500fb0c94f" exitCode=0 Feb 19 03:39:36.484384 master-0 kubenswrapper[33867]: I0219 03:39:36.483413 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3d8b-account-create-update-h4wh9" event={"ID":"b938784f-b544-4020-a421-1d886966170c","Type":"ContainerDied","Data":"69ded7c6d4e31baa8649c379a1704c0f8d302f46777238996c09c9500fb0c94f"} Feb 19 03:39:36.488380 master-0 kubenswrapper[33867]: I0219 03:39:36.488334 33867 generic.go:334] "Generic (PLEG): container finished" podID="68e3386c-4280-492d-b87c-f6d9ae925f35" containerID="d1072ee730f646ef2d1e47eafdf25e50e3f661e876948f9f506020eae9fa8722" exitCode=0 Feb 19 03:39:36.488494 master-0 kubenswrapper[33867]: I0219 03:39:36.488401 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8a6d-account-create-update-2gsvr" event={"ID":"68e3386c-4280-492d-b87c-f6d9ae925f35","Type":"ContainerDied","Data":"d1072ee730f646ef2d1e47eafdf25e50e3f661e876948f9f506020eae9fa8722"} Feb 19 03:39:36.492974 master-0 kubenswrapper[33867]: I0219 03:39:36.492915 33867 generic.go:334] "Generic (PLEG): container finished" podID="f979b596-ca78-48f5-9293-10a51736d202" containerID="a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6" exitCode=0 Feb 19 03:39:36.493188 master-0 kubenswrapper[33867]: I0219 03:39:36.493160 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" Feb 19 03:39:36.494016 master-0 kubenswrapper[33867]: I0219 03:39:36.493983 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" event={"ID":"f979b596-ca78-48f5-9293-10a51736d202","Type":"ContainerDied","Data":"a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6"} Feb 19 03:39:36.494059 master-0 kubenswrapper[33867]: I0219 03:39:36.494022 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-tkr48" event={"ID":"f979b596-ca78-48f5-9293-10a51736d202","Type":"ContainerDied","Data":"242f80d572aa94e5f9f58233683d1bf3fc3cbbaadc1c02913c93801538671002"} Feb 19 03:39:36.494059 master-0 kubenswrapper[33867]: I0219 03:39:36.494041 33867 scope.go:117] "RemoveContainer" containerID="a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6" Feb 19 03:39:36.514412 master-0 kubenswrapper[33867]: I0219 03:39:36.514342 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-ovsdbserver-nb\") pod \"f979b596-ca78-48f5-9293-10a51736d202\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " Feb 19 03:39:36.514639 master-0 kubenswrapper[33867]: I0219 03:39:36.514580 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-config\") pod \"f979b596-ca78-48f5-9293-10a51736d202\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " Feb 19 03:39:36.514760 master-0 kubenswrapper[33867]: I0219 03:39:36.514736 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgq5x\" (UniqueName: \"kubernetes.io/projected/f979b596-ca78-48f5-9293-10a51736d202-kube-api-access-wgq5x\") pod \"f979b596-ca78-48f5-9293-10a51736d202\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " Feb 19 03:39:36.514818 master-0 kubenswrapper[33867]: I0219 03:39:36.514798 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-dns-svc\") pod \"f979b596-ca78-48f5-9293-10a51736d202\" (UID: \"f979b596-ca78-48f5-9293-10a51736d202\") " Feb 19 03:39:36.547209 master-0 kubenswrapper[33867]: I0219 03:39:36.544817 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f979b596-ca78-48f5-9293-10a51736d202-kube-api-access-wgq5x" (OuterVolumeSpecName: "kube-api-access-wgq5x") pod "f979b596-ca78-48f5-9293-10a51736d202" (UID: "f979b596-ca78-48f5-9293-10a51736d202"). InnerVolumeSpecName "kube-api-access-wgq5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:36.553599 master-0 kubenswrapper[33867]: I0219 03:39:36.552472 33867 scope.go:117] "RemoveContainer" containerID="db9cf904ef6356560db88317346b31d4f4d03ef5382d3929751348d92d784cf5" Feb 19 03:39:36.617658 master-0 kubenswrapper[33867]: I0219 03:39:36.617591 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgq5x\" (UniqueName: \"kubernetes.io/projected/f979b596-ca78-48f5-9293-10a51736d202-kube-api-access-wgq5x\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:36.633304 master-0 kubenswrapper[33867]: I0219 03:39:36.632341 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-config" (OuterVolumeSpecName: "config") pod "f979b596-ca78-48f5-9293-10a51736d202" (UID: "f979b596-ca78-48f5-9293-10a51736d202"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:36.647050 master-0 kubenswrapper[33867]: I0219 03:39:36.646900 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f979b596-ca78-48f5-9293-10a51736d202" (UID: "f979b596-ca78-48f5-9293-10a51736d202"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:36.658300 master-0 kubenswrapper[33867]: I0219 03:39:36.658212 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f979b596-ca78-48f5-9293-10a51736d202" (UID: "f979b596-ca78-48f5-9293-10a51736d202"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:36.720398 master-0 kubenswrapper[33867]: I0219 03:39:36.720243 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:36.720398 master-0 kubenswrapper[33867]: I0219 03:39:36.720363 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:36.720398 master-0 kubenswrapper[33867]: I0219 03:39:36.720382 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f979b596-ca78-48f5-9293-10a51736d202-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:36.770555 master-0 kubenswrapper[33867]: I0219 03:39:36.768542 33867 scope.go:117] "RemoveContainer" containerID="a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6" Feb 19 03:39:36.772171 master-0 kubenswrapper[33867]: E0219 03:39:36.770758 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6\": container with ID starting with a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6 not found: ID does not exist" containerID="a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6" Feb 19 03:39:36.772171 master-0 kubenswrapper[33867]: I0219 03:39:36.770793 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6"} err="failed to get container status \"a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6\": rpc error: code = NotFound desc = could not find container \"a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6\": container with ID starting with a539558338e6958f2839ed892c4c6a6de7b5f21873b8aa70ad42ad71958f06c6 not found: ID does not exist" Feb 19 03:39:36.772171 master-0 kubenswrapper[33867]: I0219 03:39:36.770818 33867 scope.go:117] "RemoveContainer" containerID="db9cf904ef6356560db88317346b31d4f4d03ef5382d3929751348d92d784cf5" Feb 19 03:39:36.772171 master-0 kubenswrapper[33867]: E0219 03:39:36.771734 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db9cf904ef6356560db88317346b31d4f4d03ef5382d3929751348d92d784cf5\": container with ID starting with db9cf904ef6356560db88317346b31d4f4d03ef5382d3929751348d92d784cf5 not found: ID does not exist" containerID="db9cf904ef6356560db88317346b31d4f4d03ef5382d3929751348d92d784cf5" Feb 19 03:39:36.772171 master-0 kubenswrapper[33867]: I0219 03:39:36.771822 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9cf904ef6356560db88317346b31d4f4d03ef5382d3929751348d92d784cf5"} err="failed to get container status \"db9cf904ef6356560db88317346b31d4f4d03ef5382d3929751348d92d784cf5\": rpc error: code = NotFound desc = could not find container \"db9cf904ef6356560db88317346b31d4f4d03ef5382d3929751348d92d784cf5\": container with ID starting with db9cf904ef6356560db88317346b31d4f4d03ef5382d3929751348d92d784cf5 not found: ID does not exist" Feb 19 03:39:36.860850 master-0 kubenswrapper[33867]: I0219 03:39:36.857213 33867 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-tkr48"] Feb 19 03:39:36.869045 master-0 kubenswrapper[33867]: I0219 03:39:36.868935 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-tkr48"] Feb 19 03:39:36.976934 master-0 kubenswrapper[33867]: I0219 03:39:36.976852 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f979b596-ca78-48f5-9293-10a51736d202" path="/var/lib/kubelet/pods/f979b596-ca78-48f5-9293-10a51736d202/volumes" Feb 19 03:39:37.029881 master-0 kubenswrapper[33867]: I0219 03:39:37.029828 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fdbk4" Feb 19 03:39:37.151507 master-0 kubenswrapper[33867]: I0219 03:39:37.151426 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28d69938-9e32-4f94-afcd-db24ad9fde34-operator-scripts\") pod \"28d69938-9e32-4f94-afcd-db24ad9fde34\" (UID: \"28d69938-9e32-4f94-afcd-db24ad9fde34\") " Feb 19 03:39:37.151773 master-0 kubenswrapper[33867]: I0219 03:39:37.151529 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9c7b\" (UniqueName: \"kubernetes.io/projected/28d69938-9e32-4f94-afcd-db24ad9fde34-kube-api-access-h9c7b\") pod \"28d69938-9e32-4f94-afcd-db24ad9fde34\" (UID: \"28d69938-9e32-4f94-afcd-db24ad9fde34\") " Feb 19 03:39:37.152216 master-0 kubenswrapper[33867]: I0219 03:39:37.152154 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28d69938-9e32-4f94-afcd-db24ad9fde34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28d69938-9e32-4f94-afcd-db24ad9fde34" (UID: "28d69938-9e32-4f94-afcd-db24ad9fde34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:37.152583 master-0 kubenswrapper[33867]: I0219 03:39:37.152555 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28d69938-9e32-4f94-afcd-db24ad9fde34-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:37.154772 master-0 kubenswrapper[33867]: I0219 03:39:37.154714 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d69938-9e32-4f94-afcd-db24ad9fde34-kube-api-access-h9c7b" (OuterVolumeSpecName: "kube-api-access-h9c7b") pod "28d69938-9e32-4f94-afcd-db24ad9fde34" (UID: "28d69938-9e32-4f94-afcd-db24ad9fde34"). InnerVolumeSpecName "kube-api-access-h9c7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:37.255368 master-0 kubenswrapper[33867]: I0219 03:39:37.255319 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9c7b\" (UniqueName: \"kubernetes.io/projected/28d69938-9e32-4f94-afcd-db24ad9fde34-kube-api-access-h9c7b\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:37.509239 master-0 kubenswrapper[33867]: I0219 03:39:37.509077 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fdbk4" Feb 19 03:39:37.510472 master-0 kubenswrapper[33867]: I0219 03:39:37.510387 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fdbk4" event={"ID":"28d69938-9e32-4f94-afcd-db24ad9fde34","Type":"ContainerDied","Data":"ab44a9af7e0685f709414c114fafdb0738a78f02fcc4c310666f27f1fe9885ef"} Feb 19 03:39:37.510472 master-0 kubenswrapper[33867]: I0219 03:39:37.510467 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab44a9af7e0685f709414c114fafdb0738a78f02fcc4c310666f27f1fe9885ef" Feb 19 03:39:37.678474 master-0 kubenswrapper[33867]: I0219 03:39:37.678356 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-nzmld"] Feb 19 03:39:37.679080 master-0 kubenswrapper[33867]: E0219 03:39:37.679039 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f979b596-ca78-48f5-9293-10a51736d202" containerName="dnsmasq-dns" Feb 19 03:39:37.679080 master-0 kubenswrapper[33867]: I0219 03:39:37.679065 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f979b596-ca78-48f5-9293-10a51736d202" containerName="dnsmasq-dns" Feb 19 03:39:37.679218 master-0 kubenswrapper[33867]: E0219 03:39:37.679088 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d69938-9e32-4f94-afcd-db24ad9fde34" containerName="mariadb-database-create" Feb 19 03:39:37.679218 master-0 kubenswrapper[33867]: I0219 03:39:37.679097 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d69938-9e32-4f94-afcd-db24ad9fde34" containerName="mariadb-database-create" Feb 19 03:39:37.679218 master-0 kubenswrapper[33867]: E0219 03:39:37.679116 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f979b596-ca78-48f5-9293-10a51736d202" containerName="init" Feb 19 03:39:37.679218 master-0 kubenswrapper[33867]: I0219 03:39:37.679124 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f979b596-ca78-48f5-9293-10a51736d202" containerName="init" Feb 19 03:39:37.679554 master-0 kubenswrapper[33867]: I0219 03:39:37.679443 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d69938-9e32-4f94-afcd-db24ad9fde34" containerName="mariadb-database-create" Feb 19 03:39:37.679554 master-0 kubenswrapper[33867]: I0219 03:39:37.679485 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f979b596-ca78-48f5-9293-10a51736d202" containerName="dnsmasq-dns" Feb 19 03:39:37.680617 master-0 kubenswrapper[33867]: I0219 03:39:37.680580 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-nzmld" Feb 19 03:39:37.723050 master-0 kubenswrapper[33867]: I0219 03:39:37.721556 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nzmld"] Feb 19 03:39:37.769285 master-0 kubenswrapper[33867]: E0219 03:39:37.766959 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28d69938_9e32_4f94_afcd_db24ad9fde34.slice/crio-ab44a9af7e0685f709414c114fafdb0738a78f02fcc4c310666f27f1fe9885ef\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28d69938_9e32_4f94_afcd_db24ad9fde34.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:39:37.769285 master-0 kubenswrapper[33867]: E0219 03:39:37.767167 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28d69938_9e32_4f94_afcd_db24ad9fde34.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28d69938_9e32_4f94_afcd_db24ad9fde34.slice/crio-ab44a9af7e0685f709414c114fafdb0738a78f02fcc4c310666f27f1fe9885ef\": RecentStats: unable to find data in memory cache]" Feb 19 03:39:37.772597 master-0 kubenswrapper[33867]: I0219 03:39:37.772348 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8758db66-f063-425c-b8a4-3c6b519d7775-operator-scripts\") pod \"glance-db-create-nzmld\" (UID: \"8758db66-f063-425c-b8a4-3c6b519d7775\") " pod="openstack/glance-db-create-nzmld" Feb 19 03:39:37.774898 master-0 kubenswrapper[33867]: I0219 03:39:37.772962 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v282x\" (UniqueName: \"kubernetes.io/projected/8758db66-f063-425c-b8a4-3c6b519d7775-kube-api-access-v282x\") pod \"glance-db-create-nzmld\" (UID: \"8758db66-f063-425c-b8a4-3c6b519d7775\") " pod="openstack/glance-db-create-nzmld" Feb 19 03:39:37.817591 master-0 kubenswrapper[33867]: I0219 03:39:37.817506 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8e36-account-create-update-kvwtv"] Feb 19 03:39:37.822034 master-0 kubenswrapper[33867]: I0219 03:39:37.821990 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8e36-account-create-update-kvwtv" Feb 19 03:39:37.825592 master-0 kubenswrapper[33867]: I0219 03:39:37.825552 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 03:39:37.862529 master-0 kubenswrapper[33867]: I0219 03:39:37.862448 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8e36-account-create-update-kvwtv"] Feb 19 03:39:37.875277 master-0 kubenswrapper[33867]: I0219 03:39:37.875175 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v282x\" (UniqueName: \"kubernetes.io/projected/8758db66-f063-425c-b8a4-3c6b519d7775-kube-api-access-v282x\") pod \"glance-db-create-nzmld\" (UID: \"8758db66-f063-425c-b8a4-3c6b519d7775\") " pod="openstack/glance-db-create-nzmld" Feb 19 03:39:37.875583 master-0 kubenswrapper[33867]: I0219 03:39:37.875374 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rljf\" (UniqueName: \"kubernetes.io/projected/e5752474-93c5-40bc-b4c5-ac1fb797a211-kube-api-access-9rljf\") pod \"glance-8e36-account-create-update-kvwtv\" (UID: \"e5752474-93c5-40bc-b4c5-ac1fb797a211\") " pod="openstack/glance-8e36-account-create-update-kvwtv" Feb 19 03:39:37.875583 master-0 kubenswrapper[33867]: I0219 03:39:37.875413 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8758db66-f063-425c-b8a4-3c6b519d7775-operator-scripts\") pod \"glance-db-create-nzmld\" (UID: \"8758db66-f063-425c-b8a4-3c6b519d7775\") " pod="openstack/glance-db-create-nzmld" Feb 19 03:39:37.875583 master-0 kubenswrapper[33867]: I0219 03:39:37.875457 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5752474-93c5-40bc-b4c5-ac1fb797a211-operator-scripts\") pod \"glance-8e36-account-create-update-kvwtv\" (UID: \"e5752474-93c5-40bc-b4c5-ac1fb797a211\") " pod="openstack/glance-8e36-account-create-update-kvwtv" Feb 19 03:39:37.876595 master-0 kubenswrapper[33867]: I0219 03:39:37.876567 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8758db66-f063-425c-b8a4-3c6b519d7775-operator-scripts\") pod \"glance-db-create-nzmld\" (UID: \"8758db66-f063-425c-b8a4-3c6b519d7775\") " pod="openstack/glance-db-create-nzmld" Feb 19 03:39:37.897747 master-0 kubenswrapper[33867]: I0219 03:39:37.897712 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v282x\" (UniqueName: \"kubernetes.io/projected/8758db66-f063-425c-b8a4-3c6b519d7775-kube-api-access-v282x\") pod \"glance-db-create-nzmld\" (UID: \"8758db66-f063-425c-b8a4-3c6b519d7775\") " pod="openstack/glance-db-create-nzmld" Feb 19 03:39:37.976964 master-0 kubenswrapper[33867]: I0219 03:39:37.976899 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rljf\" (UniqueName: \"kubernetes.io/projected/e5752474-93c5-40bc-b4c5-ac1fb797a211-kube-api-access-9rljf\") pod \"glance-8e36-account-create-update-kvwtv\" (UID: \"e5752474-93c5-40bc-b4c5-ac1fb797a211\") " pod="openstack/glance-8e36-account-create-update-kvwtv" Feb 19 03:39:37.977196 master-0 kubenswrapper[33867]: I0219 03:39:37.977002 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/e5752474-93c5-40bc-b4c5-ac1fb797a211-operator-scripts\") pod \"glance-8e36-account-create-update-kvwtv\" (UID: \"e5752474-93c5-40bc-b4c5-ac1fb797a211\") " pod="openstack/glance-8e36-account-create-update-kvwtv" Feb 19 03:39:37.978025 master-0 kubenswrapper[33867]: I0219 03:39:37.977991 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5752474-93c5-40bc-b4c5-ac1fb797a211-operator-scripts\") pod \"glance-8e36-account-create-update-kvwtv\" (UID: \"e5752474-93c5-40bc-b4c5-ac1fb797a211\") " pod="openstack/glance-8e36-account-create-update-kvwtv" Feb 19 03:39:37.994742 master-0 kubenswrapper[33867]: I0219 03:39:37.994709 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j9b2d" Feb 19 03:39:37.997129 master-0 kubenswrapper[33867]: I0219 03:39:37.997067 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rljf\" (UniqueName: \"kubernetes.io/projected/e5752474-93c5-40bc-b4c5-ac1fb797a211-kube-api-access-9rljf\") pod \"glance-8e36-account-create-update-kvwtv\" (UID: \"e5752474-93c5-40bc-b4c5-ac1fb797a211\") " pod="openstack/glance-8e36-account-create-update-kvwtv" Feb 19 03:39:38.000943 master-0 kubenswrapper[33867]: I0219 03:39:38.000913 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nzmld" Feb 19 03:39:38.083828 master-0 kubenswrapper[33867]: I0219 03:39:38.083430 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45zs7\" (UniqueName: \"kubernetes.io/projected/4fe3535d-e926-4941-ac29-a9af927e1fd9-kube-api-access-45zs7\") pod \"4fe3535d-e926-4941-ac29-a9af927e1fd9\" (UID: \"4fe3535d-e926-4941-ac29-a9af927e1fd9\") " Feb 19 03:39:38.083828 master-0 kubenswrapper[33867]: I0219 03:39:38.083737 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe3535d-e926-4941-ac29-a9af927e1fd9-operator-scripts\") pod \"4fe3535d-e926-4941-ac29-a9af927e1fd9\" (UID: \"4fe3535d-e926-4941-ac29-a9af927e1fd9\") " Feb 19 03:39:38.086245 master-0 kubenswrapper[33867]: I0219 03:39:38.084512 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fe3535d-e926-4941-ac29-a9af927e1fd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fe3535d-e926-4941-ac29-a9af927e1fd9" (UID: "4fe3535d-e926-4941-ac29-a9af927e1fd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:38.086722 master-0 kubenswrapper[33867]: I0219 03:39:38.086683 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe3535d-e926-4941-ac29-a9af927e1fd9-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:38.092386 master-0 kubenswrapper[33867]: I0219 03:39:38.092310 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe3535d-e926-4941-ac29-a9af927e1fd9-kube-api-access-45zs7" (OuterVolumeSpecName: "kube-api-access-45zs7") pod "4fe3535d-e926-4941-ac29-a9af927e1fd9" (UID: "4fe3535d-e926-4941-ac29-a9af927e1fd9"). InnerVolumeSpecName "kube-api-access-45zs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:38.166648 master-0 kubenswrapper[33867]: I0219 03:39:38.160912 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8e36-account-create-update-kvwtv" Feb 19 03:39:38.192902 master-0 kubenswrapper[33867]: I0219 03:39:38.192830 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45zs7\" (UniqueName: \"kubernetes.io/projected/4fe3535d-e926-4941-ac29-a9af927e1fd9-kube-api-access-45zs7\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:38.328628 master-0 kubenswrapper[33867]: I0219 03:39:38.328588 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3d8b-account-create-update-h4wh9" Feb 19 03:39:38.365412 master-0 kubenswrapper[33867]: I0219 03:39:38.364630 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8a6d-account-create-update-2gsvr" Feb 19 03:39:38.395719 master-0 kubenswrapper[33867]: I0219 03:39:38.395670 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xlsm\" (UniqueName: \"kubernetes.io/projected/b938784f-b544-4020-a421-1d886966170c-kube-api-access-4xlsm\") pod \"b938784f-b544-4020-a421-1d886966170c\" (UID: \"b938784f-b544-4020-a421-1d886966170c\") " Feb 19 03:39:38.396129 master-0 kubenswrapper[33867]: I0219 03:39:38.396108 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b938784f-b544-4020-a421-1d886966170c-operator-scripts\") pod \"b938784f-b544-4020-a421-1d886966170c\" (UID: \"b938784f-b544-4020-a421-1d886966170c\") " Feb 19 03:39:38.396934 master-0 kubenswrapper[33867]: I0219 03:39:38.396892 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b938784f-b544-4020-a421-1d886966170c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b938784f-b544-4020-a421-1d886966170c" (UID: "b938784f-b544-4020-a421-1d886966170c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:38.399352 master-0 kubenswrapper[33867]: I0219 03:39:38.399303 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b938784f-b544-4020-a421-1d886966170c-kube-api-access-4xlsm" (OuterVolumeSpecName: "kube-api-access-4xlsm") pod "b938784f-b544-4020-a421-1d886966170c" (UID: "b938784f-b544-4020-a421-1d886966170c"). InnerVolumeSpecName "kube-api-access-4xlsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:38.500362 master-0 kubenswrapper[33867]: I0219 03:39:38.500309 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e3386c-4280-492d-b87c-f6d9ae925f35-operator-scripts\") pod \"68e3386c-4280-492d-b87c-f6d9ae925f35\" (UID: \"68e3386c-4280-492d-b87c-f6d9ae925f35\") " Feb 19 03:39:38.500865 master-0 kubenswrapper[33867]: I0219 03:39:38.500838 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqpcb\" (UniqueName: \"kubernetes.io/projected/68e3386c-4280-492d-b87c-f6d9ae925f35-kube-api-access-xqpcb\") pod \"68e3386c-4280-492d-b87c-f6d9ae925f35\" (UID: \"68e3386c-4280-492d-b87c-f6d9ae925f35\") " Feb 19 03:39:38.500976 master-0 kubenswrapper[33867]: I0219 03:39:38.500945 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e3386c-4280-492d-b87c-f6d9ae925f35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68e3386c-4280-492d-b87c-f6d9ae925f35" (UID: "68e3386c-4280-492d-b87c-f6d9ae925f35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:38.501707 master-0 kubenswrapper[33867]: I0219 03:39:38.501686 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e3386c-4280-492d-b87c-f6d9ae925f35-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:38.501821 master-0 kubenswrapper[33867]: I0219 03:39:38.501805 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xlsm\" (UniqueName: \"kubernetes.io/projected/b938784f-b544-4020-a421-1d886966170c-kube-api-access-4xlsm\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:38.501926 master-0 kubenswrapper[33867]: I0219 03:39:38.501913 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b938784f-b544-4020-a421-1d886966170c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:38.504544 master-0 kubenswrapper[33867]: I0219 03:39:38.504299 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e3386c-4280-492d-b87c-f6d9ae925f35-kube-api-access-xqpcb" (OuterVolumeSpecName: "kube-api-access-xqpcb") pod "68e3386c-4280-492d-b87c-f6d9ae925f35" (UID: "68e3386c-4280-492d-b87c-f6d9ae925f35"). InnerVolumeSpecName "kube-api-access-xqpcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:38.526821 master-0 kubenswrapper[33867]: I0219 03:39:38.526707 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3d8b-account-create-update-h4wh9" event={"ID":"b938784f-b544-4020-a421-1d886966170c","Type":"ContainerDied","Data":"54cc78a0413e140d82c199abcb66efa6826be9969fd6352018a9fd7e5b014519"} Feb 19 03:39:38.526821 master-0 kubenswrapper[33867]: I0219 03:39:38.526809 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54cc78a0413e140d82c199abcb66efa6826be9969fd6352018a9fd7e5b014519" Feb 19 03:39:38.527500 master-0 kubenswrapper[33867]: I0219 03:39:38.526889 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3d8b-account-create-update-h4wh9" Feb 19 03:39:38.534028 master-0 kubenswrapper[33867]: I0219 03:39:38.533988 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8a6d-account-create-update-2gsvr" event={"ID":"68e3386c-4280-492d-b87c-f6d9ae925f35","Type":"ContainerDied","Data":"8124598d94621f2a22cf08cadf6d8b33fd179c30045becc87d95a15fd6cf8771"} Feb 19 03:39:38.534205 master-0 kubenswrapper[33867]: I0219 03:39:38.534035 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8124598d94621f2a22cf08cadf6d8b33fd179c30045becc87d95a15fd6cf8771" Feb 19 03:39:38.534205 master-0 kubenswrapper[33867]: I0219 03:39:38.534158 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8a6d-account-create-update-2gsvr" Feb 19 03:39:38.538754 master-0 kubenswrapper[33867]: I0219 03:39:38.538620 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j9b2d" event={"ID":"4fe3535d-e926-4941-ac29-a9af927e1fd9","Type":"ContainerDied","Data":"0709dbc3fd4f679f85e2e0245dc75623ef8d5c11759d452cd225f7f8db68f9d7"} Feb 19 03:39:38.538909 master-0 kubenswrapper[33867]: I0219 03:39:38.538763 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0709dbc3fd4f679f85e2e0245dc75623ef8d5c11759d452cd225f7f8db68f9d7" Feb 19 03:39:38.538909 master-0 kubenswrapper[33867]: I0219 03:39:38.538699 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j9b2d" Feb 19 03:39:38.615133 master-0 kubenswrapper[33867]: I0219 03:39:38.615062 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqpcb\" (UniqueName: \"kubernetes.io/projected/68e3386c-4280-492d-b87c-f6d9ae925f35-kube-api-access-xqpcb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:38.650049 master-0 kubenswrapper[33867]: W0219 03:39:38.649954 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8758db66_f063_425c_b8a4_3c6b519d7775.slice/crio-f54a97be8a56e53a37c02b3bfe9d8e4af12b7f42fe1859d1a3b5d79d983f05fd WatchSource:0}: Error finding container f54a97be8a56e53a37c02b3bfe9d8e4af12b7f42fe1859d1a3b5d79d983f05fd: Status 404 returned error can't find the container with id f54a97be8a56e53a37c02b3bfe9d8e4af12b7f42fe1859d1a3b5d79d983f05fd Feb 19 03:39:38.653916 master-0 kubenswrapper[33867]: I0219 03:39:38.653831 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nzmld"] Feb 19 03:39:38.820178 master-0 kubenswrapper[33867]: I0219 03:39:38.820125 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8e36-account-create-update-kvwtv"] Feb 19 03:39:39.402353 master-0 kubenswrapper[33867]: I0219 03:39:39.402235 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gclpr"] Feb 19 03:39:39.402990 master-0 kubenswrapper[33867]: E0219 03:39:39.402960 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b938784f-b544-4020-a421-1d886966170c" containerName="mariadb-account-create-update" Feb 19 03:39:39.403050 master-0 kubenswrapper[33867]: I0219 03:39:39.402990 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b938784f-b544-4020-a421-1d886966170c" containerName="mariadb-account-create-update" Feb 19 03:39:39.403050 master-0 kubenswrapper[33867]: E0219 03:39:39.403015 
33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe3535d-e926-4941-ac29-a9af927e1fd9" containerName="mariadb-database-create" Feb 19 03:39:39.403050 master-0 kubenswrapper[33867]: I0219 03:39:39.403027 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe3535d-e926-4941-ac29-a9af927e1fd9" containerName="mariadb-database-create" Feb 19 03:39:39.403141 master-0 kubenswrapper[33867]: E0219 03:39:39.403078 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e3386c-4280-492d-b87c-f6d9ae925f35" containerName="mariadb-account-create-update" Feb 19 03:39:39.403141 master-0 kubenswrapper[33867]: I0219 03:39:39.403091 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e3386c-4280-492d-b87c-f6d9ae925f35" containerName="mariadb-account-create-update" Feb 19 03:39:39.403449 master-0 kubenswrapper[33867]: I0219 03:39:39.403425 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e3386c-4280-492d-b87c-f6d9ae925f35" containerName="mariadb-account-create-update" Feb 19 03:39:39.403530 master-0 kubenswrapper[33867]: I0219 03:39:39.403463 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b938784f-b544-4020-a421-1d886966170c" containerName="mariadb-account-create-update" Feb 19 03:39:39.403530 master-0 kubenswrapper[33867]: I0219 03:39:39.403486 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe3535d-e926-4941-ac29-a9af927e1fd9" containerName="mariadb-database-create" Feb 19 03:39:39.404481 master-0 kubenswrapper[33867]: I0219 03:39:39.404453 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gclpr" Feb 19 03:39:39.408627 master-0 kubenswrapper[33867]: I0219 03:39:39.408565 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 03:39:39.412642 master-0 kubenswrapper[33867]: I0219 03:39:39.412573 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gclpr"] Feb 19 03:39:39.538248 master-0 kubenswrapper[33867]: I0219 03:39:39.538190 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dblj\" (UniqueName: \"kubernetes.io/projected/cd9b5c3e-c893-4a3d-bd06-65872e79846f-kube-api-access-5dblj\") pod \"root-account-create-update-gclpr\" (UID: \"cd9b5c3e-c893-4a3d-bd06-65872e79846f\") " pod="openstack/root-account-create-update-gclpr" Feb 19 03:39:39.539099 master-0 kubenswrapper[33867]: I0219 03:39:39.539075 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd9b5c3e-c893-4a3d-bd06-65872e79846f-operator-scripts\") pod \"root-account-create-update-gclpr\" (UID: \"cd9b5c3e-c893-4a3d-bd06-65872e79846f\") " pod="openstack/root-account-create-update-gclpr" Feb 19 03:39:39.552056 master-0 kubenswrapper[33867]: I0219 03:39:39.551994 33867 generic.go:334] "Generic (PLEG): container finished" podID="e5752474-93c5-40bc-b4c5-ac1fb797a211" containerID="d5921387d77b1b1d4d721e164a9c0b87d2bba12285b5a9f8a9815015d047386b" exitCode=0 Feb 19 03:39:39.552543 master-0 kubenswrapper[33867]: I0219 03:39:39.552409 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8e36-account-create-update-kvwtv" 
event={"ID":"e5752474-93c5-40bc-b4c5-ac1fb797a211","Type":"ContainerDied","Data":"d5921387d77b1b1d4d721e164a9c0b87d2bba12285b5a9f8a9815015d047386b"} Feb 19 03:39:39.552543 master-0 kubenswrapper[33867]: I0219 03:39:39.552482 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8e36-account-create-update-kvwtv" event={"ID":"e5752474-93c5-40bc-b4c5-ac1fb797a211","Type":"ContainerStarted","Data":"57926e0bd2e30a177758d2d5520683197fa1295468736b7f06508a0a1ed2eb9f"} Feb 19 03:39:39.554241 master-0 kubenswrapper[33867]: I0219 03:39:39.554203 33867 generic.go:334] "Generic (PLEG): container finished" podID="8758db66-f063-425c-b8a4-3c6b519d7775" containerID="6ddfd6bd4e3bee2a03f0cb0b73eb42597059996dadbd7af16d50234aaf8d3e9c" exitCode=0 Feb 19 03:39:39.554384 master-0 kubenswrapper[33867]: I0219 03:39:39.554359 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nzmld" event={"ID":"8758db66-f063-425c-b8a4-3c6b519d7775","Type":"ContainerDied","Data":"6ddfd6bd4e3bee2a03f0cb0b73eb42597059996dadbd7af16d50234aaf8d3e9c"} Feb 19 03:39:39.554496 master-0 kubenswrapper[33867]: I0219 03:39:39.554479 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nzmld" event={"ID":"8758db66-f063-425c-b8a4-3c6b519d7775","Type":"ContainerStarted","Data":"f54a97be8a56e53a37c02b3bfe9d8e4af12b7f42fe1859d1a3b5d79d983f05fd"} Feb 19 03:39:39.644801 master-0 kubenswrapper[33867]: I0219 03:39:39.644711 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dblj\" (UniqueName: \"kubernetes.io/projected/cd9b5c3e-c893-4a3d-bd06-65872e79846f-kube-api-access-5dblj\") pod \"root-account-create-update-gclpr\" (UID: \"cd9b5c3e-c893-4a3d-bd06-65872e79846f\") " pod="openstack/root-account-create-update-gclpr" Feb 19 03:39:39.645480 master-0 kubenswrapper[33867]: I0219 03:39:39.645444 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd9b5c3e-c893-4a3d-bd06-65872e79846f-operator-scripts\") pod \"root-account-create-update-gclpr\" (UID: \"cd9b5c3e-c893-4a3d-bd06-65872e79846f\") " pod="openstack/root-account-create-update-gclpr" Feb 19 03:39:39.646616 master-0 kubenswrapper[33867]: I0219 03:39:39.646580 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd9b5c3e-c893-4a3d-bd06-65872e79846f-operator-scripts\") pod \"root-account-create-update-gclpr\" (UID: \"cd9b5c3e-c893-4a3d-bd06-65872e79846f\") " pod="openstack/root-account-create-update-gclpr" Feb 19 03:39:39.663514 master-0 kubenswrapper[33867]: I0219 03:39:39.663410 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dblj\" (UniqueName: \"kubernetes.io/projected/cd9b5c3e-c893-4a3d-bd06-65872e79846f-kube-api-access-5dblj\") pod \"root-account-create-update-gclpr\" (UID: \"cd9b5c3e-c893-4a3d-bd06-65872e79846f\") " pod="openstack/root-account-create-update-gclpr" Feb 19 03:39:39.725168 master-0 kubenswrapper[33867]: I0219 03:39:39.724403 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gclpr" Feb 19 03:39:40.204341 master-0 kubenswrapper[33867]: I0219 03:39:40.204263 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gclpr"] Feb 19 03:39:40.218359 master-0 kubenswrapper[33867]: W0219 03:39:40.215515 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd9b5c3e_c893_4a3d_bd06_65872e79846f.slice/crio-d490305994566a4eef55f6d5d544fa424409442cd27853b8c6a92bd614115c19 WatchSource:0}: Error finding container d490305994566a4eef55f6d5d544fa424409442cd27853b8c6a92bd614115c19: Status 404 returned error can't find the container with id d490305994566a4eef55f6d5d544fa424409442cd27853b8c6a92bd614115c19 Feb 19 03:39:40.607101 master-0 kubenswrapper[33867]: I0219 03:39:40.606972 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gclpr" event={"ID":"cd9b5c3e-c893-4a3d-bd06-65872e79846f","Type":"ContainerStarted","Data":"206d5c31243b738552d8316eef6e6a53d8450a39441aac33e7dbd8d8724fc3ff"} Feb 19 03:39:40.607101 master-0 kubenswrapper[33867]: I0219 03:39:40.607040 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gclpr" event={"ID":"cd9b5c3e-c893-4a3d-bd06-65872e79846f","Type":"ContainerStarted","Data":"d490305994566a4eef55f6d5d544fa424409442cd27853b8c6a92bd614115c19"} Feb 19 03:39:41.199699 master-0 kubenswrapper[33867]: I0219 03:39:41.199602 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nzmld" Feb 19 03:39:41.208482 master-0 kubenswrapper[33867]: I0219 03:39:41.208397 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8e36-account-create-update-kvwtv" Feb 19 03:39:41.234029 master-0 kubenswrapper[33867]: I0219 03:39:41.233888 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-gclpr" podStartSLOduration=2.233853637 podStartE2EDuration="2.233853637s" podCreationTimestamp="2026-02-19 03:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:39:40.63851081 +0000 UTC m=+985.935181411" watchObservedRunningTime="2026-02-19 03:39:41.233853637 +0000 UTC m=+986.530524258" Feb 19 03:39:41.282318 master-0 kubenswrapper[33867]: I0219 03:39:41.281364 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8758db66-f063-425c-b8a4-3c6b519d7775-operator-scripts\") pod \"8758db66-f063-425c-b8a4-3c6b519d7775\" (UID: \"8758db66-f063-425c-b8a4-3c6b519d7775\") " Feb 19 03:39:41.282318 master-0 kubenswrapper[33867]: I0219 03:39:41.281524 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v282x\" (UniqueName: \"kubernetes.io/projected/8758db66-f063-425c-b8a4-3c6b519d7775-kube-api-access-v282x\") pod \"8758db66-f063-425c-b8a4-3c6b519d7775\" (UID: \"8758db66-f063-425c-b8a4-3c6b519d7775\") " Feb 19 03:39:41.282318 master-0 kubenswrapper[33867]: I0219 03:39:41.281646 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5752474-93c5-40bc-b4c5-ac1fb797a211-operator-scripts\") pod \"e5752474-93c5-40bc-b4c5-ac1fb797a211\" (UID: \"e5752474-93c5-40bc-b4c5-ac1fb797a211\") " Feb 19 03:39:41.282318 master-0 kubenswrapper[33867]: I0219 03:39:41.281722 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rljf\" (UniqueName: \"kubernetes.io/projected/e5752474-93c5-40bc-b4c5-ac1fb797a211-kube-api-access-9rljf\") pod \"e5752474-93c5-40bc-b4c5-ac1fb797a211\" (UID: \"e5752474-93c5-40bc-b4c5-ac1fb797a211\") " Feb 19 03:39:41.282318 master-0 kubenswrapper[33867]: I0219 03:39:41.281874 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8758db66-f063-425c-b8a4-3c6b519d7775-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8758db66-f063-425c-b8a4-3c6b519d7775" (UID: "8758db66-f063-425c-b8a4-3c6b519d7775"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:41.282318 master-0 kubenswrapper[33867]: I0219 03:39:41.282091 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5752474-93c5-40bc-b4c5-ac1fb797a211-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5752474-93c5-40bc-b4c5-ac1fb797a211" (UID: "e5752474-93c5-40bc-b4c5-ac1fb797a211"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:41.282757 master-0 kubenswrapper[33867]: I0219 03:39:41.282368 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5752474-93c5-40bc-b4c5-ac1fb797a211-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:41.282757 master-0 kubenswrapper[33867]: I0219 03:39:41.282383 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8758db66-f063-425c-b8a4-3c6b519d7775-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:41.285107 master-0 kubenswrapper[33867]: I0219 03:39:41.285047 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8758db66-f063-425c-b8a4-3c6b519d7775-kube-api-access-v282x" (OuterVolumeSpecName: "kube-api-access-v282x") pod "8758db66-f063-425c-b8a4-3c6b519d7775" (UID: "8758db66-f063-425c-b8a4-3c6b519d7775"). InnerVolumeSpecName "kube-api-access-v282x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:41.285879 master-0 kubenswrapper[33867]: I0219 03:39:41.285846 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5752474-93c5-40bc-b4c5-ac1fb797a211-kube-api-access-9rljf" (OuterVolumeSpecName: "kube-api-access-9rljf") pod "e5752474-93c5-40bc-b4c5-ac1fb797a211" (UID: "e5752474-93c5-40bc-b4c5-ac1fb797a211"). InnerVolumeSpecName "kube-api-access-9rljf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:41.383973 master-0 kubenswrapper[33867]: I0219 03:39:41.383893 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v282x\" (UniqueName: \"kubernetes.io/projected/8758db66-f063-425c-b8a4-3c6b519d7775-kube-api-access-v282x\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:41.383973 master-0 kubenswrapper[33867]: I0219 03:39:41.383948 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rljf\" (UniqueName: \"kubernetes.io/projected/e5752474-93c5-40bc-b4c5-ac1fb797a211-kube-api-access-9rljf\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:41.620487 master-0 kubenswrapper[33867]: I0219 03:39:41.620361 33867 generic.go:334] "Generic (PLEG): container finished" podID="cd9b5c3e-c893-4a3d-bd06-65872e79846f" containerID="206d5c31243b738552d8316eef6e6a53d8450a39441aac33e7dbd8d8724fc3ff" exitCode=0 Feb 19 03:39:41.621148 master-0 kubenswrapper[33867]: I0219 03:39:41.620444 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gclpr" event={"ID":"cd9b5c3e-c893-4a3d-bd06-65872e79846f","Type":"ContainerDied","Data":"206d5c31243b738552d8316eef6e6a53d8450a39441aac33e7dbd8d8724fc3ff"} Feb 19 03:39:41.625107 master-0 kubenswrapper[33867]: I0219 03:39:41.625069 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nzmld" event={"ID":"8758db66-f063-425c-b8a4-3c6b519d7775","Type":"ContainerDied","Data":"f54a97be8a56e53a37c02b3bfe9d8e4af12b7f42fe1859d1a3b5d79d983f05fd"} Feb 19 03:39:41.625107 master-0 kubenswrapper[33867]: I0219 03:39:41.625107 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f54a97be8a56e53a37c02b3bfe9d8e4af12b7f42fe1859d1a3b5d79d983f05fd" Feb 19 03:39:41.625359 master-0 kubenswrapper[33867]: I0219 03:39:41.625110 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-nzmld" Feb 19 03:39:41.626832 master-0 kubenswrapper[33867]: I0219 03:39:41.626775 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8e36-account-create-update-kvwtv" event={"ID":"e5752474-93c5-40bc-b4c5-ac1fb797a211","Type":"ContainerDied","Data":"57926e0bd2e30a177758d2d5520683197fa1295468736b7f06508a0a1ed2eb9f"} Feb 19 03:39:41.626832 master-0 kubenswrapper[33867]: I0219 03:39:41.626802 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57926e0bd2e30a177758d2d5520683197fa1295468736b7f06508a0a1ed2eb9f" Feb 19 03:39:41.627405 master-0 kubenswrapper[33867]: I0219 03:39:41.627335 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8e36-account-create-update-kvwtv" Feb 19 03:39:41.629107 master-0 kubenswrapper[33867]: I0219 03:39:41.629060 33867 generic.go:334] "Generic (PLEG): container finished" podID="6bdc624f-2b02-4f65-93e7-49b26b1da384" containerID="0f56be12ab8653d1efd235eddcbfd8386d076dcc1423e6c7277149d5c9adf3b2" exitCode=0 Feb 19 03:39:41.629107 master-0 kubenswrapper[33867]: I0219 03:39:41.629103 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xnwxz" event={"ID":"6bdc624f-2b02-4f65-93e7-49b26b1da384","Type":"ContainerDied","Data":"0f56be12ab8653d1efd235eddcbfd8386d076dcc1423e6c7277149d5c9adf3b2"} Feb 19 03:39:42.948273 master-0 kubenswrapper[33867]: I0219 03:39:42.948060 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ggcz5"] Feb 19 03:39:42.951420 master-0 kubenswrapper[33867]: E0219 03:39:42.950148 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8758db66-f063-425c-b8a4-3c6b519d7775" containerName="mariadb-database-create" Feb 19 03:39:42.951420 master-0 kubenswrapper[33867]: I0219 03:39:42.950184 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8758db66-f063-425c-b8a4-3c6b519d7775" containerName="mariadb-database-create" Feb 19 03:39:42.951420 master-0 kubenswrapper[33867]: E0219 03:39:42.950275 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5752474-93c5-40bc-b4c5-ac1fb797a211" containerName="mariadb-account-create-update" Feb 19 03:39:42.951420 master-0 kubenswrapper[33867]: I0219 03:39:42.950288 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5752474-93c5-40bc-b4c5-ac1fb797a211" containerName="mariadb-account-create-update" Feb 19 03:39:42.951420 master-0 kubenswrapper[33867]: I0219 03:39:42.950718 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8758db66-f063-425c-b8a4-3c6b519d7775" containerName="mariadb-database-create" Feb 19 03:39:42.951420 master-0 kubenswrapper[33867]: I0219 03:39:42.950758 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5752474-93c5-40bc-b4c5-ac1fb797a211" containerName="mariadb-account-create-update" Feb 19 03:39:42.952656 master-0 kubenswrapper[33867]: I0219 03:39:42.952627 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:42.959090 master-0 kubenswrapper[33867]: I0219 03:39:42.959031 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-fa7ca-config-data" Feb 19 03:39:42.977283 master-0 kubenswrapper[33867]: I0219 03:39:42.977145 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ggcz5"] Feb 19 03:39:43.029112 master-0 kubenswrapper[33867]: I0219 03:39:43.028947 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzr9r\" (UniqueName: \"kubernetes.io/projected/fdb02f35-95af-4c12-b5c6-d936cddcbf51-kube-api-access-rzr9r\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.029112 master-0 kubenswrapper[33867]: I0219 03:39:43.029039 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-config-data\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.029432 master-0 kubenswrapper[33867]: I0219 03:39:43.029148 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-combined-ca-bundle\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.029432 master-0 kubenswrapper[33867]: I0219 03:39:43.029371 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-db-sync-config-data\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.131786 master-0 kubenswrapper[33867]: I0219 03:39:43.131709 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-config-data\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.131786 master-0 kubenswrapper[33867]: I0219 03:39:43.131795 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:43.132057 master-0 kubenswrapper[33867]: I0219 03:39:43.131818 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-combined-ca-bundle\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.132057 master-0 kubenswrapper[33867]: I0219 03:39:43.131915 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-db-sync-config-data\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " 
pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.132057 master-0 kubenswrapper[33867]: I0219 03:39:43.132014 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzr9r\" (UniqueName: \"kubernetes.io/projected/fdb02f35-95af-4c12-b5c6-d936cddcbf51-kube-api-access-rzr9r\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.138375 master-0 kubenswrapper[33867]: I0219 03:39:43.136639 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-config-data\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.147418 master-0 kubenswrapper[33867]: I0219 03:39:43.139762 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-combined-ca-bundle\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.147418 master-0 kubenswrapper[33867]: I0219 03:39:43.139822 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-db-sync-config-data\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.147418 master-0 kubenswrapper[33867]: I0219 03:39:43.141443 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aea865d8-841e-4326-9833-ee28b81c18e1-etc-swift\") pod \"swift-storage-0\" (UID: \"aea865d8-841e-4326-9833-ee28b81c18e1\") " pod="openstack/swift-storage-0" Feb 19 03:39:43.152382 master-0 kubenswrapper[33867]: I0219 03:39:43.151457 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzr9r\" (UniqueName: \"kubernetes.io/projected/fdb02f35-95af-4c12-b5c6-d936cddcbf51-kube-api-access-rzr9r\") pod \"glance-db-sync-ggcz5\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.253958 master-0 kubenswrapper[33867]: I0219 03:39:43.253762 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gclpr" Feb 19 03:39:43.276309 master-0 kubenswrapper[33867]: I0219 03:39:43.276221 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ggcz5" Feb 19 03:39:43.337649 master-0 kubenswrapper[33867]: I0219 03:39:43.337567 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd9b5c3e-c893-4a3d-bd06-65872e79846f-operator-scripts\") pod \"cd9b5c3e-c893-4a3d-bd06-65872e79846f\" (UID: \"cd9b5c3e-c893-4a3d-bd06-65872e79846f\") " Feb 19 03:39:43.338054 master-0 kubenswrapper[33867]: I0219 03:39:43.337721 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dblj\" (UniqueName: \"kubernetes.io/projected/cd9b5c3e-c893-4a3d-bd06-65872e79846f-kube-api-access-5dblj\") pod \"cd9b5c3e-c893-4a3d-bd06-65872e79846f\" (UID: \"cd9b5c3e-c893-4a3d-bd06-65872e79846f\") " Feb 19 03:39:43.338632 master-0 kubenswrapper[33867]: I0219 03:39:43.338582 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9b5c3e-c893-4a3d-bd06-65872e79846f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd9b5c3e-c893-4a3d-bd06-65872e79846f" (UID: "cd9b5c3e-c893-4a3d-bd06-65872e79846f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:43.338959 master-0 kubenswrapper[33867]: I0219 03:39:43.338915 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd9b5c3e-c893-4a3d-bd06-65872e79846f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:43.340910 master-0 kubenswrapper[33867]: I0219 03:39:43.340874 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9b5c3e-c893-4a3d-bd06-65872e79846f-kube-api-access-5dblj" (OuterVolumeSpecName: "kube-api-access-5dblj") pod "cd9b5c3e-c893-4a3d-bd06-65872e79846f" (UID: "cd9b5c3e-c893-4a3d-bd06-65872e79846f"). InnerVolumeSpecName "kube-api-access-5dblj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:43.399189 master-0 kubenswrapper[33867]: I0219 03:39:43.399114 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 03:39:43.422169 master-0 kubenswrapper[33867]: I0219 03:39:43.422106 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:43.441672 master-0 kubenswrapper[33867]: I0219 03:39:43.441613 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dblj\" (UniqueName: \"kubernetes.io/projected/cd9b5c3e-c893-4a3d-bd06-65872e79846f-kube-api-access-5dblj\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:43.542534 master-0 kubenswrapper[33867]: I0219 03:39:43.542422 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-dispersionconf\") pod \"6bdc624f-2b02-4f65-93e7-49b26b1da384\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " Feb 19 03:39:43.542534 master-0 kubenswrapper[33867]: I0219 03:39:43.542486 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzjdp\" (UniqueName: \"kubernetes.io/projected/6bdc624f-2b02-4f65-93e7-49b26b1da384-kube-api-access-dzjdp\") pod \"6bdc624f-2b02-4f65-93e7-49b26b1da384\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " Feb 19 03:39:43.542762 master-0 kubenswrapper[33867]: I0219 03:39:43.542603 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6bdc624f-2b02-4f65-93e7-49b26b1da384-etc-swift\") pod \"6bdc624f-2b02-4f65-93e7-49b26b1da384\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " Feb 19 03:39:43.542762 master-0 kubenswrapper[33867]: I0219 03:39:43.542710 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-scripts\") pod \"6bdc624f-2b02-4f65-93e7-49b26b1da384\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " Feb 19 03:39:43.542762 master-0 kubenswrapper[33867]: I0219 03:39:43.542762 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-combined-ca-bundle\") pod \"6bdc624f-2b02-4f65-93e7-49b26b1da384\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " Feb 19 03:39:43.542900 master-0 kubenswrapper[33867]: I0219 03:39:43.542817 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-ring-data-devices\") pod \"6bdc624f-2b02-4f65-93e7-49b26b1da384\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " Feb 19 03:39:43.542900 master-0 kubenswrapper[33867]: I0219 03:39:43.542844 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-swiftconf\") pod \"6bdc624f-2b02-4f65-93e7-49b26b1da384\" (UID: \"6bdc624f-2b02-4f65-93e7-49b26b1da384\") " Feb 19 03:39:43.544570 master-0 kubenswrapper[33867]: I0219 03:39:43.544532 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bdc624f-2b02-4f65-93e7-49b26b1da384-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6bdc624f-2b02-4f65-93e7-49b26b1da384" (UID: "6bdc624f-2b02-4f65-93e7-49b26b1da384"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:39:43.546649 master-0 kubenswrapper[33867]: I0219 03:39:43.546599 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6bdc624f-2b02-4f65-93e7-49b26b1da384" (UID: "6bdc624f-2b02-4f65-93e7-49b26b1da384"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:43.547571 master-0 kubenswrapper[33867]: I0219 03:39:43.547532 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bdc624f-2b02-4f65-93e7-49b26b1da384-kube-api-access-dzjdp" (OuterVolumeSpecName: "kube-api-access-dzjdp") pod "6bdc624f-2b02-4f65-93e7-49b26b1da384" (UID: "6bdc624f-2b02-4f65-93e7-49b26b1da384"). InnerVolumeSpecName "kube-api-access-dzjdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:39:43.551025 master-0 kubenswrapper[33867]: I0219 03:39:43.550966 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6bdc624f-2b02-4f65-93e7-49b26b1da384" (UID: "6bdc624f-2b02-4f65-93e7-49b26b1da384"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:39:43.567800 master-0 kubenswrapper[33867]: I0219 03:39:43.567694 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-scripts" (OuterVolumeSpecName: "scripts") pod "6bdc624f-2b02-4f65-93e7-49b26b1da384" (UID: "6bdc624f-2b02-4f65-93e7-49b26b1da384"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:39:43.574200 master-0 kubenswrapper[33867]: I0219 03:39:43.574107 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bdc624f-2b02-4f65-93e7-49b26b1da384" (UID: "6bdc624f-2b02-4f65-93e7-49b26b1da384"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:39:43.584206 master-0 kubenswrapper[33867]: I0219 03:39:43.581098 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6bdc624f-2b02-4f65-93e7-49b26b1da384" (UID: "6bdc624f-2b02-4f65-93e7-49b26b1da384"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:39:43.646915 master-0 kubenswrapper[33867]: I0219 03:39:43.646841 33867 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6bdc624f-2b02-4f65-93e7-49b26b1da384-etc-swift\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:43.646915 master-0 kubenswrapper[33867]: I0219 03:39:43.646902 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:43.646915 master-0 kubenswrapper[33867]: I0219 03:39:43.646916 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:43.647207 master-0 kubenswrapper[33867]: I0219 03:39:43.646931 33867 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6bdc624f-2b02-4f65-93e7-49b26b1da384-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:43.647207 master-0 kubenswrapper[33867]: I0219 03:39:43.646947 33867 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-swiftconf\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:43.647207 master-0 kubenswrapper[33867]: I0219 03:39:43.646960 33867 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6bdc624f-2b02-4f65-93e7-49b26b1da384-dispersionconf\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:43.647207 master-0 kubenswrapper[33867]: I0219 03:39:43.646975 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzjdp\" (UniqueName: \"kubernetes.io/projected/6bdc624f-2b02-4f65-93e7-49b26b1da384-kube-api-access-dzjdp\") on node \"master-0\" DevicePath \"\"" Feb 19 03:39:43.684890 master-0 kubenswrapper[33867]: I0219 03:39:43.683229 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gclpr" Feb 19 03:39:43.684890 master-0 kubenswrapper[33867]: I0219 03:39:43.684214 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gclpr" event={"ID":"cd9b5c3e-c893-4a3d-bd06-65872e79846f","Type":"ContainerDied","Data":"d490305994566a4eef55f6d5d544fa424409442cd27853b8c6a92bd614115c19"} Feb 19 03:39:43.684890 master-0 kubenswrapper[33867]: I0219 03:39:43.684281 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d490305994566a4eef55f6d5d544fa424409442cd27853b8c6a92bd614115c19" Feb 19 03:39:43.689403 master-0 kubenswrapper[33867]: I0219 03:39:43.689309 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xnwxz" event={"ID":"6bdc624f-2b02-4f65-93e7-49b26b1da384","Type":"ContainerDied","Data":"40439491c5940482fb2a09a1a7235f1cad2357b8ea5d69c704a726a68d8d806f"} Feb 19 03:39:43.689500 master-0 kubenswrapper[33867]: I0219 03:39:43.689433 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40439491c5940482fb2a09a1a7235f1cad2357b8ea5d69c704a726a68d8d806f" Feb 19 03:39:43.689547 master-0 kubenswrapper[33867]: I0219 03:39:43.689536 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-xnwxz" Feb 19 03:39:43.866680 master-0 kubenswrapper[33867]: I0219 03:39:43.865778 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ggcz5"] Feb 19 03:39:43.963161 master-0 kubenswrapper[33867]: I0219 03:39:43.963114 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 03:39:43.967238 master-0 kubenswrapper[33867]: W0219 03:39:43.966627 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaea865d8_841e_4326_9833_ee28b81c18e1.slice/crio-7aaed179977f14a5f847b9e3e3af7e910b361c5d33c0ef1eb3b9dfd6043ca3fc WatchSource:0}: Error finding container 7aaed179977f14a5f847b9e3e3af7e910b361c5d33c0ef1eb3b9dfd6043ca3fc: Status 404 returned error can't find the container with id 7aaed179977f14a5f847b9e3e3af7e910b361c5d33c0ef1eb3b9dfd6043ca3fc Feb 19 03:39:44.725178 master-0 kubenswrapper[33867]: I0219 03:39:44.725112 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"7aaed179977f14a5f847b9e3e3af7e910b361c5d33c0ef1eb3b9dfd6043ca3fc"} Feb 19 03:39:44.726741 master-0 kubenswrapper[33867]: I0219 03:39:44.726688 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ggcz5" event={"ID":"fdb02f35-95af-4c12-b5c6-d936cddcbf51","Type":"ContainerStarted","Data":"77bade944a5651c284a0c90d26c7cecaa332374322da041ee71558a032944673"} Feb 19 03:39:45.739250 master-0 kubenswrapper[33867]: I0219 03:39:45.739182 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"acf89b4dcddd6b3d71956961c0490fab1c5d9162c6e8fcf1f5efc48bae7d5b7c"} Feb 19 03:39:45.739250 master-0 kubenswrapper[33867]: I0219 03:39:45.739244 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"05b43ea9a10cf73dce57382546ca4279b72b13b1a72825c5fe5412d514d5892c"} Feb 19 03:39:46.591478 master-0 kubenswrapper[33867]: I0219 03:39:46.589618 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gclpr"] Feb 19 03:39:46.600527 master-0 kubenswrapper[33867]: I0219 03:39:46.600462 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gclpr"] Feb 19 03:39:46.753999 master-0 kubenswrapper[33867]: I0219 03:39:46.753916 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"22a5d1100ce92487fde490128fca277040dea5aa73c250a614f039e359656ee8"} Feb 19 03:39:46.753999 master-0 kubenswrapper[33867]: I0219 03:39:46.753977 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"f2d6ccfd677f3c62afc110fd8980b3b53d6b6c8a8faca253fded20378f7f6e8b"} Feb 19 03:39:46.973966 master-0 kubenswrapper[33867]: I0219 03:39:46.973874 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9b5c3e-c893-4a3d-bd06-65872e79846f" path="/var/lib/kubelet/pods/cd9b5c3e-c893-4a3d-bd06-65872e79846f/volumes" Feb 19 03:39:47.028189 master-0 kubenswrapper[33867]: I0219 
03:39:47.028113 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 03:39:47.770997 master-0 kubenswrapper[33867]: I0219 03:39:47.770936 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"54990bb68ea72ff223f3b0cdf222712f9cbbc5e41ab4d65e886d73ade08c8a12"} Feb 19 03:39:48.789015 master-0 kubenswrapper[33867]: I0219 03:39:48.788881 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"668d0b37802f3bf4c8941008e56c3112b6a267b9da853ec4994dec724f0ce4cc"} Feb 19 03:39:48.789015 master-0 kubenswrapper[33867]: I0219 03:39:48.788967 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"47af08513d5026f1f8f95c9cb11077b0a5ad9016a09a09b107a7de7ef8933359"} Feb 19 03:39:48.789015 master-0 kubenswrapper[33867]: I0219 03:39:48.788979 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"46cf018dd14e7f6a73ea77ae4bb1a114bd3a59345855d319439eb8ffd1c884a2"} Feb 19 03:39:49.806681 master-0 kubenswrapper[33867]: I0219 03:39:49.806623 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"34ab1dd4b2cd43b583efb51c32a56269686395f5fc05e274b7f8bcc104e402fe"} Feb 19 03:39:50.847796 master-0 kubenswrapper[33867]: I0219 03:39:50.847724 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"5cb64d51d3d740a903a38eccd7ed11fa395ee0379878458c9c411b2568f4c4d5"} Feb 19 03:39:50.847796 master-0 kubenswrapper[33867]: I0219 03:39:50.847785 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"469303761cd23976c9ee13e4bdf61a957dc6e4ef0c97ef71ee1f278359f43cbe"} Feb 19 03:39:50.847796 master-0 kubenswrapper[33867]: I0219 03:39:50.847796 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"8cfeb3e22f23d6e34fe15753df77d75a13a900a1e0d670f3b71179309c6eb694"} Feb 19 03:39:50.849700 master-0 kubenswrapper[33867]: I0219 03:39:50.847867 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"186bda89a1e61b5c916a6ed789477b79a880555c38aadb4f074d59fdc3a1463e"} Feb 19 03:39:51.625142 master-0 kubenswrapper[33867]: I0219 03:39:51.623237 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j8t8n"] Feb 19 03:39:51.625142 master-0 kubenswrapper[33867]: E0219 03:39:51.623894 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9b5c3e-c893-4a3d-bd06-65872e79846f" containerName="mariadb-account-create-update" Feb 19 03:39:51.625142 master-0 kubenswrapper[33867]: I0219 03:39:51.623910 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9b5c3e-c893-4a3d-bd06-65872e79846f" 
containerName="mariadb-account-create-update" Feb 19 03:39:51.625142 master-0 kubenswrapper[33867]: E0219 03:39:51.623956 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bdc624f-2b02-4f65-93e7-49b26b1da384" containerName="swift-ring-rebalance" Feb 19 03:39:51.625142 master-0 kubenswrapper[33867]: I0219 03:39:51.623963 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bdc624f-2b02-4f65-93e7-49b26b1da384" containerName="swift-ring-rebalance" Feb 19 03:39:51.625142 master-0 kubenswrapper[33867]: I0219 03:39:51.624233 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bdc624f-2b02-4f65-93e7-49b26b1da384" containerName="swift-ring-rebalance" Feb 19 03:39:51.625142 master-0 kubenswrapper[33867]: I0219 03:39:51.624330 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9b5c3e-c893-4a3d-bd06-65872e79846f" containerName="mariadb-account-create-update" Feb 19 03:39:51.625796 master-0 kubenswrapper[33867]: I0219 03:39:51.625216 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j8t8n" Feb 19 03:39:51.629142 master-0 kubenswrapper[33867]: I0219 03:39:51.628812 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 03:39:51.636501 master-0 kubenswrapper[33867]: I0219 03:39:51.636403 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j8t8n"] Feb 19 03:39:51.787922 master-0 kubenswrapper[33867]: I0219 03:39:51.787829 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcfl2\" (UniqueName: \"kubernetes.io/projected/32f19ad3-7091-420d-8d57-8ee226e6930a-kube-api-access-xcfl2\") pod \"root-account-create-update-j8t8n\" (UID: \"32f19ad3-7091-420d-8d57-8ee226e6930a\") " pod="openstack/root-account-create-update-j8t8n" Feb 19 03:39:51.788188 master-0 kubenswrapper[33867]: I0219 03:39:51.787990 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32f19ad3-7091-420d-8d57-8ee226e6930a-operator-scripts\") pod \"root-account-create-update-j8t8n\" (UID: \"32f19ad3-7091-420d-8d57-8ee226e6930a\") " pod="openstack/root-account-create-update-j8t8n" Feb 19 03:39:51.867209 master-0 kubenswrapper[33867]: I0219 03:39:51.867138 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"7e8de9a1fce052d0895d6e384e3200ca611ec327e64b4b5479622c17532111d9"} Feb 19 03:39:51.890975 master-0 kubenswrapper[33867]: I0219 03:39:51.890762 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32f19ad3-7091-420d-8d57-8ee226e6930a-operator-scripts\") pod \"root-account-create-update-j8t8n\" (UID: \"32f19ad3-7091-420d-8d57-8ee226e6930a\") " pod="openstack/root-account-create-update-j8t8n" Feb 19 03:39:51.891348 master-0 kubenswrapper[33867]: I0219 03:39:51.890989 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcfl2\" (UniqueName: \"kubernetes.io/projected/32f19ad3-7091-420d-8d57-8ee226e6930a-kube-api-access-xcfl2\") pod \"root-account-create-update-j8t8n\" (UID: \"32f19ad3-7091-420d-8d57-8ee226e6930a\") " pod="openstack/root-account-create-update-j8t8n" Feb 
19 03:39:51.891816 master-0 kubenswrapper[33867]: I0219 03:39:51.891761 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32f19ad3-7091-420d-8d57-8ee226e6930a-operator-scripts\") pod \"root-account-create-update-j8t8n\" (UID: \"32f19ad3-7091-420d-8d57-8ee226e6930a\") " pod="openstack/root-account-create-update-j8t8n" Feb 19 03:39:51.909634 master-0 kubenswrapper[33867]: I0219 03:39:51.909581 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcfl2\" (UniqueName: \"kubernetes.io/projected/32f19ad3-7091-420d-8d57-8ee226e6930a-kube-api-access-xcfl2\") pod \"root-account-create-update-j8t8n\" (UID: \"32f19ad3-7091-420d-8d57-8ee226e6930a\") " pod="openstack/root-account-create-update-j8t8n" Feb 19 03:39:51.966201 master-0 kubenswrapper[33867]: I0219 03:39:51.965515 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j8t8n" Feb 19 03:39:53.888179 master-0 kubenswrapper[33867]: I0219 03:39:53.888039 33867 generic.go:334] "Generic (PLEG): container finished" podID="9e764204-85e6-4bcf-bdd4-6c24e78d4e3b" containerID="14d227c1daa3a5ad4bc81da11b350b9f4b380df91f4aeea0a0511804b126705b" exitCode=0 Feb 19 03:39:53.888179 master-0 kubenswrapper[33867]: I0219 03:39:53.888103 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b","Type":"ContainerDied","Data":"14d227c1daa3a5ad4bc81da11b350b9f4b380df91f4aeea0a0511804b126705b"} Feb 19 03:39:54.085474 master-0 kubenswrapper[33867]: I0219 03:39:54.085396 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-96jnp" podUID="c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd" containerName="ovn-controller" probeResult="failure" output=< Feb 19 03:39:54.085474 master-0 kubenswrapper[33867]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 03:39:54.085474 master-0 kubenswrapper[33867]: > Feb 19 03:39:54.129753 master-0 kubenswrapper[33867]: I0219 03:39:54.129690 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:39:54.144695 master-0 kubenswrapper[33867]: I0219 03:39:54.144631 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pfn5s" Feb 19 03:39:54.563280 master-0 kubenswrapper[33867]: I0219 03:39:54.562338 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96jnp-config-fdhz9"] Feb 19 03:39:54.567277 master-0 kubenswrapper[33867]: I0219 03:39:54.564451 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.581283 master-0 kubenswrapper[33867]: I0219 03:39:54.570857 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 03:39:54.611767 master-0 kubenswrapper[33867]: I0219 03:39:54.587727 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96jnp-config-fdhz9"] Feb 19 03:39:54.660386 master-0 kubenswrapper[33867]: I0219 03:39:54.660326 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run-ovn\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.660842 master-0 kubenswrapper[33867]: I0219 03:39:54.660816 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.661109 master-0 kubenswrapper[33867]: I0219 03:39:54.661033 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrdk\" (UniqueName: \"kubernetes.io/projected/bcb7d698-7d33-497a-9001-863bccf183be-kube-api-access-dqrdk\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.662552 master-0 kubenswrapper[33867]: I0219 03:39:54.661464 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-log-ovn\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.662552 master-0 kubenswrapper[33867]: I0219 03:39:54.661536 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-additional-scripts\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.662552 master-0 kubenswrapper[33867]: I0219 03:39:54.661607 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-scripts\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.764220 master-0 kubenswrapper[33867]: I0219 03:39:54.763975 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.764220 master-0 kubenswrapper[33867]: I0219 03:39:54.764202 33867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.764647 master-0 kubenswrapper[33867]: I0219 03:39:54.764314 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqrdk\" (UniqueName: \"kubernetes.io/projected/bcb7d698-7d33-497a-9001-863bccf183be-kube-api-access-dqrdk\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.764856 master-0 kubenswrapper[33867]: I0219 03:39:54.764812 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-log-ovn\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.764915 master-0 kubenswrapper[33867]: I0219 03:39:54.764890 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-log-ovn\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.764956 master-0 kubenswrapper[33867]: I0219 03:39:54.764894 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-additional-scripts\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.765729 master-0 kubenswrapper[33867]: I0219 03:39:54.765688 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-additional-scripts\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.767980 master-0 kubenswrapper[33867]: I0219 03:39:54.767920 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-scripts\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.768414 master-0 kubenswrapper[33867]: I0219 03:39:54.768397 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run-ovn\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.768771 master-0 kubenswrapper[33867]: I0219 03:39:54.768753 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run-ovn\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 
03:39:54.770569 master-0 kubenswrapper[33867]: I0219 03:39:54.770528 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-scripts\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.785264 master-0 kubenswrapper[33867]: I0219 03:39:54.785178 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqrdk\" (UniqueName: \"kubernetes.io/projected/bcb7d698-7d33-497a-9001-863bccf183be-kube-api-access-dqrdk\") pod \"ovn-controller-96jnp-config-fdhz9\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:54.903296 master-0 kubenswrapper[33867]: I0219 03:39:54.901516 33867 generic.go:334] "Generic (PLEG): container finished" podID="d16fae78-0a83-4085-a9b5-896938c7d1b3" containerID="a244d1d6a2373213aaa4b7248f5173ce1d827aa0cde8130c6cf34da780377cb5" exitCode=0 Feb 19 03:39:54.903296 master-0 kubenswrapper[33867]: I0219 03:39:54.901609 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d16fae78-0a83-4085-a9b5-896938c7d1b3","Type":"ContainerDied","Data":"a244d1d6a2373213aaa4b7248f5173ce1d827aa0cde8130c6cf34da780377cb5"} Feb 19 03:39:54.953481 master-0 kubenswrapper[33867]: I0219 03:39:54.953393 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:39:57.745462 master-0 kubenswrapper[33867]: I0219 03:39:57.744910 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j8t8n"] Feb 19 03:39:57.757988 master-0 kubenswrapper[33867]: W0219 03:39:57.755558 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32f19ad3_7091_420d_8d57_8ee226e6930a.slice/crio-d2086e2219bd6e279ec397b63f05c0aaa41626759860ddd7b4705c5be5336483 WatchSource:0}: Error finding container d2086e2219bd6e279ec397b63f05c0aaa41626759860ddd7b4705c5be5336483: Status 404 returned error can't find the container with id d2086e2219bd6e279ec397b63f05c0aaa41626759860ddd7b4705c5be5336483 Feb 19 03:39:57.850846 master-0 kubenswrapper[33867]: W0219 03:39:57.850779 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcb7d698_7d33_497a_9001_863bccf183be.slice/crio-501267bf05893fe662765ac764d1cee1bc277a7bdcc20a5e9b3f2bc04d530a53 WatchSource:0}: Error finding container 501267bf05893fe662765ac764d1cee1bc277a7bdcc20a5e9b3f2bc04d530a53: Status 404 returned error can't find the container with id 501267bf05893fe662765ac764d1cee1bc277a7bdcc20a5e9b3f2bc04d530a53 Feb 19 03:39:57.851554 master-0 kubenswrapper[33867]: I0219 03:39:57.851504 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96jnp-config-fdhz9"] Feb 19 03:39:57.939107 master-0 kubenswrapper[33867]: I0219 03:39:57.939045 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d16fae78-0a83-4085-a9b5-896938c7d1b3","Type":"ContainerStarted","Data":"f651b068d658a10908d8a54b5700c59b2958955e353897d0e052d3eeb2013127"} Feb 19 03:39:57.940640 master-0 kubenswrapper[33867]: I0219 03:39:57.940607 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:39:57.946081 master-0 kubenswrapper[33867]: I0219 03:39:57.944843 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e764204-85e6-4bcf-bdd4-6c24e78d4e3b","Type":"ContainerStarted","Data":"166adef3b44f40768b2c0676a9912490651e59935f85b2506198d8de58ba1aa0"} Feb 19 03:39:57.946081 master-0 kubenswrapper[33867]: I0219 03:39:57.945943 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 03:39:57.947102 master-0 kubenswrapper[33867]: I0219 03:39:57.947058 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96jnp-config-fdhz9" event={"ID":"bcb7d698-7d33-497a-9001-863bccf183be","Type":"ContainerStarted","Data":"501267bf05893fe662765ac764d1cee1bc277a7bdcc20a5e9b3f2bc04d530a53"} Feb 19 03:39:57.957682 master-0 kubenswrapper[33867]: I0219 03:39:57.957618 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aea865d8-841e-4326-9833-ee28b81c18e1","Type":"ContainerStarted","Data":"400f8e17e43800cf83043b593a7062d30e919e282d6b37740831dcbfaa0682fb"} Feb 19 03:39:57.963404 master-0 kubenswrapper[33867]: I0219 03:39:57.963344 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j8t8n" event={"ID":"32f19ad3-7091-420d-8d57-8ee226e6930a","Type":"ContainerStarted","Data":"d2086e2219bd6e279ec397b63f05c0aaa41626759860ddd7b4705c5be5336483"} Feb 19 03:39:57.976312 master-0 kubenswrapper[33867]: I0219 03:39:57.975798 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=61.486069068 podStartE2EDuration="1m10.975782221s" podCreationTimestamp="2026-02-19 03:38:47 +0000 UTC" firstStartedPulling="2026-02-19 03:39:09.850509894 +0000 UTC m=+955.147180505" lastFinishedPulling="2026-02-19 03:39:19.340223047 +0000 UTC m=+964.636893658" observedRunningTime="2026-02-19 03:39:57.966899769 +0000 UTC m=+1003.263570400" watchObservedRunningTime="2026-02-19 03:39:57.975782221 +0000 UTC m=+1003.272452832" Feb 19 03:39:58.017785 master-0 kubenswrapper[33867]: I0219 03:39:58.017677 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=61.770338737 podStartE2EDuration="1m11.017652266s" podCreationTimestamp="2026-02-19 03:38:47 +0000 UTC" firstStartedPulling="2026-02-19 03:39:09.850965957 +0000 UTC m=+955.147636568" lastFinishedPulling="2026-02-19 03:39:19.098279476 +0000 UTC m=+964.394950097" observedRunningTime="2026-02-19 03:39:58.003337491 +0000 UTC m=+1003.300008102" watchObservedRunningTime="2026-02-19 03:39:58.017652266 +0000 UTC m=+1003.314322877" Feb 19 03:39:58.056116 master-0 kubenswrapper[33867]: I0219 03:39:58.056002 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=27.551425104 podStartE2EDuration="33.055976181s" podCreationTimestamp="2026-02-19 03:39:25 +0000 UTC" firstStartedPulling="2026-02-19 03:39:43.969652659 +0000 UTC m=+989.266323270" lastFinishedPulling="2026-02-19 03:39:49.474203716 +0000 UTC m=+994.770874347" observedRunningTime="2026-02-19 03:39:58.048831379 +0000 UTC m=+1003.345502010" watchObservedRunningTime="2026-02-19 03:39:58.055976181 +0000 UTC m=+1003.352646792" Feb 19 03:39:58.094237 master-0 kubenswrapper[33867]: I0219 03:39:58.094138 33867 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/root-account-create-update-j8t8n" podStartSLOduration=7.094110061 podStartE2EDuration="7.094110061s" podCreationTimestamp="2026-02-19 03:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:39:58.072647263 +0000 UTC m=+1003.369317894" watchObservedRunningTime="2026-02-19 03:39:58.094110061 +0000 UTC m=+1003.390780672" Feb 19 03:39:58.377970 master-0 kubenswrapper[33867]: I0219 03:39:58.377911 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d675d55f5-6zr5n"] Feb 19 03:39:58.380276 master-0 kubenswrapper[33867]: I0219 03:39:58.380221 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.384696 master-0 kubenswrapper[33867]: I0219 03:39:58.383378 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 19 03:39:58.394900 master-0 kubenswrapper[33867]: I0219 03:39:58.394853 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d675d55f5-6zr5n"] Feb 19 03:39:58.557779 master-0 kubenswrapper[33867]: I0219 03:39:58.557729 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khdqq\" (UniqueName: \"kubernetes.io/projected/59417827-7e90-4411-aec0-d15e031ea00b-kube-api-access-khdqq\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.558138 master-0 kubenswrapper[33867]: I0219 03:39:58.558111 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-nb\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.558338 master-0 kubenswrapper[33867]: I0219 03:39:58.558316 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-svc\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.558464 master-0 kubenswrapper[33867]: I0219 03:39:58.558448 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-sb\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.558661 master-0 kubenswrapper[33867]: I0219 03:39:58.558602 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-swift-storage-0\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.558785 master-0 kubenswrapper[33867]: I0219 03:39:58.558768 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-config\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.661829 master-0 kubenswrapper[33867]: I0219 03:39:58.661770 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-svc\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.661829 master-0 kubenswrapper[33867]: I0219 03:39:58.661835 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-sb\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.662086 master-0 kubenswrapper[33867]: I0219 03:39:58.661900 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-swift-storage-0\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.662086 master-0 kubenswrapper[33867]: I0219 03:39:58.661947 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-config\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.662086 master-0 kubenswrapper[33867]: I0219 03:39:58.662021 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khdqq\" (UniqueName: \"kubernetes.io/projected/59417827-7e90-4411-aec0-d15e031ea00b-kube-api-access-khdqq\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.662183 master-0 kubenswrapper[33867]: I0219 03:39:58.662086 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-nb\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.663340 master-0 kubenswrapper[33867]: I0219 03:39:58.663305 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-nb\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.663414 master-0 kubenswrapper[33867]: I0219 03:39:58.663365 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-svc\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.663666 master-0 kubenswrapper[33867]: I0219 03:39:58.663628 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-swift-storage-0\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.663796 master-0 kubenswrapper[33867]: I0219 03:39:58.663766 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-sb\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.664148 master-0 kubenswrapper[33867]: I0219 03:39:58.664099 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-config\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.683134 master-0 kubenswrapper[33867]: I0219 03:39:58.683079 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khdqq\" (UniqueName: \"kubernetes.io/projected/59417827-7e90-4411-aec0-d15e031ea00b-kube-api-access-khdqq\") pod \"dnsmasq-dns-6d675d55f5-6zr5n\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.700785 master-0 kubenswrapper[33867]: I0219 03:39:58.700728 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:39:58.989213 master-0 kubenswrapper[33867]: I0219 03:39:58.989161 33867 generic.go:334] "Generic (PLEG): container finished" podID="32f19ad3-7091-420d-8d57-8ee226e6930a" containerID="a92202a91b03132c07cbb5e8bb6ff218814869f12b2b03f84bc9a7348fdb4e71" exitCode=0 Feb 19 03:39:58.989812 master-0 kubenswrapper[33867]: I0219 03:39:58.989228 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j8t8n" event={"ID":"32f19ad3-7091-420d-8d57-8ee226e6930a","Type":"ContainerDied","Data":"a92202a91b03132c07cbb5e8bb6ff218814869f12b2b03f84bc9a7348fdb4e71"} Feb 19 03:39:58.991760 master-0 kubenswrapper[33867]: I0219 03:39:58.991735 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ggcz5" event={"ID":"fdb02f35-95af-4c12-b5c6-d936cddcbf51","Type":"ContainerStarted","Data":"8a7125323eb472c003a3405cbff1282d27339bd6440c527e9ed93cff3a27a964"} Feb 19 03:39:58.994983 master-0 kubenswrapper[33867]: I0219 03:39:58.994858 33867 generic.go:334] "Generic (PLEG): container finished" podID="bcb7d698-7d33-497a-9001-863bccf183be" containerID="a3befb830c3ff3540a81d0b4338b0976abb156b99b500bafa91a08e94f701314" exitCode=0 Feb 19 03:39:58.996408 master-0 kubenswrapper[33867]: I0219 03:39:58.996354 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96jnp-config-fdhz9" event={"ID":"bcb7d698-7d33-497a-9001-863bccf183be","Type":"ContainerDied","Data":"a3befb830c3ff3540a81d0b4338b0976abb156b99b500bafa91a08e94f701314"} Feb 19 03:39:59.081459 master-0 kubenswrapper[33867]: I0219 03:39:59.077752 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ggcz5" podStartSLOduration=3.562829148 podStartE2EDuration="17.077732652s" podCreationTimestamp="2026-02-19 03:39:42 +0000 UTC" firstStartedPulling="2026-02-19 03:39:43.868650149 +0000 UTC m=+989.165320760" 
lastFinishedPulling="2026-02-19 03:39:57.383553653 +0000 UTC m=+1002.680224264" observedRunningTime="2026-02-19 03:39:59.062662085 +0000 UTC m=+1004.359332696" watchObservedRunningTime="2026-02-19 03:39:59.077732652 +0000 UTC m=+1004.374403263" Feb 19 03:39:59.193404 master-0 kubenswrapper[33867]: I0219 03:39:59.193323 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-96jnp" Feb 19 03:39:59.214072 master-0 kubenswrapper[33867]: I0219 03:39:59.213957 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d675d55f5-6zr5n"] Feb 19 03:40:00.022306 master-0 kubenswrapper[33867]: I0219 03:40:00.022197 33867 generic.go:334] "Generic (PLEG): container finished" podID="59417827-7e90-4411-aec0-d15e031ea00b" containerID="f2a6852e16f3a6c977ee1b8d34d089e9c09c4cfb2caf517ec78b86baa9d65c13" exitCode=0 Feb 19 03:40:00.023044 master-0 kubenswrapper[33867]: I0219 03:40:00.022986 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" event={"ID":"59417827-7e90-4411-aec0-d15e031ea00b","Type":"ContainerDied","Data":"f2a6852e16f3a6c977ee1b8d34d089e9c09c4cfb2caf517ec78b86baa9d65c13"} Feb 19 03:40:00.023130 master-0 kubenswrapper[33867]: I0219 03:40:00.023059 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" event={"ID":"59417827-7e90-4411-aec0-d15e031ea00b","Type":"ContainerStarted","Data":"95719529cfe4f3e191d7f7d4acdde41b3cff425d31bcfeb0a4b5e9e3d786355a"} Feb 19 03:40:00.562214 master-0 kubenswrapper[33867]: I0219 03:40:00.562159 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j8t8n" Feb 19 03:40:00.572165 master-0 kubenswrapper[33867]: I0219 03:40:00.572088 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:40:00.628489 master-0 kubenswrapper[33867]: I0219 03:40:00.628414 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcfl2\" (UniqueName: \"kubernetes.io/projected/32f19ad3-7091-420d-8d57-8ee226e6930a-kube-api-access-xcfl2\") pod \"32f19ad3-7091-420d-8d57-8ee226e6930a\" (UID: \"32f19ad3-7091-420d-8d57-8ee226e6930a\") " Feb 19 03:40:00.628720 master-0 kubenswrapper[33867]: I0219 03:40:00.628533 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-scripts\") pod \"bcb7d698-7d33-497a-9001-863bccf183be\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " Feb 19 03:40:00.628720 master-0 kubenswrapper[33867]: I0219 03:40:00.628596 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run-ovn\") pod \"bcb7d698-7d33-497a-9001-863bccf183be\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " Feb 19 03:40:00.628720 master-0 kubenswrapper[33867]: I0219 03:40:00.628655 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32f19ad3-7091-420d-8d57-8ee226e6930a-operator-scripts\") pod \"32f19ad3-7091-420d-8d57-8ee226e6930a\" (UID: \"32f19ad3-7091-420d-8d57-8ee226e6930a\") " Feb 19 03:40:00.628879 master-0 kubenswrapper[33867]: I0219 03:40:00.628759 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqrdk\" (UniqueName: \"kubernetes.io/projected/bcb7d698-7d33-497a-9001-863bccf183be-kube-api-access-dqrdk\") pod \"bcb7d698-7d33-497a-9001-863bccf183be\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " Feb 19 03:40:00.628879 master-0 kubenswrapper[33867]: I0219 03:40:00.628799 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-additional-scripts\") pod \"bcb7d698-7d33-497a-9001-863bccf183be\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " Feb 19 03:40:00.628879 master-0 kubenswrapper[33867]: I0219 03:40:00.628828 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run\") pod \"bcb7d698-7d33-497a-9001-863bccf183be\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " Feb 19 03:40:00.629017 master-0 kubenswrapper[33867]: I0219 03:40:00.628891 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-log-ovn\") pod \"bcb7d698-7d33-497a-9001-863bccf183be\" (UID: \"bcb7d698-7d33-497a-9001-863bccf183be\") " Feb 19 03:40:00.629573 master-0 kubenswrapper[33867]: I0219 03:40:00.629533 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bcb7d698-7d33-497a-9001-863bccf183be" (UID: "bcb7d698-7d33-497a-9001-863bccf183be"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:40:00.630499 master-0 kubenswrapper[33867]: I0219 03:40:00.630459 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32f19ad3-7091-420d-8d57-8ee226e6930a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32f19ad3-7091-420d-8d57-8ee226e6930a" (UID: "32f19ad3-7091-420d-8d57-8ee226e6930a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:00.632647 master-0 kubenswrapper[33867]: I0219 03:40:00.632606 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-scripts" (OuterVolumeSpecName: "scripts") pod "bcb7d698-7d33-497a-9001-863bccf183be" (UID: "bcb7d698-7d33-497a-9001-863bccf183be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:00.632761 master-0 kubenswrapper[33867]: I0219 03:40:00.632661 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bcb7d698-7d33-497a-9001-863bccf183be" (UID: "bcb7d698-7d33-497a-9001-863bccf183be"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:40:00.633465 master-0 kubenswrapper[33867]: I0219 03:40:00.633427 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bcb7d698-7d33-497a-9001-863bccf183be" (UID: "bcb7d698-7d33-497a-9001-863bccf183be"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:00.637325 master-0 kubenswrapper[33867]: I0219 03:40:00.636491 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb7d698-7d33-497a-9001-863bccf183be-kube-api-access-dqrdk" (OuterVolumeSpecName: "kube-api-access-dqrdk") pod "bcb7d698-7d33-497a-9001-863bccf183be" (UID: "bcb7d698-7d33-497a-9001-863bccf183be"). InnerVolumeSpecName "kube-api-access-dqrdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:00.637325 master-0 kubenswrapper[33867]: I0219 03:40:00.636545 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run" (OuterVolumeSpecName: "var-run") pod "bcb7d698-7d33-497a-9001-863bccf183be" (UID: "bcb7d698-7d33-497a-9001-863bccf183be"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:40:00.640710 master-0 kubenswrapper[33867]: I0219 03:40:00.639652 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f19ad3-7091-420d-8d57-8ee226e6930a-kube-api-access-xcfl2" (OuterVolumeSpecName: "kube-api-access-xcfl2") pod "32f19ad3-7091-420d-8d57-8ee226e6930a" (UID: "32f19ad3-7091-420d-8d57-8ee226e6930a"). InnerVolumeSpecName "kube-api-access-xcfl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:00.731355 master-0 kubenswrapper[33867]: I0219 03:40:00.731282 33867 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:00.731355 master-0 kubenswrapper[33867]: I0219 03:40:00.731344 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcfl2\" (UniqueName: \"kubernetes.io/projected/32f19ad3-7091-420d-8d57-8ee226e6930a-kube-api-access-xcfl2\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:00.731355 master-0 kubenswrapper[33867]: I0219 03:40:00.731365 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:00.731683 master-0 kubenswrapper[33867]: I0219 03:40:00.731381 33867 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:00.731683 master-0 kubenswrapper[33867]: I0219 03:40:00.731395 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32f19ad3-7091-420d-8d57-8ee226e6930a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:00.731683 master-0 kubenswrapper[33867]: I0219 03:40:00.731408 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqrdk\" (UniqueName: \"kubernetes.io/projected/bcb7d698-7d33-497a-9001-863bccf183be-kube-api-access-dqrdk\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:00.731683 master-0 kubenswrapper[33867]: I0219 03:40:00.731419 33867 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb7d698-7d33-497a-9001-863bccf183be-additional-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:00.731683 master-0 kubenswrapper[33867]: I0219 03:40:00.731431 33867 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bcb7d698-7d33-497a-9001-863bccf183be-var-run\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:01.036702 master-0 kubenswrapper[33867]: I0219 03:40:01.036630 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j8t8n" Feb 19 03:40:01.037509 master-0 kubenswrapper[33867]: I0219 03:40:01.037128 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j8t8n" event={"ID":"32f19ad3-7091-420d-8d57-8ee226e6930a","Type":"ContainerDied","Data":"d2086e2219bd6e279ec397b63f05c0aaa41626759860ddd7b4705c5be5336483"} Feb 19 03:40:01.037509 master-0 kubenswrapper[33867]: I0219 03:40:01.037188 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2086e2219bd6e279ec397b63f05c0aaa41626759860ddd7b4705c5be5336483" Feb 19 03:40:01.040228 master-0 kubenswrapper[33867]: I0219 03:40:01.040173 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96jnp-config-fdhz9" Feb 19 03:40:01.040763 master-0 kubenswrapper[33867]: I0219 03:40:01.040685 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96jnp-config-fdhz9" event={"ID":"bcb7d698-7d33-497a-9001-863bccf183be","Type":"ContainerDied","Data":"501267bf05893fe662765ac764d1cee1bc277a7bdcc20a5e9b3f2bc04d530a53"} Feb 19 03:40:01.040763 master-0 kubenswrapper[33867]: I0219 03:40:01.040732 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="501267bf05893fe662765ac764d1cee1bc277a7bdcc20a5e9b3f2bc04d530a53" Feb 19 03:40:01.043038 master-0 kubenswrapper[33867]: I0219 03:40:01.043010 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" event={"ID":"59417827-7e90-4411-aec0-d15e031ea00b","Type":"ContainerStarted","Data":"a036d8828b611a616689f4e8701e33bdeaf029edde132c682f779dcca94b52ee"} Feb 19 03:40:01.043367 master-0 kubenswrapper[33867]: I0219 03:40:01.043321 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:40:01.084587 master-0 kubenswrapper[33867]: I0219 03:40:01.084490 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" podStartSLOduration=3.08446292 podStartE2EDuration="3.08446292s" podCreationTimestamp="2026-02-19 03:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:40:01.073588782 +0000 UTC m=+1006.370259393" watchObservedRunningTime="2026-02-19 03:40:01.08446292 +0000 UTC m=+1006.381133531" Feb 19 03:40:02.292754 master-0 kubenswrapper[33867]: I0219 03:40:02.292657 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-96jnp-config-fdhz9"] Feb 19 03:40:02.300939 master-0 kubenswrapper[33867]: I0219 03:40:02.300850 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-96jnp-config-fdhz9"] Feb 19 03:40:02.967693 master-0 kubenswrapper[33867]: I0219 03:40:02.967626 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb7d698-7d33-497a-9001-863bccf183be" path="/var/lib/kubelet/pods/bcb7d698-7d33-497a-9001-863bccf183be/volumes" Feb 19 03:40:07.105794 master-0 kubenswrapper[33867]: I0219 03:40:07.105746 33867 generic.go:334] "Generic (PLEG): container finished" podID="fdb02f35-95af-4c12-b5c6-d936cddcbf51" containerID="8a7125323eb472c003a3405cbff1282d27339bd6440c527e9ed93cff3a27a964" exitCode=0 Feb 19 03:40:07.105794 master-0 kubenswrapper[33867]: I0219 03:40:07.105793 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ggcz5" event={"ID":"fdb02f35-95af-4c12-b5c6-d936cddcbf51","Type":"ContainerDied","Data":"8a7125323eb472c003a3405cbff1282d27339bd6440c527e9ed93cff3a27a964"} Feb 19 03:40:08.679580 master-0 kubenswrapper[33867]: I0219 03:40:08.679515 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ggcz5" Feb 19 03:40:08.702412 master-0 kubenswrapper[33867]: I0219 03:40:08.702335 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:40:08.725148 master-0 kubenswrapper[33867]: I0219 03:40:08.725063 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzr9r\" (UniqueName: \"kubernetes.io/projected/fdb02f35-95af-4c12-b5c6-d936cddcbf51-kube-api-access-rzr9r\") pod \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " Feb 19 03:40:08.725438 master-0 kubenswrapper[33867]: I0219 03:40:08.725287 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-config-data\") pod \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " Feb 19 03:40:08.725438 master-0 kubenswrapper[33867]: I0219 03:40:08.725322 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-combined-ca-bundle\") pod \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " Feb 19 03:40:08.725438 master-0 kubenswrapper[33867]: I0219 03:40:08.725413 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-db-sync-config-data\") pod \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\" (UID: \"fdb02f35-95af-4c12-b5c6-d936cddcbf51\") " Feb 19 03:40:08.730476 master-0 kubenswrapper[33867]: I0219 03:40:08.730422 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fdb02f35-95af-4c12-b5c6-d936cddcbf51" (UID: "fdb02f35-95af-4c12-b5c6-d936cddcbf51"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:08.757527 master-0 kubenswrapper[33867]: I0219 03:40:08.753432 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb02f35-95af-4c12-b5c6-d936cddcbf51-kube-api-access-rzr9r" (OuterVolumeSpecName: "kube-api-access-rzr9r") pod "fdb02f35-95af-4c12-b5c6-d936cddcbf51" (UID: "fdb02f35-95af-4c12-b5c6-d936cddcbf51"). InnerVolumeSpecName "kube-api-access-rzr9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:08.800793 master-0 kubenswrapper[33867]: I0219 03:40:08.800000 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 03:40:08.806506 master-0 kubenswrapper[33867]: I0219 03:40:08.804694 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdb02f35-95af-4c12-b5c6-d936cddcbf51" (UID: "fdb02f35-95af-4c12-b5c6-d936cddcbf51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:08.812429 master-0 kubenswrapper[33867]: I0219 03:40:08.812359 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-7zmsl"] Feb 19 03:40:08.812661 master-0 kubenswrapper[33867]: I0219 03:40:08.812621 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" podUID="bd02c363-1edd-4046-b242-331863944386" containerName="dnsmasq-dns" containerID="cri-o://747861d5eb3cc4c6bd54d5a5145842ab4d375cd25b552340595eec0d2be13ebc" gracePeriod=10 Feb 19 03:40:08.828388 master-0 kubenswrapper[33867]: I0219 03:40:08.828216 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-config-data" (OuterVolumeSpecName: "config-data") pod "fdb02f35-95af-4c12-b5c6-d936cddcbf51" (UID: "fdb02f35-95af-4c12-b5c6-d936cddcbf51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:08.829062 master-0 kubenswrapper[33867]: I0219 03:40:08.828893 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzr9r\" (UniqueName: \"kubernetes.io/projected/fdb02f35-95af-4c12-b5c6-d936cddcbf51-kube-api-access-rzr9r\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:08.829062 master-0 kubenswrapper[33867]: I0219 03:40:08.828927 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:08.829062 master-0 kubenswrapper[33867]: I0219 03:40:08.828936 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:08.829062 master-0 kubenswrapper[33867]: I0219 03:40:08.828947 33867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdb02f35-95af-4c12-b5c6-d936cddcbf51-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:09.153867 master-0 kubenswrapper[33867]: I0219 03:40:09.153210 33867 generic.go:334] "Generic (PLEG): container finished" podID="bd02c363-1edd-4046-b242-331863944386" containerID="747861d5eb3cc4c6bd54d5a5145842ab4d375cd25b552340595eec0d2be13ebc" exitCode=0 Feb 19 03:40:09.153867 master-0 kubenswrapper[33867]: I0219 03:40:09.153370 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" event={"ID":"bd02c363-1edd-4046-b242-331863944386","Type":"ContainerDied","Data":"747861d5eb3cc4c6bd54d5a5145842ab4d375cd25b552340595eec0d2be13ebc"} Feb 19 03:40:09.159862 master-0 kubenswrapper[33867]: I0219 03:40:09.159802 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ggcz5" Feb 19 03:40:09.162829 master-0 kubenswrapper[33867]: I0219 03:40:09.156654 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ggcz5" event={"ID":"fdb02f35-95af-4c12-b5c6-d936cddcbf51","Type":"ContainerDied","Data":"77bade944a5651c284a0c90d26c7cecaa332374322da041ee71558a032944673"} Feb 19 03:40:09.163442 master-0 kubenswrapper[33867]: I0219 03:40:09.162865 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77bade944a5651c284a0c90d26c7cecaa332374322da041ee71558a032944673" Feb 19 03:40:09.425063 master-0 kubenswrapper[33867]: I0219 03:40:09.424727 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:40:09.468458 master-0 kubenswrapper[33867]: I0219 03:40:09.467057 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-config\") pod \"bd02c363-1edd-4046-b242-331863944386\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " Feb 19 03:40:09.468458 master-0 kubenswrapper[33867]: I0219 03:40:09.467157 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6wxm\" (UniqueName: \"kubernetes.io/projected/bd02c363-1edd-4046-b242-331863944386-kube-api-access-v6wxm\") pod \"bd02c363-1edd-4046-b242-331863944386\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " Feb 19 03:40:09.468458 master-0 kubenswrapper[33867]: I0219 03:40:09.467197 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-nb\") pod \"bd02c363-1edd-4046-b242-331863944386\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " Feb 19 03:40:09.468458 master-0 kubenswrapper[33867]: I0219 03:40:09.467453 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-dns-svc\") pod \"bd02c363-1edd-4046-b242-331863944386\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " Feb 19 03:40:09.468458 master-0 kubenswrapper[33867]: I0219 03:40:09.467495 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-sb\") pod \"bd02c363-1edd-4046-b242-331863944386\" (UID: \"bd02c363-1edd-4046-b242-331863944386\") " Feb 19 03:40:09.472522 master-0 kubenswrapper[33867]: I0219 03:40:09.472455 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd02c363-1edd-4046-b242-331863944386-kube-api-access-v6wxm" (OuterVolumeSpecName: "kube-api-access-v6wxm") pod "bd02c363-1edd-4046-b242-331863944386" (UID: "bd02c363-1edd-4046-b242-331863944386"). InnerVolumeSpecName "kube-api-access-v6wxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:09.526073 master-0 kubenswrapper[33867]: I0219 03:40:09.526009 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd02c363-1edd-4046-b242-331863944386" (UID: "bd02c363-1edd-4046-b242-331863944386"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:09.532395 master-0 kubenswrapper[33867]: I0219 03:40:09.532347 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd02c363-1edd-4046-b242-331863944386" (UID: "bd02c363-1edd-4046-b242-331863944386"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:09.535834 master-0 kubenswrapper[33867]: I0219 03:40:09.535782 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd02c363-1edd-4046-b242-331863944386" (UID: "bd02c363-1edd-4046-b242-331863944386"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:09.552007 master-0 kubenswrapper[33867]: I0219 03:40:09.551936 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-config" (OuterVolumeSpecName: "config") pod "bd02c363-1edd-4046-b242-331863944386" (UID: "bd02c363-1edd-4046-b242-331863944386"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:09.573272 master-0 kubenswrapper[33867]: I0219 03:40:09.573157 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:09.573272 master-0 kubenswrapper[33867]: I0219 03:40:09.573231 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6wxm\" (UniqueName: \"kubernetes.io/projected/bd02c363-1edd-4046-b242-331863944386-kube-api-access-v6wxm\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:09.573272 master-0 kubenswrapper[33867]: I0219 03:40:09.573271 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:09.573272 master-0 kubenswrapper[33867]: I0219 03:40:09.573288 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:09.573647 master-0 kubenswrapper[33867]: I0219 03:40:09.573304 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd02c363-1edd-4046-b242-331863944386-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: I0219 03:40:09.924848 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9bb676bc9-rr48p"] Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: E0219 03:40:09.933271 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb02f35-95af-4c12-b5c6-d936cddcbf51" containerName="glance-db-sync" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: I0219 03:40:09.933322 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb02f35-95af-4c12-b5c6-d936cddcbf51" containerName="glance-db-sync" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: E0219 03:40:09.933350 33867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bcb7d698-7d33-497a-9001-863bccf183be" containerName="ovn-config" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: I0219 03:40:09.933356 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb7d698-7d33-497a-9001-863bccf183be" containerName="ovn-config" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: E0219 03:40:09.933396 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd02c363-1edd-4046-b242-331863944386" containerName="dnsmasq-dns" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: I0219 03:40:09.933406 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd02c363-1edd-4046-b242-331863944386" containerName="dnsmasq-dns" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: E0219 03:40:09.933462 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd02c363-1edd-4046-b242-331863944386" containerName="init" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: I0219 03:40:09.933470 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd02c363-1edd-4046-b242-331863944386" containerName="init" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: E0219 03:40:09.933484 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f19ad3-7091-420d-8d57-8ee226e6930a" containerName="mariadb-account-create-update" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: I0219 03:40:09.933491 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f19ad3-7091-420d-8d57-8ee226e6930a" containerName="mariadb-account-create-update" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: I0219 03:40:09.933819 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb02f35-95af-4c12-b5c6-d936cddcbf51" containerName="glance-db-sync" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: I0219 03:40:09.933855 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f19ad3-7091-420d-8d57-8ee226e6930a" containerName="mariadb-account-create-update" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: I0219 03:40:09.933869 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd02c363-1edd-4046-b242-331863944386" containerName="dnsmasq-dns" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: I0219 03:40:09.933890 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb7d698-7d33-497a-9001-863bccf183be" containerName="ovn-config" Feb 19 03:40:09.935668 master-0 kubenswrapper[33867]: I0219 03:40:09.935537 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:09.941285 master-0 kubenswrapper[33867]: I0219 03:40:09.941210 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9bb676bc9-rr48p"] Feb 19 03:40:10.005009 master-0 kubenswrapper[33867]: I0219 03:40:10.004933 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-nb\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.005292 master-0 kubenswrapper[33867]: I0219 03:40:10.005030 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-svc\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.005292 master-0 kubenswrapper[33867]: I0219 03:40:10.005096 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-sb\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.005292 master-0 kubenswrapper[33867]: I0219 03:40:10.005236 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-swift-storage-0\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.005455 master-0 kubenswrapper[33867]: I0219 03:40:10.005349 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzr5v\" (UniqueName: \"kubernetes.io/projected/4b16754f-37e1-41d0-842a-05b2360ea3f9-kube-api-access-mzr5v\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.005455 master-0 kubenswrapper[33867]: I0219 03:40:10.005375 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-config\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.111346 master-0 kubenswrapper[33867]: I0219 03:40:10.108153 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzr5v\" (UniqueName: \"kubernetes.io/projected/4b16754f-37e1-41d0-842a-05b2360ea3f9-kube-api-access-mzr5v\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.111346 master-0 kubenswrapper[33867]: I0219 03:40:10.108218 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-config\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " 
pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.111346 master-0 kubenswrapper[33867]: I0219 03:40:10.108601 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-nb\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.111346 master-0 kubenswrapper[33867]: I0219 03:40:10.108651 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-svc\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.111346 master-0 kubenswrapper[33867]: I0219 03:40:10.108741 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-sb\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.111346 master-0 kubenswrapper[33867]: I0219 03:40:10.108880 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-swift-storage-0\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.111346 master-0 kubenswrapper[33867]: I0219 03:40:10.109241 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-config\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.111346 master-0 kubenswrapper[33867]: I0219 03:40:10.109845 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-svc\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.111346 master-0 kubenswrapper[33867]: I0219 03:40:10.110007 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-swift-storage-0\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.111346 master-0 kubenswrapper[33867]: I0219 03:40:10.110118 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-sb\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.111346 master-0 kubenswrapper[33867]: I0219 03:40:10.110746 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-nb\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " 
pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.131448 master-0 kubenswrapper[33867]: I0219 03:40:10.125626 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzr5v\" (UniqueName: \"kubernetes.io/projected/4b16754f-37e1-41d0-842a-05b2360ea3f9-kube-api-access-mzr5v\") pod \"dnsmasq-dns-9bb676bc9-rr48p\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.178702 master-0 kubenswrapper[33867]: I0219 03:40:10.178619 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" event={"ID":"bd02c363-1edd-4046-b242-331863944386","Type":"ContainerDied","Data":"5d0c9e58262f93022ba33320d7d7cd4426dcb2649bc45a772080e006417c33a7"} Feb 19 03:40:10.178702 master-0 kubenswrapper[33867]: I0219 03:40:10.178706 33867 scope.go:117] "RemoveContainer" containerID="747861d5eb3cc4c6bd54d5a5145842ab4d375cd25b552340595eec0d2be13ebc" Feb 19 03:40:10.178982 master-0 kubenswrapper[33867]: I0219 03:40:10.178721 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-7zmsl" Feb 19 03:40:10.205364 master-0 kubenswrapper[33867]: I0219 03:40:10.205326 33867 scope.go:117] "RemoveContainer" containerID="d756b0f46b6c19317b65eefd780b8236cdc5886ea324749e1c60f8bb385a1144" Feb 19 03:40:10.232464 master-0 kubenswrapper[33867]: I0219 03:40:10.232348 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-7zmsl"] Feb 19 03:40:10.245527 master-0 kubenswrapper[33867]: I0219 03:40:10.245181 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-7zmsl"] Feb 19 03:40:10.288122 master-0 kubenswrapper[33867]: I0219 03:40:10.288020 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:10.782760 master-0 kubenswrapper[33867]: I0219 03:40:10.782698 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9bb676bc9-rr48p"] Feb 19 03:40:10.975618 master-0 kubenswrapper[33867]: I0219 03:40:10.974622 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd02c363-1edd-4046-b242-331863944386" path="/var/lib/kubelet/pods/bd02c363-1edd-4046-b242-331863944386/volumes" Feb 19 03:40:11.193204 master-0 kubenswrapper[33867]: I0219 03:40:11.193126 33867 generic.go:334] "Generic (PLEG): container finished" podID="4b16754f-37e1-41d0-842a-05b2360ea3f9" containerID="880c1f22fc7be92cdd44ae4a3742c7896ff0d350c063a16e68f6697282b2e85f" exitCode=0 Feb 19 03:40:11.193597 master-0 kubenswrapper[33867]: I0219 03:40:11.193226 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" event={"ID":"4b16754f-37e1-41d0-842a-05b2360ea3f9","Type":"ContainerDied","Data":"880c1f22fc7be92cdd44ae4a3742c7896ff0d350c063a16e68f6697282b2e85f"} Feb 19 03:40:11.193597 master-0 kubenswrapper[33867]: I0219 03:40:11.193286 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" event={"ID":"4b16754f-37e1-41d0-842a-05b2360ea3f9","Type":"ContainerStarted","Data":"c0765fddd767e3f56f4825b2f95a6a7a4d9a76a7a3894cb6f5a6c355749c0a0c"} Feb 19 03:40:12.213994 master-0 kubenswrapper[33867]: I0219 03:40:12.213932 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" event={"ID":"4b16754f-37e1-41d0-842a-05b2360ea3f9","Type":"ContainerStarted","Data":"30be8ded34fe08ac229762a1d55e716fcd25b02275e2331e3f6a9f4e5494377c"} Feb 19 03:40:12.214832 master-0 kubenswrapper[33867]: I0219 03:40:12.214113 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:12.252743 master-0 kubenswrapper[33867]: I0219 03:40:12.252381 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" podStartSLOduration=3.25235731 podStartE2EDuration="3.25235731s" podCreationTimestamp="2026-02-19 03:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:40:12.244724274 +0000 UTC m=+1017.541394885" watchObservedRunningTime="2026-02-19 03:40:12.25235731 +0000 UTC m=+1017.549027931" Feb 19 03:40:13.987996 master-0 kubenswrapper[33867]: I0219 03:40:13.987515 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 03:40:14.410789 master-0 kubenswrapper[33867]: I0219 03:40:14.410650 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-f8sf9"] Feb 19 03:40:14.421461 master-0 kubenswrapper[33867]: I0219 03:40:14.421375 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-f8sf9" Feb 19 03:40:14.438106 master-0 kubenswrapper[33867]: I0219 03:40:14.436281 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f8sf9"] Feb 19 03:40:14.525861 master-0 kubenswrapper[33867]: I0219 03:40:14.525632 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4535337-2a9c-4883-b1c4-f3b066d521e6-operator-scripts\") pod \"cinder-db-create-f8sf9\" (UID: \"c4535337-2a9c-4883-b1c4-f3b066d521e6\") " pod="openstack/cinder-db-create-f8sf9" Feb 19 03:40:14.526181 master-0 kubenswrapper[33867]: I0219 03:40:14.525989 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htm2t\" (UniqueName: \"kubernetes.io/projected/c4535337-2a9c-4883-b1c4-f3b066d521e6-kube-api-access-htm2t\") pod \"cinder-db-create-f8sf9\" (UID: \"c4535337-2a9c-4883-b1c4-f3b066d521e6\") " pod="openstack/cinder-db-create-f8sf9" Feb 19 03:40:14.601238 master-0 kubenswrapper[33867]: I0219 03:40:14.600388 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dcdf-account-create-update-5j6ts"] Feb 19 03:40:14.605307 master-0 kubenswrapper[33867]: I0219 03:40:14.602841 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dcdf-account-create-update-5j6ts" Feb 19 03:40:14.612294 master-0 kubenswrapper[33867]: I0219 03:40:14.607097 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 03:40:14.643314 master-0 kubenswrapper[33867]: I0219 03:40:14.629590 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4535337-2a9c-4883-b1c4-f3b066d521e6-operator-scripts\") pod \"cinder-db-create-f8sf9\" (UID: \"c4535337-2a9c-4883-b1c4-f3b066d521e6\") " pod="openstack/cinder-db-create-f8sf9" Feb 19 03:40:14.643314 master-0 kubenswrapper[33867]: I0219 03:40:14.629827 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-operator-scripts\") pod \"cinder-dcdf-account-create-update-5j6ts\" (UID: \"a5af8566-34bb-49b8-821d-8b3c4d1aeb21\") " pod="openstack/cinder-dcdf-account-create-update-5j6ts" Feb 19 03:40:14.643314 master-0 kubenswrapper[33867]: I0219 03:40:14.629904 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwmx6\" (UniqueName: \"kubernetes.io/projected/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-kube-api-access-jwmx6\") pod \"cinder-dcdf-account-create-update-5j6ts\" (UID: \"a5af8566-34bb-49b8-821d-8b3c4d1aeb21\") " pod="openstack/cinder-dcdf-account-create-update-5j6ts" Feb 19 03:40:14.643314 master-0 kubenswrapper[33867]: I0219 03:40:14.629925 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htm2t\" (UniqueName: \"kubernetes.io/projected/c4535337-2a9c-4883-b1c4-f3b066d521e6-kube-api-access-htm2t\") pod \"cinder-db-create-f8sf9\" (UID: \"c4535337-2a9c-4883-b1c4-f3b066d521e6\") " pod="openstack/cinder-db-create-f8sf9" Feb 19 03:40:14.643314 master-0 kubenswrapper[33867]: I0219 03:40:14.642447 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4535337-2a9c-4883-b1c4-f3b066d521e6-operator-scripts\") pod \"cinder-db-create-f8sf9\" (UID: \"c4535337-2a9c-4883-b1c4-f3b066d521e6\") " pod="openstack/cinder-db-create-f8sf9" Feb 19 03:40:14.660352 master-0 kubenswrapper[33867]: I0219 03:40:14.660282 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dcdf-account-create-update-5j6ts"] Feb 19 03:40:14.668293 master-0 kubenswrapper[33867]: I0219 03:40:14.664953 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htm2t\" (UniqueName: \"kubernetes.io/projected/c4535337-2a9c-4883-b1c4-f3b066d521e6-kube-api-access-htm2t\") pod \"cinder-db-create-f8sf9\" (UID: \"c4535337-2a9c-4883-b1c4-f3b066d521e6\") " pod="openstack/cinder-db-create-f8sf9" Feb 19 03:40:14.761235 master-0 kubenswrapper[33867]: I0219 03:40:14.759804 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-operator-scripts\") pod \"cinder-dcdf-account-create-update-5j6ts\" (UID: \"a5af8566-34bb-49b8-821d-8b3c4d1aeb21\") " pod="openstack/cinder-dcdf-account-create-update-5j6ts" Feb 19 03:40:14.761235 master-0 kubenswrapper[33867]: I0219 03:40:14.758604 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-operator-scripts\") pod \"cinder-dcdf-account-create-update-5j6ts\" (UID: \"a5af8566-34bb-49b8-821d-8b3c4d1aeb21\") " pod="openstack/cinder-dcdf-account-create-update-5j6ts" Feb 19 03:40:14.761235 master-0 kubenswrapper[33867]: I0219 03:40:14.760187 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwmx6\" (UniqueName: \"kubernetes.io/projected/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-kube-api-access-jwmx6\") pod \"cinder-dcdf-account-create-update-5j6ts\" (UID: \"a5af8566-34bb-49b8-821d-8b3c4d1aeb21\") " pod="openstack/cinder-dcdf-account-create-update-5j6ts" Feb 19 03:40:14.779508 master-0 kubenswrapper[33867]: I0219 03:40:14.777493 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-scqnr"] Feb 19 03:40:14.779830 master-0 kubenswrapper[33867]: I0219 03:40:14.779644 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-scqnr" Feb 19 03:40:14.788936 master-0 kubenswrapper[33867]: I0219 03:40:14.788882 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwmx6\" (UniqueName: \"kubernetes.io/projected/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-kube-api-access-jwmx6\") pod \"cinder-dcdf-account-create-update-5j6ts\" (UID: \"a5af8566-34bb-49b8-821d-8b3c4d1aeb21\") " pod="openstack/cinder-dcdf-account-create-update-5j6ts" Feb 19 03:40:14.809312 master-0 kubenswrapper[33867]: I0219 03:40:14.809065 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-scqnr"] Feb 19 03:40:14.860583 master-0 kubenswrapper[33867]: I0219 03:40:14.860458 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-f8sf9" Feb 19 03:40:14.862684 master-0 kubenswrapper[33867]: I0219 03:40:14.862225 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5f772a6-d473-476d-bd72-4af600e017bf-operator-scripts\") pod \"neutron-db-create-scqnr\" (UID: \"a5f772a6-d473-476d-bd72-4af600e017bf\") " pod="openstack/neutron-db-create-scqnr" Feb 19 03:40:14.862684 master-0 kubenswrapper[33867]: I0219 03:40:14.862442 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktqsw\" (UniqueName: \"kubernetes.io/projected/a5f772a6-d473-476d-bd72-4af600e017bf-kube-api-access-ktqsw\") pod \"neutron-db-create-scqnr\" (UID: \"a5f772a6-d473-476d-bd72-4af600e017bf\") " pod="openstack/neutron-db-create-scqnr" Feb 19 03:40:14.976798 master-0 kubenswrapper[33867]: I0219 03:40:14.975315 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5f772a6-d473-476d-bd72-4af600e017bf-operator-scripts\") pod \"neutron-db-create-scqnr\" (UID: \"a5f772a6-d473-476d-bd72-4af600e017bf\") " pod="openstack/neutron-db-create-scqnr" Feb 19 03:40:14.977232 master-0 kubenswrapper[33867]: I0219 03:40:14.977090 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f7f8-account-create-update-r5x64"] Feb 19 03:40:14.977922 master-0 kubenswrapper[33867]: I0219 03:40:14.977445 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5f772a6-d473-476d-bd72-4af600e017bf-operator-scripts\") pod \"neutron-db-create-scqnr\" (UID: \"a5f772a6-d473-476d-bd72-4af600e017bf\") " pod="openstack/neutron-db-create-scqnr" Feb 19 03:40:14.977922 master-0 kubenswrapper[33867]: I0219 03:40:14.977567 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktqsw\" (UniqueName: \"kubernetes.io/projected/a5f772a6-d473-476d-bd72-4af600e017bf-kube-api-access-ktqsw\") pod \"neutron-db-create-scqnr\" (UID: \"a5f772a6-d473-476d-bd72-4af600e017bf\") " pod="openstack/neutron-db-create-scqnr" Feb 19 03:40:14.996331 master-0 kubenswrapper[33867]: I0219 03:40:14.982986 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f7f8-account-create-update-r5x64" Feb 19 03:40:14.997005 master-0 kubenswrapper[33867]: I0219 03:40:14.996958 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 03:40:15.017286 master-0 kubenswrapper[33867]: I0219 03:40:15.012477 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f7f8-account-create-update-r5x64"] Feb 19 03:40:15.024777 master-0 kubenswrapper[33867]: I0219 03:40:15.024726 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktqsw\" (UniqueName: \"kubernetes.io/projected/a5f772a6-d473-476d-bd72-4af600e017bf-kube-api-access-ktqsw\") pod \"neutron-db-create-scqnr\" (UID: \"a5f772a6-d473-476d-bd72-4af600e017bf\") " pod="openstack/neutron-db-create-scqnr" Feb 19 03:40:15.037589 master-0 kubenswrapper[33867]: I0219 03:40:15.037393 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ctljd"] Feb 19 03:40:15.043687 master-0 kubenswrapper[33867]: I0219 03:40:15.039336 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:15.095602 master-0 kubenswrapper[33867]: I0219 03:40:15.092845 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dcdf-account-create-update-5j6ts" Feb 19 03:40:15.102508 master-0 kubenswrapper[33867]: I0219 03:40:15.101153 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 03:40:15.111810 master-0 kubenswrapper[33867]: I0219 03:40:15.108098 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815c75b3-ac10-40a0-8467-8a168c2ff550-operator-scripts\") pod \"neutron-f7f8-account-create-update-r5x64\" (UID: \"815c75b3-ac10-40a0-8467-8a168c2ff550\") " pod="openstack/neutron-f7f8-account-create-update-r5x64" Feb 19 03:40:15.111810 master-0 kubenswrapper[33867]: I0219 03:40:15.108218 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m96s4\" (UniqueName: \"kubernetes.io/projected/815c75b3-ac10-40a0-8467-8a168c2ff550-kube-api-access-m96s4\") pod \"neutron-f7f8-account-create-update-r5x64\" (UID: \"815c75b3-ac10-40a0-8467-8a168c2ff550\") " pod="openstack/neutron-f7f8-account-create-update-r5x64" Feb 19 03:40:15.111810 master-0 kubenswrapper[33867]: I0219 03:40:15.108381 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 03:40:15.111810 master-0 kubenswrapper[33867]: I0219 03:40:15.108872 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 03:40:15.153911 master-0 kubenswrapper[33867]: I0219 03:40:15.149692 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ctljd"] Feb 19 03:40:15.153911 master-0 kubenswrapper[33867]: I0219 03:40:15.153279 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-scqnr" Feb 19 03:40:15.238518 master-0 kubenswrapper[33867]: I0219 03:40:15.238428 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-combined-ca-bundle\") pod \"keystone-db-sync-ctljd\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:15.238919 master-0 kubenswrapper[33867]: I0219 03:40:15.238599 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815c75b3-ac10-40a0-8467-8a168c2ff550-operator-scripts\") pod \"neutron-f7f8-account-create-update-r5x64\" (UID: \"815c75b3-ac10-40a0-8467-8a168c2ff550\") " pod="openstack/neutron-f7f8-account-create-update-r5x64" Feb 19 03:40:15.243963 master-0 kubenswrapper[33867]: I0219 03:40:15.241810 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m96s4\" (UniqueName: \"kubernetes.io/projected/815c75b3-ac10-40a0-8467-8a168c2ff550-kube-api-access-m96s4\") pod \"neutron-f7f8-account-create-update-r5x64\" (UID: \"815c75b3-ac10-40a0-8467-8a168c2ff550\") " pod="openstack/neutron-f7f8-account-create-update-r5x64" Feb 19 03:40:15.243963 master-0 kubenswrapper[33867]: I0219 03:40:15.241898 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815c75b3-ac10-40a0-8467-8a168c2ff550-operator-scripts\") pod \"neutron-f7f8-account-create-update-r5x64\" (UID: \"815c75b3-ac10-40a0-8467-8a168c2ff550\") " pod="openstack/neutron-f7f8-account-create-update-r5x64" Feb 19 03:40:15.243963 master-0 kubenswrapper[33867]: I0219 03:40:15.242207 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-config-data\") pod \"keystone-db-sync-ctljd\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:15.243963 master-0 kubenswrapper[33867]: I0219 03:40:15.242472 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2jqw\" (UniqueName: \"kubernetes.io/projected/5a336761-686b-44e6-b441-b76aebf36dba-kube-api-access-p2jqw\") pod \"keystone-db-sync-ctljd\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:15.261724 master-0 kubenswrapper[33867]: I0219 03:40:15.261651 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m96s4\" (UniqueName: \"kubernetes.io/projected/815c75b3-ac10-40a0-8467-8a168c2ff550-kube-api-access-m96s4\") pod \"neutron-f7f8-account-create-update-r5x64\" (UID: \"815c75b3-ac10-40a0-8467-8a168c2ff550\") " pod="openstack/neutron-f7f8-account-create-update-r5x64" Feb 19 03:40:15.359150 master-0 kubenswrapper[33867]: I0219 03:40:15.359083 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-config-data\") pod \"keystone-db-sync-ctljd\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:15.359306 master-0 kubenswrapper[33867]: I0219 03:40:15.359275 33867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-p2jqw\" (UniqueName: \"kubernetes.io/projected/5a336761-686b-44e6-b441-b76aebf36dba-kube-api-access-p2jqw\") pod \"keystone-db-sync-ctljd\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:15.359645 master-0 kubenswrapper[33867]: I0219 03:40:15.359617 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-combined-ca-bundle\") pod \"keystone-db-sync-ctljd\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:15.371892 master-0 kubenswrapper[33867]: I0219 03:40:15.371820 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-config-data\") pod \"keystone-db-sync-ctljd\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:15.374149 master-0 kubenswrapper[33867]: I0219 03:40:15.374069 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-combined-ca-bundle\") pod \"keystone-db-sync-ctljd\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:15.381373 master-0 kubenswrapper[33867]: I0219 03:40:15.381311 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2jqw\" (UniqueName: \"kubernetes.io/projected/5a336761-686b-44e6-b441-b76aebf36dba-kube-api-access-p2jqw\") pod \"keystone-db-sync-ctljd\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:15.412310 master-0 kubenswrapper[33867]: I0219 03:40:15.412240 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f7f8-account-create-update-r5x64" Feb 19 03:40:15.450987 master-0 kubenswrapper[33867]: I0219 03:40:15.450913 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:15.481252 master-0 kubenswrapper[33867]: I0219 03:40:15.481148 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f8sf9"] Feb 19 03:40:15.757182 master-0 kubenswrapper[33867]: I0219 03:40:15.735210 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dcdf-account-create-update-5j6ts"] Feb 19 03:40:15.757182 master-0 kubenswrapper[33867]: I0219 03:40:15.754881 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 03:40:15.918441 master-0 kubenswrapper[33867]: I0219 03:40:15.918199 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-scqnr"] Feb 19 03:40:15.923771 master-0 kubenswrapper[33867]: W0219 03:40:15.923698 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5f772a6_d473_476d_bd72_4af600e017bf.slice/crio-6af6f665c7646ffe91991d663e75ffebdccc41e7ba1c64e3d6522b4a0c24f32e WatchSource:0}: Error finding container 6af6f665c7646ffe91991d663e75ffebdccc41e7ba1c64e3d6522b4a0c24f32e: Status 404 returned error can't find the container with id 6af6f665c7646ffe91991d663e75ffebdccc41e7ba1c64e3d6522b4a0c24f32e Feb 19 03:40:16.169611 master-0 kubenswrapper[33867]: W0219 03:40:16.169532 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod815c75b3_ac10_40a0_8467_8a168c2ff550.slice/crio-c93bb17a98e4a73557b6e0fd913a6a244a9f37f4e81a29eab53bc2a7feef7137 WatchSource:0}: Error finding container c93bb17a98e4a73557b6e0fd913a6a244a9f37f4e81a29eab53bc2a7feef7137: Status 404 returned error can't find the container with id c93bb17a98e4a73557b6e0fd913a6a244a9f37f4e81a29eab53bc2a7feef7137 Feb 19 03:40:16.182766 master-0 kubenswrapper[33867]: W0219 03:40:16.182720 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a336761_686b_44e6_b441_b76aebf36dba.slice/crio-b95c6ab3511793d4d1c0217afebc836761021a6d65d52211e897d9ffd18e2c60 WatchSource:0}: Error finding container b95c6ab3511793d4d1c0217afebc836761021a6d65d52211e897d9ffd18e2c60: Status 404 returned error can't find the container with id b95c6ab3511793d4d1c0217afebc836761021a6d65d52211e897d9ffd18e2c60 Feb 19 03:40:16.187341 master-0 kubenswrapper[33867]: I0219 03:40:16.187246 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f7f8-account-create-update-r5x64"] Feb 19 03:40:16.204490 master-0 kubenswrapper[33867]: I0219 03:40:16.202757 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ctljd"] Feb 19 03:40:16.293930 master-0 kubenswrapper[33867]: I0219 03:40:16.293857 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dcdf-account-create-update-5j6ts" event={"ID":"a5af8566-34bb-49b8-821d-8b3c4d1aeb21","Type":"ContainerStarted","Data":"dccf318d2ba35240729b2ebea5a3fe06c080e75ca80ff6c38921fb581d6a2b20"} Feb 19 03:40:16.294067 master-0 kubenswrapper[33867]: I0219 03:40:16.293946 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dcdf-account-create-update-5j6ts" event={"ID":"a5af8566-34bb-49b8-821d-8b3c4d1aeb21","Type":"ContainerStarted","Data":"236bd9e48aa0c4d365035ccd11f8de86913e743ace7feda68d6c96bfcb77e939"} Feb 19 03:40:16.296988 master-0 kubenswrapper[33867]: I0219 
03:40:16.296895 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ctljd" event={"ID":"5a336761-686b-44e6-b441-b76aebf36dba","Type":"ContainerStarted","Data":"b95c6ab3511793d4d1c0217afebc836761021a6d65d52211e897d9ffd18e2c60"} Feb 19 03:40:16.299991 master-0 kubenswrapper[33867]: I0219 03:40:16.299937 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-scqnr" event={"ID":"a5f772a6-d473-476d-bd72-4af600e017bf","Type":"ContainerStarted","Data":"6af6f665c7646ffe91991d663e75ffebdccc41e7ba1c64e3d6522b4a0c24f32e"} Feb 19 03:40:16.310282 master-0 kubenswrapper[33867]: I0219 03:40:16.310175 33867 generic.go:334] "Generic (PLEG): container finished" podID="c4535337-2a9c-4883-b1c4-f3b066d521e6" containerID="bcea34297c5df201a2ed94d6ba62e5fcaf5246b202f6e5fe5609379505454580" exitCode=0 Feb 19 03:40:16.310480 master-0 kubenswrapper[33867]: I0219 03:40:16.310310 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f8sf9" event={"ID":"c4535337-2a9c-4883-b1c4-f3b066d521e6","Type":"ContainerDied","Data":"bcea34297c5df201a2ed94d6ba62e5fcaf5246b202f6e5fe5609379505454580"} Feb 19 03:40:16.310480 master-0 kubenswrapper[33867]: I0219 03:40:16.310352 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f8sf9" event={"ID":"c4535337-2a9c-4883-b1c4-f3b066d521e6","Type":"ContainerStarted","Data":"9ddf6b3ede69963a2b83f073e4ef9ad2ca06baf2c4ca4848f9b736d024a3dd82"} Feb 19 03:40:16.312820 master-0 kubenswrapper[33867]: I0219 03:40:16.312776 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f7f8-account-create-update-r5x64" event={"ID":"815c75b3-ac10-40a0-8467-8a168c2ff550","Type":"ContainerStarted","Data":"c93bb17a98e4a73557b6e0fd913a6a244a9f37f4e81a29eab53bc2a7feef7137"} Feb 19 03:40:16.349694 master-0 kubenswrapper[33867]: I0219 03:40:16.348172 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dcdf-account-create-update-5j6ts" podStartSLOduration=2.348145099 podStartE2EDuration="2.348145099s" podCreationTimestamp="2026-02-19 03:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:40:16.340999447 +0000 UTC m=+1021.637670068" watchObservedRunningTime="2026-02-19 03:40:16.348145099 +0000 UTC m=+1021.644815700" Feb 19 03:40:17.328745 master-0 kubenswrapper[33867]: I0219 03:40:17.328669 33867 generic.go:334] "Generic (PLEG): container finished" podID="a5f772a6-d473-476d-bd72-4af600e017bf" containerID="445eef0f01e829b411d52965f5d442419faf3eb0b6d103d75d5df3bde27ef6d3" exitCode=0 Feb 19 03:40:17.329377 master-0 kubenswrapper[33867]: I0219 03:40:17.328764 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-scqnr" event={"ID":"a5f772a6-d473-476d-bd72-4af600e017bf","Type":"ContainerDied","Data":"445eef0f01e829b411d52965f5d442419faf3eb0b6d103d75d5df3bde27ef6d3"} Feb 19 03:40:17.334065 master-0 kubenswrapper[33867]: I0219 03:40:17.334024 33867 generic.go:334] "Generic (PLEG): container finished" podID="815c75b3-ac10-40a0-8467-8a168c2ff550" containerID="347e5e08227fc9feb7ad5a2dcaa40fc017776f6646f40cdbcd278fcf9d499e5c" exitCode=0 Feb 19 03:40:17.334234 master-0 kubenswrapper[33867]: I0219 03:40:17.334168 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f7f8-account-create-update-r5x64" 
event={"ID":"815c75b3-ac10-40a0-8467-8a168c2ff550","Type":"ContainerDied","Data":"347e5e08227fc9feb7ad5a2dcaa40fc017776f6646f40cdbcd278fcf9d499e5c"} Feb 19 03:40:17.336382 master-0 kubenswrapper[33867]: I0219 03:40:17.336340 33867 generic.go:334] "Generic (PLEG): container finished" podID="a5af8566-34bb-49b8-821d-8b3c4d1aeb21" containerID="dccf318d2ba35240729b2ebea5a3fe06c080e75ca80ff6c38921fb581d6a2b20" exitCode=0 Feb 19 03:40:17.336475 master-0 kubenswrapper[33867]: I0219 03:40:17.336407 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dcdf-account-create-update-5j6ts" event={"ID":"a5af8566-34bb-49b8-821d-8b3c4d1aeb21","Type":"ContainerDied","Data":"dccf318d2ba35240729b2ebea5a3fe06c080e75ca80ff6c38921fb581d6a2b20"} Feb 19 03:40:17.858243 master-0 kubenswrapper[33867]: I0219 03:40:17.858163 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f8sf9" Feb 19 03:40:17.951378 master-0 kubenswrapper[33867]: I0219 03:40:17.950389 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4535337-2a9c-4883-b1c4-f3b066d521e6-operator-scripts\") pod \"c4535337-2a9c-4883-b1c4-f3b066d521e6\" (UID: \"c4535337-2a9c-4883-b1c4-f3b066d521e6\") " Feb 19 03:40:17.951378 master-0 kubenswrapper[33867]: I0219 03:40:17.950506 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htm2t\" (UniqueName: \"kubernetes.io/projected/c4535337-2a9c-4883-b1c4-f3b066d521e6-kube-api-access-htm2t\") pod \"c4535337-2a9c-4883-b1c4-f3b066d521e6\" (UID: \"c4535337-2a9c-4883-b1c4-f3b066d521e6\") " Feb 19 03:40:17.952163 master-0 kubenswrapper[33867]: I0219 03:40:17.951514 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4535337-2a9c-4883-b1c4-f3b066d521e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4535337-2a9c-4883-b1c4-f3b066d521e6" (UID: "c4535337-2a9c-4883-b1c4-f3b066d521e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:17.955148 master-0 kubenswrapper[33867]: I0219 03:40:17.954896 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4535337-2a9c-4883-b1c4-f3b066d521e6-kube-api-access-htm2t" (OuterVolumeSpecName: "kube-api-access-htm2t") pod "c4535337-2a9c-4883-b1c4-f3b066d521e6" (UID: "c4535337-2a9c-4883-b1c4-f3b066d521e6"). InnerVolumeSpecName "kube-api-access-htm2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:18.060417 master-0 kubenswrapper[33867]: I0219 03:40:18.060321 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4535337-2a9c-4883-b1c4-f3b066d521e6-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:18.060417 master-0 kubenswrapper[33867]: I0219 03:40:18.060403 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htm2t\" (UniqueName: \"kubernetes.io/projected/c4535337-2a9c-4883-b1c4-f3b066d521e6-kube-api-access-htm2t\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:18.355759 master-0 kubenswrapper[33867]: I0219 03:40:18.355561 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f8sf9" event={"ID":"c4535337-2a9c-4883-b1c4-f3b066d521e6","Type":"ContainerDied","Data":"9ddf6b3ede69963a2b83f073e4ef9ad2ca06baf2c4ca4848f9b736d024a3dd82"} Feb 19 03:40:18.355759 master-0 kubenswrapper[33867]: I0219 03:40:18.355659 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ddf6b3ede69963a2b83f073e4ef9ad2ca06baf2c4ca4848f9b736d024a3dd82" Feb 19 03:40:18.355759 master-0 kubenswrapper[33867]: I0219 03:40:18.355756 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f8sf9" Feb 19 03:40:20.290184 master-0 kubenswrapper[33867]: I0219 03:40:20.289604 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:20.453458 master-0 kubenswrapper[33867]: I0219 03:40:20.451134 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d675d55f5-6zr5n"] Feb 19 03:40:20.453458 master-0 kubenswrapper[33867]: I0219 03:40:20.451481 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" podUID="59417827-7e90-4411-aec0-d15e031ea00b" containerName="dnsmasq-dns" containerID="cri-o://a036d8828b611a616689f4e8701e33bdeaf029edde132c682f779dcca94b52ee" gracePeriod=10 Feb 19 03:40:21.470779 master-0 kubenswrapper[33867]: I0219 03:40:21.470517 33867 generic.go:334] "Generic (PLEG): container finished" podID="59417827-7e90-4411-aec0-d15e031ea00b" containerID="a036d8828b611a616689f4e8701e33bdeaf029edde132c682f779dcca94b52ee" exitCode=0 Feb 19 03:40:21.484061 master-0 kubenswrapper[33867]: I0219 03:40:21.470569 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" event={"ID":"59417827-7e90-4411-aec0-d15e031ea00b","Type":"ContainerDied","Data":"a036d8828b611a616689f4e8701e33bdeaf029edde132c682f779dcca94b52ee"} Feb 19 03:40:21.484061 master-0 kubenswrapper[33867]: I0219 03:40:21.471136 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-scqnr" Feb 19 03:40:21.484061 master-0 kubenswrapper[33867]: I0219 03:40:21.475384 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dcdf-account-create-update-5j6ts" event={"ID":"a5af8566-34bb-49b8-821d-8b3c4d1aeb21","Type":"ContainerDied","Data":"236bd9e48aa0c4d365035ccd11f8de86913e743ace7feda68d6c96bfcb77e939"} Feb 19 03:40:21.484061 master-0 kubenswrapper[33867]: I0219 03:40:21.475425 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="236bd9e48aa0c4d365035ccd11f8de86913e743ace7feda68d6c96bfcb77e939" Feb 19 03:40:21.484061 master-0 kubenswrapper[33867]: I0219 03:40:21.477841 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-scqnr" event={"ID":"a5f772a6-d473-476d-bd72-4af600e017bf","Type":"ContainerDied","Data":"6af6f665c7646ffe91991d663e75ffebdccc41e7ba1c64e3d6522b4a0c24f32e"} Feb 19 03:40:21.484061 master-0 kubenswrapper[33867]: I0219 03:40:21.477862 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6af6f665c7646ffe91991d663e75ffebdccc41e7ba1c64e3d6522b4a0c24f32e" Feb 19 03:40:21.484061 master-0 kubenswrapper[33867]: I0219 03:40:21.477909 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-scqnr" Feb 19 03:40:21.484061 master-0 kubenswrapper[33867]: I0219 03:40:21.481304 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f7f8-account-create-update-r5x64" event={"ID":"815c75b3-ac10-40a0-8467-8a168c2ff550","Type":"ContainerDied","Data":"c93bb17a98e4a73557b6e0fd913a6a244a9f37f4e81a29eab53bc2a7feef7137"} Feb 19 03:40:21.484061 master-0 kubenswrapper[33867]: I0219 03:40:21.481329 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c93bb17a98e4a73557b6e0fd913a6a244a9f37f4e81a29eab53bc2a7feef7137" Feb 19 03:40:21.521068 master-0 kubenswrapper[33867]: I0219 03:40:21.517827 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f7f8-account-create-update-r5x64" Feb 19 03:40:21.543108 master-0 kubenswrapper[33867]: I0219 03:40:21.542427 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dcdf-account-create-update-5j6ts" Feb 19 03:40:21.572288 master-0 kubenswrapper[33867]: I0219 03:40:21.572210 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktqsw\" (UniqueName: \"kubernetes.io/projected/a5f772a6-d473-476d-bd72-4af600e017bf-kube-api-access-ktqsw\") pod \"a5f772a6-d473-476d-bd72-4af600e017bf\" (UID: \"a5f772a6-d473-476d-bd72-4af600e017bf\") " Feb 19 03:40:21.572755 master-0 kubenswrapper[33867]: I0219 03:40:21.572731 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5f772a6-d473-476d-bd72-4af600e017bf-operator-scripts\") pod \"a5f772a6-d473-476d-bd72-4af600e017bf\" (UID: \"a5f772a6-d473-476d-bd72-4af600e017bf\") " Feb 19 03:40:21.573046 master-0 kubenswrapper[33867]: I0219 03:40:21.573028 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815c75b3-ac10-40a0-8467-8a168c2ff550-operator-scripts\") pod \"815c75b3-ac10-40a0-8467-8a168c2ff550\" (UID: \"815c75b3-ac10-40a0-8467-8a168c2ff550\") " Feb 19 03:40:21.573336 master-0 kubenswrapper[33867]: I0219 03:40:21.573315 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m96s4\" (UniqueName: \"kubernetes.io/projected/815c75b3-ac10-40a0-8467-8a168c2ff550-kube-api-access-m96s4\") pod \"815c75b3-ac10-40a0-8467-8a168c2ff550\" (UID: \"815c75b3-ac10-40a0-8467-8a168c2ff550\") " Feb 19 03:40:21.578717 master-0 kubenswrapper[33867]: I0219 03:40:21.575812 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f772a6-d473-476d-bd72-4af600e017bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5f772a6-d473-476d-bd72-4af600e017bf" (UID: "a5f772a6-d473-476d-bd72-4af600e017bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:21.579078 master-0 kubenswrapper[33867]: I0219 03:40:21.579031 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f772a6-d473-476d-bd72-4af600e017bf-kube-api-access-ktqsw" (OuterVolumeSpecName: "kube-api-access-ktqsw") pod "a5f772a6-d473-476d-bd72-4af600e017bf" (UID: "a5f772a6-d473-476d-bd72-4af600e017bf"). InnerVolumeSpecName "kube-api-access-ktqsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:21.580359 master-0 kubenswrapper[33867]: I0219 03:40:21.579681 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktqsw\" (UniqueName: \"kubernetes.io/projected/a5f772a6-d473-476d-bd72-4af600e017bf-kube-api-access-ktqsw\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:21.580359 master-0 kubenswrapper[33867]: I0219 03:40:21.579740 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5f772a6-d473-476d-bd72-4af600e017bf-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:21.580359 master-0 kubenswrapper[33867]: I0219 03:40:21.580122 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815c75b3-ac10-40a0-8467-8a168c2ff550-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "815c75b3-ac10-40a0-8467-8a168c2ff550" (UID: "815c75b3-ac10-40a0-8467-8a168c2ff550"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:21.582402 master-0 kubenswrapper[33867]: I0219 03:40:21.582351 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815c75b3-ac10-40a0-8467-8a168c2ff550-kube-api-access-m96s4" (OuterVolumeSpecName: "kube-api-access-m96s4") pod "815c75b3-ac10-40a0-8467-8a168c2ff550" (UID: "815c75b3-ac10-40a0-8467-8a168c2ff550"). InnerVolumeSpecName "kube-api-access-m96s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:21.623953 master-0 kubenswrapper[33867]: I0219 03:40:21.623858 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:40:21.690366 master-0 kubenswrapper[33867]: I0219 03:40:21.688486 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khdqq\" (UniqueName: \"kubernetes.io/projected/59417827-7e90-4411-aec0-d15e031ea00b-kube-api-access-khdqq\") pod \"59417827-7e90-4411-aec0-d15e031ea00b\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " Feb 19 03:40:21.690366 master-0 kubenswrapper[33867]: I0219 03:40:21.688561 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-operator-scripts\") pod \"a5af8566-34bb-49b8-821d-8b3c4d1aeb21\" (UID: \"a5af8566-34bb-49b8-821d-8b3c4d1aeb21\") " Feb 19 03:40:21.690366 master-0 kubenswrapper[33867]: I0219 03:40:21.688722 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-nb\") pod \"59417827-7e90-4411-aec0-d15e031ea00b\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " Feb 19 03:40:21.690366 master-0 kubenswrapper[33867]: I0219 03:40:21.688906 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-config\") pod \"59417827-7e90-4411-aec0-d15e031ea00b\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " Feb 19 03:40:21.690366 master-0 kubenswrapper[33867]: I0219 03:40:21.689031 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-sb\") pod \"59417827-7e90-4411-aec0-d15e031ea00b\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " Feb 19 03:40:21.690366 master-0 kubenswrapper[33867]: I0219 03:40:21.689082 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-svc\") pod \"59417827-7e90-4411-aec0-d15e031ea00b\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " Feb 19 03:40:21.690366 master-0 kubenswrapper[33867]: I0219 03:40:21.689155 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwmx6\" (UniqueName: \"kubernetes.io/projected/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-kube-api-access-jwmx6\") pod \"a5af8566-34bb-49b8-821d-8b3c4d1aeb21\" (UID: \"a5af8566-34bb-49b8-821d-8b3c4d1aeb21\") " Feb 19 03:40:21.690366 master-0 kubenswrapper[33867]: I0219 03:40:21.689190 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-swift-storage-0\") pod \"59417827-7e90-4411-aec0-d15e031ea00b\" (UID: \"59417827-7e90-4411-aec0-d15e031ea00b\") " Feb 19 03:40:21.690366 master-0 kubenswrapper[33867]: I0219 03:40:21.689808 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815c75b3-ac10-40a0-8467-8a168c2ff550-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:21.690366 master-0 kubenswrapper[33867]: I0219 03:40:21.689827 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m96s4\" (UniqueName: \"kubernetes.io/projected/815c75b3-ac10-40a0-8467-8a168c2ff550-kube-api-access-m96s4\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:21.695196 master-0 kubenswrapper[33867]: I0219 03:40:21.692197 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5af8566-34bb-49b8-821d-8b3c4d1aeb21" (UID: "a5af8566-34bb-49b8-821d-8b3c4d1aeb21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:21.725873 master-0 kubenswrapper[33867]: I0219 03:40:21.725461 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-kube-api-access-jwmx6" (OuterVolumeSpecName: "kube-api-access-jwmx6") pod "a5af8566-34bb-49b8-821d-8b3c4d1aeb21" (UID: "a5af8566-34bb-49b8-821d-8b3c4d1aeb21"). InnerVolumeSpecName "kube-api-access-jwmx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:21.729165 master-0 kubenswrapper[33867]: I0219 03:40:21.726743 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59417827-7e90-4411-aec0-d15e031ea00b-kube-api-access-khdqq" (OuterVolumeSpecName: "kube-api-access-khdqq") pod "59417827-7e90-4411-aec0-d15e031ea00b" (UID: "59417827-7e90-4411-aec0-d15e031ea00b"). InnerVolumeSpecName "kube-api-access-khdqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:21.763166 master-0 kubenswrapper[33867]: I0219 03:40:21.763099 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59417827-7e90-4411-aec0-d15e031ea00b" (UID: "59417827-7e90-4411-aec0-d15e031ea00b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:21.775906 master-0 kubenswrapper[33867]: I0219 03:40:21.775806 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-config" (OuterVolumeSpecName: "config") pod "59417827-7e90-4411-aec0-d15e031ea00b" (UID: "59417827-7e90-4411-aec0-d15e031ea00b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:21.791312 master-0 kubenswrapper[33867]: I0219 03:40:21.791253 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:21.791571 master-0 kubenswrapper[33867]: I0219 03:40:21.791555 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwmx6\" (UniqueName: \"kubernetes.io/projected/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-kube-api-access-jwmx6\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:21.791644 master-0 kubenswrapper[33867]: I0219 03:40:21.791633 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khdqq\" (UniqueName: \"kubernetes.io/projected/59417827-7e90-4411-aec0-d15e031ea00b-kube-api-access-khdqq\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:21.791714 master-0 kubenswrapper[33867]: I0219 03:40:21.791704 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5af8566-34bb-49b8-821d-8b3c4d1aeb21-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:21.791776 master-0 kubenswrapper[33867]: I0219 03:40:21.791766 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:21.797498 master-0 kubenswrapper[33867]: I0219 03:40:21.797149 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59417827-7e90-4411-aec0-d15e031ea00b" (UID: "59417827-7e90-4411-aec0-d15e031ea00b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:21.810208 master-0 kubenswrapper[33867]: I0219 03:40:21.809366 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "59417827-7e90-4411-aec0-d15e031ea00b" (UID: "59417827-7e90-4411-aec0-d15e031ea00b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:21.818695 master-0 kubenswrapper[33867]: I0219 03:40:21.818047 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59417827-7e90-4411-aec0-d15e031ea00b" (UID: "59417827-7e90-4411-aec0-d15e031ea00b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:21.893826 master-0 kubenswrapper[33867]: I0219 03:40:21.893758 33867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:21.893826 master-0 kubenswrapper[33867]: I0219 03:40:21.893808 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:21.893826 master-0 kubenswrapper[33867]: I0219 03:40:21.893816 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59417827-7e90-4411-aec0-d15e031ea00b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:22.035151 master-0 kubenswrapper[33867]: E0219 03:40:22.035019 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5f772a6_d473_476d_bd72_4af600e017bf.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:40:22.496603 master-0 kubenswrapper[33867]: I0219 03:40:22.496525 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" event={"ID":"59417827-7e90-4411-aec0-d15e031ea00b","Type":"ContainerDied","Data":"95719529cfe4f3e191d7f7d4acdde41b3cff425d31bcfeb0a4b5e9e3d786355a"} Feb 19 03:40:22.497176 master-0 kubenswrapper[33867]: I0219 03:40:22.496622 33867 scope.go:117] "RemoveContainer" containerID="a036d8828b611a616689f4e8701e33bdeaf029edde132c682f779dcca94b52ee" Feb 19 03:40:22.497176 master-0 kubenswrapper[33867]: I0219 03:40:22.496813 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d675d55f5-6zr5n" Feb 19 03:40:22.503234 master-0 kubenswrapper[33867]: I0219 03:40:22.503175 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dcdf-account-create-update-5j6ts" Feb 19 03:40:22.505382 master-0 kubenswrapper[33867]: I0219 03:40:22.505317 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ctljd" event={"ID":"5a336761-686b-44e6-b441-b76aebf36dba","Type":"ContainerStarted","Data":"59978e481f8873b5dac7b2a92084e0ba9f3ec221397112dc70dc271d4b647d2c"} Feb 19 03:40:22.505611 master-0 kubenswrapper[33867]: I0219 03:40:22.505513 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f7f8-account-create-update-r5x64" Feb 19 03:40:22.534997 master-0 kubenswrapper[33867]: I0219 03:40:22.534444 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ctljd" podStartSLOduration=3.442126894 podStartE2EDuration="8.534404798s" podCreationTimestamp="2026-02-19 03:40:14 +0000 UTC" firstStartedPulling="2026-02-19 03:40:16.191386781 +0000 UTC m=+1021.488057382" lastFinishedPulling="2026-02-19 03:40:21.283664675 +0000 UTC m=+1026.580335286" observedRunningTime="2026-02-19 03:40:22.524475457 +0000 UTC m=+1027.821146068" watchObservedRunningTime="2026-02-19 03:40:22.534404798 +0000 UTC m=+1027.831075409" Feb 19 03:40:22.558009 master-0 kubenswrapper[33867]: I0219 03:40:22.557565 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d675d55f5-6zr5n"] Feb 19 03:40:22.566382 master-0 kubenswrapper[33867]: I0219 03:40:22.564381 33867 scope.go:117] "RemoveContainer" containerID="f2a6852e16f3a6c977ee1b8d34d089e9c09c4cfb2caf517ec78b86baa9d65c13" Feb 19 03:40:22.569480 master-0 kubenswrapper[33867]: I0219 03:40:22.568568 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d675d55f5-6zr5n"] Feb 19 03:40:22.969273 master-0 kubenswrapper[33867]: I0219 03:40:22.969195 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59417827-7e90-4411-aec0-d15e031ea00b" path="/var/lib/kubelet/pods/59417827-7e90-4411-aec0-d15e031ea00b/volumes" Feb 19 03:40:26.556490 master-0 kubenswrapper[33867]: I0219 03:40:26.556420 33867 generic.go:334] "Generic (PLEG): container finished" podID="5a336761-686b-44e6-b441-b76aebf36dba" containerID="59978e481f8873b5dac7b2a92084e0ba9f3ec221397112dc70dc271d4b647d2c" exitCode=0 Feb 19 03:40:26.556490 master-0 kubenswrapper[33867]: I0219 03:40:26.556480 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ctljd" event={"ID":"5a336761-686b-44e6-b441-b76aebf36dba","Type":"ContainerDied","Data":"59978e481f8873b5dac7b2a92084e0ba9f3ec221397112dc70dc271d4b647d2c"} Feb 19 03:40:28.021708 master-0 kubenswrapper[33867]: I0219 03:40:28.021650 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:28.149978 master-0 kubenswrapper[33867]: I0219 03:40:28.149805 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2jqw\" (UniqueName: \"kubernetes.io/projected/5a336761-686b-44e6-b441-b76aebf36dba-kube-api-access-p2jqw\") pod \"5a336761-686b-44e6-b441-b76aebf36dba\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " Feb 19 03:40:28.149978 master-0 kubenswrapper[33867]: I0219 03:40:28.149977 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-config-data\") pod \"5a336761-686b-44e6-b441-b76aebf36dba\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " Feb 19 03:40:28.150478 master-0 kubenswrapper[33867]: I0219 03:40:28.150249 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-combined-ca-bundle\") pod \"5a336761-686b-44e6-b441-b76aebf36dba\" (UID: \"5a336761-686b-44e6-b441-b76aebf36dba\") " Feb 19 03:40:28.154652 master-0 kubenswrapper[33867]: I0219 03:40:28.154595 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a336761-686b-44e6-b441-b76aebf36dba-kube-api-access-p2jqw" (OuterVolumeSpecName: "kube-api-access-p2jqw") pod "5a336761-686b-44e6-b441-b76aebf36dba" (UID: "5a336761-686b-44e6-b441-b76aebf36dba"). InnerVolumeSpecName "kube-api-access-p2jqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:28.183076 master-0 kubenswrapper[33867]: I0219 03:40:28.182909 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a336761-686b-44e6-b441-b76aebf36dba" (UID: "5a336761-686b-44e6-b441-b76aebf36dba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:28.210814 master-0 kubenswrapper[33867]: I0219 03:40:28.210737 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-config-data" (OuterVolumeSpecName: "config-data") pod "5a336761-686b-44e6-b441-b76aebf36dba" (UID: "5a336761-686b-44e6-b441-b76aebf36dba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:28.253870 master-0 kubenswrapper[33867]: I0219 03:40:28.253497 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:28.253870 master-0 kubenswrapper[33867]: I0219 03:40:28.253552 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2jqw\" (UniqueName: \"kubernetes.io/projected/5a336761-686b-44e6-b441-b76aebf36dba-kube-api-access-p2jqw\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:28.253870 master-0 kubenswrapper[33867]: I0219 03:40:28.253625 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a336761-686b-44e6-b441-b76aebf36dba-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:28.581245 master-0 kubenswrapper[33867]: I0219 03:40:28.581096 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ctljd" event={"ID":"5a336761-686b-44e6-b441-b76aebf36dba","Type":"ContainerDied","Data":"b95c6ab3511793d4d1c0217afebc836761021a6d65d52211e897d9ffd18e2c60"} Feb 19 03:40:28.581245 master-0 kubenswrapper[33867]: I0219 03:40:28.581156 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b95c6ab3511793d4d1c0217afebc836761021a6d65d52211e897d9ffd18e2c60" Feb 19 03:40:28.581245 master-0 kubenswrapper[33867]: I0219 03:40:28.581242 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ctljd" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.896887 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b4b48f6d5-qmbtd"] Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: E0219 03:40:28.897524 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f772a6-d473-476d-bd72-4af600e017bf" containerName="mariadb-database-create" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.897541 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f772a6-d473-476d-bd72-4af600e017bf" containerName="mariadb-database-create" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: E0219 03:40:28.897565 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4535337-2a9c-4883-b1c4-f3b066d521e6" containerName="mariadb-database-create" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.897573 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4535337-2a9c-4883-b1c4-f3b066d521e6" containerName="mariadb-database-create" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: E0219 03:40:28.897592 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815c75b3-ac10-40a0-8467-8a168c2ff550" containerName="mariadb-account-create-update" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.897601 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="815c75b3-ac10-40a0-8467-8a168c2ff550" containerName="mariadb-account-create-update" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: E0219 03:40:28.897655 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59417827-7e90-4411-aec0-d15e031ea00b" containerName="init" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.897661 33867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="59417827-7e90-4411-aec0-d15e031ea00b" containerName="init" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: E0219 03:40:28.897670 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5af8566-34bb-49b8-821d-8b3c4d1aeb21" containerName="mariadb-account-create-update" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.897676 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5af8566-34bb-49b8-821d-8b3c4d1aeb21" containerName="mariadb-account-create-update" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: E0219 03:40:28.897698 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a336761-686b-44e6-b441-b76aebf36dba" containerName="keystone-db-sync" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.897704 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a336761-686b-44e6-b441-b76aebf36dba" containerName="keystone-db-sync" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: E0219 03:40:28.897717 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59417827-7e90-4411-aec0-d15e031ea00b" containerName="dnsmasq-dns" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.897726 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="59417827-7e90-4411-aec0-d15e031ea00b" containerName="dnsmasq-dns" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.897944 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="815c75b3-ac10-40a0-8467-8a168c2ff550" containerName="mariadb-account-create-update" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.897978 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a336761-686b-44e6-b441-b76aebf36dba" containerName="keystone-db-sync" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.898004 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5af8566-34bb-49b8-821d-8b3c4d1aeb21" containerName="mariadb-account-create-update" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.898037 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f772a6-d473-476d-bd72-4af600e017bf" containerName="mariadb-database-create" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.898059 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="59417827-7e90-4411-aec0-d15e031ea00b" containerName="dnsmasq-dns" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.898076 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4535337-2a9c-4883-b1c4-f3b066d521e6" containerName="mariadb-database-create" Feb 19 03:40:28.906022 master-0 kubenswrapper[33867]: I0219 03:40:28.903791 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:28.978972 master-0 kubenswrapper[33867]: I0219 03:40:28.969974 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-nb\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:28.979545 master-0 kubenswrapper[33867]: I0219 03:40:28.979441 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkmxm\" (UniqueName: \"kubernetes.io/projected/4624c637-15a7-4f3f-9fb8-ce6093235893-kube-api-access-pkmxm\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:28.979859 master-0 kubenswrapper[33867]: I0219 03:40:28.979807 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-svc\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:28.980123 master-0 kubenswrapper[33867]: I0219 03:40:28.980092 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-swift-storage-0\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:28.980526 master-0 kubenswrapper[33867]: I0219 03:40:28.980473 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-sb\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:28.980807 master-0 kubenswrapper[33867]: I0219 03:40:28.980768 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-config\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.012276 master-0 kubenswrapper[33867]: I0219 03:40:29.012191 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b4b48f6d5-qmbtd"] Feb 19 03:40:29.012276 master-0 kubenswrapper[33867]: I0219 03:40:29.012269 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rkkfp"] Feb 19 03:40:29.014456 master-0 kubenswrapper[33867]: I0219 03:40:29.013828 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rkkfp"] Feb 19 03:40:29.014456 master-0 kubenswrapper[33867]: I0219 03:40:29.014003 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.043747 master-0 kubenswrapper[33867]: I0219 03:40:29.043670 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 03:40:29.044400 master-0 kubenswrapper[33867]: I0219 03:40:29.043949 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 03:40:29.044400 master-0 kubenswrapper[33867]: I0219 03:40:29.044002 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 03:40:29.044400 master-0 kubenswrapper[33867]: I0219 03:40:29.044161 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 03:40:29.087028 master-0 kubenswrapper[33867]: I0219 03:40:29.086735 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-nb\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.087239 master-0 kubenswrapper[33867]: I0219 03:40:29.087059 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-nb\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.089453 master-0 kubenswrapper[33867]: I0219 03:40:29.089423 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkmxm\" (UniqueName: \"kubernetes.io/projected/4624c637-15a7-4f3f-9fb8-ce6093235893-kube-api-access-pkmxm\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.089743 master-0 kubenswrapper[33867]: I0219 03:40:29.089718 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-svc\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.089989 master-0 kubenswrapper[33867]: I0219 03:40:29.089834 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-combined-ca-bundle\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.089989 master-0 kubenswrapper[33867]: I0219 03:40:29.089932 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-swift-storage-0\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.089989 master-0 kubenswrapper[33867]: I0219 03:40:29.089980 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpxv\" (UniqueName: \"kubernetes.io/projected/7bca7858-e242-46b5-870c-a48c10feaa1d-kube-api-access-2kpxv\") pod \"keystone-bootstrap-rkkfp\" (UID: 
\"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.090174 master-0 kubenswrapper[33867]: I0219 03:40:29.090100 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-sb\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.090233 master-0 kubenswrapper[33867]: I0219 03:40:29.090215 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-credential-keys\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.090588 master-0 kubenswrapper[33867]: I0219 03:40:29.090562 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-scripts\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.090688 master-0 kubenswrapper[33867]: I0219 03:40:29.090675 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-fernet-keys\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.090829 master-0 kubenswrapper[33867]: I0219 03:40:29.090816 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-config-data\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.090948 master-0 kubenswrapper[33867]: I0219 03:40:29.090934 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-config\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.093041 master-0 kubenswrapper[33867]: I0219 03:40:29.093021 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-swift-storage-0\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.093434 master-0 kubenswrapper[33867]: I0219 03:40:29.093408 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-sb\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.094409 master-0 kubenswrapper[33867]: I0219 03:40:29.094377 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-svc\") pod 
\"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.096232 master-0 kubenswrapper[33867]: I0219 03:40:29.096209 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-config\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.112884 master-0 kubenswrapper[33867]: I0219 03:40:29.112851 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkmxm\" (UniqueName: \"kubernetes.io/projected/4624c637-15a7-4f3f-9fb8-ce6093235893-kube-api-access-pkmxm\") pod \"dnsmasq-dns-7b4b48f6d5-qmbtd\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.192997 master-0 kubenswrapper[33867]: I0219 03:40:29.189141 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-b7dmh"] Feb 19 03:40:29.193458 master-0 kubenswrapper[33867]: I0219 03:40:29.193400 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kpxv\" (UniqueName: \"kubernetes.io/projected/7bca7858-e242-46b5-870c-a48c10feaa1d-kube-api-access-2kpxv\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.193555 master-0 kubenswrapper[33867]: I0219 03:40:29.193526 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-credential-keys\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.193602 master-0 kubenswrapper[33867]: I0219 03:40:29.193561 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-scripts\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.193602 master-0 kubenswrapper[33867]: I0219 03:40:29.193587 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-fernet-keys\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.193678 master-0 kubenswrapper[33867]: I0219 03:40:29.193619 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-config-data\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.193737 master-0 kubenswrapper[33867]: I0219 03:40:29.193712 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-combined-ca-bundle\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.202916 master-0 kubenswrapper[33867]: I0219 03:40:29.202821 33867 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ironic-db-create-b7dmh" Feb 19 03:40:29.207333 master-0 kubenswrapper[33867]: I0219 03:40:29.206599 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-scripts\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.219433 master-0 kubenswrapper[33867]: I0219 03:40:29.216212 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-config-data\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.226366 master-0 kubenswrapper[33867]: I0219 03:40:29.225586 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-combined-ca-bundle\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.231338 master-0 kubenswrapper[33867]: I0219 03:40:29.230037 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-credential-keys\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.231338 master-0 kubenswrapper[33867]: I0219 03:40:29.230904 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-fernet-keys\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.249602 master-0 kubenswrapper[33867]: I0219 03:40:29.245506 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kpxv\" (UniqueName: \"kubernetes.io/projected/7bca7858-e242-46b5-870c-a48c10feaa1d-kube-api-access-2kpxv\") pod \"keystone-bootstrap-rkkfp\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.272868 master-0 kubenswrapper[33867]: I0219 03:40:29.272562 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:29.284906 master-0 kubenswrapper[33867]: I0219 03:40:29.276099 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-b7dmh"] Feb 19 03:40:29.316592 master-0 kubenswrapper[33867]: I0219 03:40:29.314463 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4hqk\" (UniqueName: \"kubernetes.io/projected/62650dfe-cc8e-4ee2-8926-d9a80610d90c-kube-api-access-x4hqk\") pod \"ironic-db-create-b7dmh\" (UID: \"62650dfe-cc8e-4ee2-8926-d9a80610d90c\") " pod="openstack/ironic-db-create-b7dmh" Feb 19 03:40:29.316592 master-0 kubenswrapper[33867]: I0219 03:40:29.314599 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62650dfe-cc8e-4ee2-8926-d9a80610d90c-operator-scripts\") pod \"ironic-db-create-b7dmh\" (UID: \"62650dfe-cc8e-4ee2-8926-d9a80610d90c\") " pod="openstack/ironic-db-create-b7dmh" Feb 19 03:40:29.346251 master-0 kubenswrapper[33867]: I0219 03:40:29.330881 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-12f5-account-create-update-ch74c"] Feb 19 03:40:29.362500 master-0 kubenswrapper[33867]: I0219 03:40:29.362005 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-12f5-account-create-update-ch74c" Feb 19 03:40:29.365323 master-0 kubenswrapper[33867]: I0219 03:40:29.365271 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Feb 19 03:40:29.431539 master-0 kubenswrapper[33867]: I0219 03:40:29.429797 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:29.431539 master-0 kubenswrapper[33867]: I0219 03:40:29.429846 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-12f5-account-create-update-ch74c"] Feb 19 03:40:29.432476 master-0 kubenswrapper[33867]: I0219 03:40:29.432317 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62650dfe-cc8e-4ee2-8926-d9a80610d90c-operator-scripts\") pod \"ironic-db-create-b7dmh\" (UID: \"62650dfe-cc8e-4ee2-8926-d9a80610d90c\") " pod="openstack/ironic-db-create-b7dmh" Feb 19 03:40:29.432476 master-0 kubenswrapper[33867]: I0219 03:40:29.432458 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmgx\" (UniqueName: \"kubernetes.io/projected/98d74122-a24a-4d79-acd2-6071763c2d3e-kube-api-access-bhmgx\") pod \"ironic-12f5-account-create-update-ch74c\" (UID: \"98d74122-a24a-4d79-acd2-6071763c2d3e\") " pod="openstack/ironic-12f5-account-create-update-ch74c" Feb 19 03:40:29.449042 master-0 kubenswrapper[33867]: I0219 03:40:29.432781 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98d74122-a24a-4d79-acd2-6071763c2d3e-operator-scripts\") pod \"ironic-12f5-account-create-update-ch74c\" (UID: \"98d74122-a24a-4d79-acd2-6071763c2d3e\") " pod="openstack/ironic-12f5-account-create-update-ch74c" Feb 19 03:40:29.449042 master-0 kubenswrapper[33867]: I0219 03:40:29.434055 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/62650dfe-cc8e-4ee2-8926-d9a80610d90c-operator-scripts\") pod \"ironic-db-create-b7dmh\" (UID: \"62650dfe-cc8e-4ee2-8926-d9a80610d90c\") " pod="openstack/ironic-db-create-b7dmh" Feb 19 03:40:29.449042 master-0 kubenswrapper[33867]: I0219 03:40:29.434250 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4hqk\" (UniqueName: \"kubernetes.io/projected/62650dfe-cc8e-4ee2-8926-d9a80610d90c-kube-api-access-x4hqk\") pod \"ironic-db-create-b7dmh\" (UID: \"62650dfe-cc8e-4ee2-8926-d9a80610d90c\") " pod="openstack/ironic-db-create-b7dmh" Feb 19 03:40:29.450672 master-0 kubenswrapper[33867]: I0219 03:40:29.449808 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-054a4-db-sync-hjrc5"] Feb 19 03:40:29.461545 master-0 kubenswrapper[33867]: I0219 03:40:29.461488 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.478954 master-0 kubenswrapper[33867]: I0219 03:40:29.478901 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-scripts" Feb 19 03:40:29.479504 master-0 kubenswrapper[33867]: I0219 03:40:29.479459 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-config-data" Feb 19 03:40:29.515964 master-0 kubenswrapper[33867]: I0219 03:40:29.513603 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4hqk\" (UniqueName: \"kubernetes.io/projected/62650dfe-cc8e-4ee2-8926-d9a80610d90c-kube-api-access-x4hqk\") pod \"ironic-db-create-b7dmh\" (UID: \"62650dfe-cc8e-4ee2-8926-d9a80610d90c\") " pod="openstack/ironic-db-create-b7dmh" Feb 19 03:40:29.542643 master-0 kubenswrapper[33867]: I0219 03:40:29.542207 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-db-sync-hjrc5"] Feb 19 03:40:29.545390 master-0 kubenswrapper[33867]: I0219 03:40:29.540369 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-scripts\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.545390 master-0 kubenswrapper[33867]: I0219 03:40:29.544975 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhmgx\" (UniqueName: \"kubernetes.io/projected/98d74122-a24a-4d79-acd2-6071763c2d3e-kube-api-access-bhmgx\") pod \"ironic-12f5-account-create-update-ch74c\" (UID: \"98d74122-a24a-4d79-acd2-6071763c2d3e\") " pod="openstack/ironic-12f5-account-create-update-ch74c" Feb 19 03:40:29.547224 master-0 kubenswrapper[33867]: I0219 03:40:29.545276 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-db-sync-config-data\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.547224 master-0 kubenswrapper[33867]: I0219 03:40:29.546290 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-528bb\" (UniqueName: \"kubernetes.io/projected/4c64d242-8a65-449e-b014-dc5fc42878e2-kube-api-access-528bb\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: 
\"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.547224 master-0 kubenswrapper[33867]: I0219 03:40:29.546576 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98d74122-a24a-4d79-acd2-6071763c2d3e-operator-scripts\") pod \"ironic-12f5-account-create-update-ch74c\" (UID: \"98d74122-a24a-4d79-acd2-6071763c2d3e\") " pod="openstack/ironic-12f5-account-create-update-ch74c" Feb 19 03:40:29.547224 master-0 kubenswrapper[33867]: I0219 03:40:29.547002 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-combined-ca-bundle\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.554495 master-0 kubenswrapper[33867]: I0219 03:40:29.554094 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c64d242-8a65-449e-b014-dc5fc42878e2-etc-machine-id\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.554495 master-0 kubenswrapper[33867]: I0219 03:40:29.554409 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-config-data\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.555284 master-0 kubenswrapper[33867]: I0219 03:40:29.555209 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98d74122-a24a-4d79-acd2-6071763c2d3e-operator-scripts\") pod \"ironic-12f5-account-create-update-ch74c\" (UID: \"98d74122-a24a-4d79-acd2-6071763c2d3e\") " pod="openstack/ironic-12f5-account-create-update-ch74c" Feb 19 03:40:29.567557 master-0 kubenswrapper[33867]: I0219 03:40:29.563565 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-cwnd9"] Feb 19 03:40:29.567557 master-0 kubenswrapper[33867]: I0219 03:40:29.565650 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:40:29.572217 master-0 kubenswrapper[33867]: I0219 03:40:29.571434 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 03:40:29.572217 master-0 kubenswrapper[33867]: I0219 03:40:29.571807 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 03:40:29.578513 master-0 kubenswrapper[33867]: I0219 03:40:29.578444 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cwnd9"] Feb 19 03:40:29.581665 master-0 kubenswrapper[33867]: I0219 03:40:29.581601 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhmgx\" (UniqueName: \"kubernetes.io/projected/98d74122-a24a-4d79-acd2-6071763c2d3e-kube-api-access-bhmgx\") pod \"ironic-12f5-account-create-update-ch74c\" (UID: \"98d74122-a24a-4d79-acd2-6071763c2d3e\") " pod="openstack/ironic-12f5-account-create-update-ch74c" Feb 19 03:40:29.620569 master-0 kubenswrapper[33867]: I0219 03:40:29.620504 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2fmpd"] Feb 19 03:40:29.630177 master-0 kubenswrapper[33867]: I0219 03:40:29.629946 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.664415 master-0 kubenswrapper[33867]: I0219 03:40:29.661131 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-scripts\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.664415 master-0 kubenswrapper[33867]: I0219 03:40:29.661314 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-db-sync-config-data\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.664415 master-0 kubenswrapper[33867]: I0219 03:40:29.661345 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-528bb\" (UniqueName: \"kubernetes.io/projected/4c64d242-8a65-449e-b014-dc5fc42878e2-kube-api-access-528bb\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.664415 master-0 kubenswrapper[33867]: I0219 03:40:29.661393 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-combined-ca-bundle\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.664415 master-0 kubenswrapper[33867]: I0219 03:40:29.661437 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c64d242-8a65-449e-b014-dc5fc42878e2-etc-machine-id\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.664415 master-0 kubenswrapper[33867]: I0219 03:40:29.661472 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-j2n9b\" (UniqueName: \"kubernetes.io/projected/b067fa1c-719d-41db-a4be-d5d7d1125a67-kube-api-access-j2n9b\") pod \"neutron-db-sync-cwnd9\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:40:29.664415 master-0 kubenswrapper[33867]: I0219 03:40:29.661497 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-config-data\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.664415 master-0 kubenswrapper[33867]: I0219 03:40:29.661514 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-combined-ca-bundle\") pod \"neutron-db-sync-cwnd9\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:40:29.664415 master-0 kubenswrapper[33867]: I0219 03:40:29.661559 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-config\") pod \"neutron-db-sync-cwnd9\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:40:29.668030 master-0 kubenswrapper[33867]: I0219 03:40:29.667965 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-scripts\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.672224 master-0 kubenswrapper[33867]: I0219 03:40:29.672065 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-db-sync-config-data\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.677762 master-0 kubenswrapper[33867]: I0219 03:40:29.675623 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 03:40:29.677762 master-0 kubenswrapper[33867]: I0219 03:40:29.676347 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c64d242-8a65-449e-b014-dc5fc42878e2-etc-machine-id\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.683783 master-0 kubenswrapper[33867]: I0219 03:40:29.683430 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 03:40:29.683912 master-0 kubenswrapper[33867]: I0219 03:40:29.683869 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-combined-ca-bundle\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.687247 master-0 kubenswrapper[33867]: I0219 03:40:29.686850 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-config-data\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.696969 master-0 kubenswrapper[33867]: I0219 03:40:29.688380 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b4b48f6d5-qmbtd"] Feb 19 03:40:29.713374 master-0 kubenswrapper[33867]: I0219 03:40:29.712763 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-528bb\" (UniqueName: \"kubernetes.io/projected/4c64d242-8a65-449e-b014-dc5fc42878e2-kube-api-access-528bb\") pod \"cinder-054a4-db-sync-hjrc5\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.713374 master-0 kubenswrapper[33867]: I0219 03:40:29.712778 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2fmpd"] Feb 19 03:40:29.724600 master-0 kubenswrapper[33867]: I0219 03:40:29.724511 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-576bc499-6mdnt"] Feb 19 03:40:29.728499 master-0 kubenswrapper[33867]: I0219 03:40:29.727555 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:29.736941 master-0 kubenswrapper[33867]: I0219 03:40:29.735903 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-576bc499-6mdnt"] Feb 19 03:40:29.763324 master-0 kubenswrapper[33867]: I0219 03:40:29.763108 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8737f70a-6ee7-4124-a049-aefd62a7b446-logs\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.763324 master-0 kubenswrapper[33867]: I0219 03:40:29.763236 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2n9b\" (UniqueName: \"kubernetes.io/projected/b067fa1c-719d-41db-a4be-d5d7d1125a67-kube-api-access-j2n9b\") pod \"neutron-db-sync-cwnd9\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:40:29.763592 master-0 kubenswrapper[33867]: I0219 03:40:29.763367 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-combined-ca-bundle\") pod \"neutron-db-sync-cwnd9\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:40:29.763592 master-0 kubenswrapper[33867]: I0219 03:40:29.763395 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-config-data\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.763592 master-0 kubenswrapper[33867]: I0219 03:40:29.763457 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-config\") pod \"neutron-db-sync-cwnd9\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:40:29.763592 master-0 kubenswrapper[33867]: I0219 03:40:29.763487 33867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-scripts\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.763592 master-0 kubenswrapper[33867]: I0219 03:40:29.763550 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-combined-ca-bundle\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.763592 master-0 kubenswrapper[33867]: I0219 03:40:29.763588 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsqph\" (UniqueName: \"kubernetes.io/projected/8737f70a-6ee7-4124-a049-aefd62a7b446-kube-api-access-rsqph\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.768679 master-0 kubenswrapper[33867]: I0219 03:40:29.767860 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-config\") pod \"neutron-db-sync-cwnd9\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:40:29.768679 master-0 kubenswrapper[33867]: I0219 03:40:29.768186 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-combined-ca-bundle\") pod \"neutron-db-sync-cwnd9\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:40:29.818762 master-0 kubenswrapper[33867]: I0219 03:40:29.811186 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-b7dmh" Feb 19 03:40:29.831934 master-0 kubenswrapper[33867]: I0219 03:40:29.831860 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-12f5-account-create-update-ch74c" Feb 19 03:40:29.845123 master-0 kubenswrapper[33867]: I0219 03:40:29.845030 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2n9b\" (UniqueName: \"kubernetes.io/projected/b067fa1c-719d-41db-a4be-d5d7d1125a67-kube-api-access-j2n9b\") pod \"neutron-db-sync-cwnd9\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:40:29.846321 master-0 kubenswrapper[33867]: I0219 03:40:29.846197 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:40:29.865526 master-0 kubenswrapper[33867]: I0219 03:40:29.865453 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8737f70a-6ee7-4124-a049-aefd62a7b446-logs\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.865777 master-0 kubenswrapper[33867]: I0219 03:40:29.865545 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-svc\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:29.865777 master-0 kubenswrapper[33867]: I0219 03:40:29.865580 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-config-data\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.865777 master-0 kubenswrapper[33867]: I0219 03:40:29.865639 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-scripts\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.865777 master-0 kubenswrapper[33867]: I0219 03:40:29.865677 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-config\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:29.865777 master-0 kubenswrapper[33867]: I0219 03:40:29.865704 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-sb\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:29.865777 master-0 kubenswrapper[33867]: I0219 03:40:29.865730 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-swift-storage-0\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:29.865777 master-0 kubenswrapper[33867]: I0219 03:40:29.865755 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-combined-ca-bundle\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.865777 master-0 kubenswrapper[33867]: I0219 03:40:29.865784 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsqph\" (UniqueName: \"kubernetes.io/projected/8737f70a-6ee7-4124-a049-aefd62a7b446-kube-api-access-rsqph\") pod 
\"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.866141 master-0 kubenswrapper[33867]: I0219 03:40:29.865802 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wklx5\" (UniqueName: \"kubernetes.io/projected/d354f238-452a-4dd5-b466-5a88508156c7-kube-api-access-wklx5\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:29.866141 master-0 kubenswrapper[33867]: I0219 03:40:29.865859 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-nb\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:29.870451 master-0 kubenswrapper[33867]: I0219 03:40:29.867287 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8737f70a-6ee7-4124-a049-aefd62a7b446-logs\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.870451 master-0 kubenswrapper[33867]: I0219 03:40:29.870304 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-config-data\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.870734 master-0 kubenswrapper[33867]: I0219 03:40:29.870708 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-scripts\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.893168 master-0 kubenswrapper[33867]: I0219 03:40:29.893114 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-combined-ca-bundle\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.896792 master-0 kubenswrapper[33867]: I0219 03:40:29.896724 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsqph\" (UniqueName: \"kubernetes.io/projected/8737f70a-6ee7-4124-a049-aefd62a7b446-kube-api-access-rsqph\") pod \"placement-db-sync-2fmpd\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:29.903095 master-0 kubenswrapper[33867]: I0219 03:40:29.902980 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:40:30.006802 master-0 kubenswrapper[33867]: I0219 03:40:30.006466 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-swift-storage-0\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.006802 master-0 kubenswrapper[33867]: I0219 03:40:30.006655 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wklx5\" (UniqueName: \"kubernetes.io/projected/d354f238-452a-4dd5-b466-5a88508156c7-kube-api-access-wklx5\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.006900 master-0 kubenswrapper[33867]: I0219 03:40:30.006829 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-nb\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.007033 master-0 kubenswrapper[33867]: I0219 03:40:30.006932 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-svc\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.007404 master-0 kubenswrapper[33867]: I0219 03:40:30.007380 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-config\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.007452 master-0 kubenswrapper[33867]: I0219 03:40:30.007443 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-sb\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.009845 master-0 kubenswrapper[33867]: I0219 03:40:30.008293 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-nb\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.010453 master-0 kubenswrapper[33867]: I0219 03:40:30.010388 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-sb\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.010795 master-0 kubenswrapper[33867]: I0219 03:40:30.010770 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:30.011543 master-0 kubenswrapper[33867]: I0219 03:40:30.011518 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-swift-storage-0\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.012743 master-0 kubenswrapper[33867]: I0219 03:40:30.012717 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-svc\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.013286 master-0 kubenswrapper[33867]: I0219 03:40:30.013265 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-config\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.020056 master-0 kubenswrapper[33867]: I0219 03:40:30.019981 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b4b48f6d5-qmbtd"] Feb 19 03:40:30.038688 master-0 kubenswrapper[33867]: I0219 03:40:30.038021 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wklx5\" (UniqueName: \"kubernetes.io/projected/d354f238-452a-4dd5-b466-5a88508156c7-kube-api-access-wklx5\") pod \"dnsmasq-dns-576bc499-6mdnt\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.068505 master-0 kubenswrapper[33867]: I0219 03:40:30.060506 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:30.191381 master-0 kubenswrapper[33867]: I0219 03:40:30.191328 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rkkfp"] Feb 19 03:40:30.473858 master-0 kubenswrapper[33867]: I0219 03:40:30.472439 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-b7dmh"] Feb 19 03:40:30.639411 master-0 kubenswrapper[33867]: I0219 03:40:30.638653 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rkkfp" event={"ID":"7bca7858-e242-46b5-870c-a48c10feaa1d","Type":"ContainerStarted","Data":"2811ba8d7b784dff321e81e099d0f0c015a1afe47e38694c73bd521fa0ab2a51"} Feb 19 03:40:30.648834 master-0 kubenswrapper[33867]: I0219 03:40:30.648753 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-b7dmh" event={"ID":"62650dfe-cc8e-4ee2-8926-d9a80610d90c","Type":"ContainerStarted","Data":"fbdabae4a9214f7738d0b2f8bd3ea8ec4c39013407a3255a21a321258ad2d98d"} Feb 19 03:40:30.660028 master-0 kubenswrapper[33867]: I0219 03:40:30.659956 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" event={"ID":"4624c637-15a7-4f3f-9fb8-ce6093235893","Type":"ContainerStarted","Data":"a11f9ba27d1699bb6c0a04cd19ad959bd7ecfd91c01232e3ca891514a9ac2b6c"} Feb 19 03:40:30.731854 master-0 kubenswrapper[33867]: I0219 03:40:30.730947 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-db-sync-hjrc5"] Feb 19 03:40:30.758654 master-0 kubenswrapper[33867]: W0219 03:40:30.757682 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c64d242_8a65_449e_b014_dc5fc42878e2.slice/crio-1abb0e3eef88cd70538b191cbcea8ff4b95fa99b5c6c9d010d39c5117ecd3909 WatchSource:0}: Error finding container 1abb0e3eef88cd70538b191cbcea8ff4b95fa99b5c6c9d010d39c5117ecd3909: Status 404 returned error can't find the container with id 1abb0e3eef88cd70538b191cbcea8ff4b95fa99b5c6c9d010d39c5117ecd3909 Feb 19 03:40:31.002561 master-0 kubenswrapper[33867]: I0219 03:40:31.002497 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2fmpd"] Feb 19 03:40:31.100355 master-0 kubenswrapper[33867]: I0219 03:40:31.100112 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:40:31.107759 master-0 kubenswrapper[33867]: I0219 03:40:31.102699 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.128896 master-0 kubenswrapper[33867]: I0219 03:40:31.109494 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 03:40:31.128896 master-0 kubenswrapper[33867]: I0219 03:40:31.109760 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-fa7ca-default-external-config-data" Feb 19 03:40:31.128896 master-0 kubenswrapper[33867]: I0219 03:40:31.109899 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 03:40:31.232643 master-0 kubenswrapper[33867]: I0219 03:40:31.232569 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:40:31.245844 master-0 kubenswrapper[33867]: I0219 03:40:31.245781 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cwnd9"] Feb 19 03:40:31.258796 master-0 kubenswrapper[33867]: I0219 03:40:31.256448 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-combined-ca-bundle\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.258796 master-0 kubenswrapper[33867]: I0219 03:40:31.256556 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-scripts\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.258796 master-0 kubenswrapper[33867]: I0219 03:40:31.256588 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-httpd-run\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.258796 master-0 kubenswrapper[33867]: I0219 03:40:31.256615 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-logs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.258796 master-0 kubenswrapper[33867]: I0219 03:40:31.256657 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.258796 master-0 kubenswrapper[33867]: I0219 03:40:31.256680 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdfm6\" (UniqueName: \"kubernetes.io/projected/b6b047c5-e692-460a-89d5-9b247c5b2555-kube-api-access-zdfm6\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " 
pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.258796 master-0 kubenswrapper[33867]: I0219 03:40:31.256718 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-public-tls-certs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.258796 master-0 kubenswrapper[33867]: I0219 03:40:31.256745 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-config-data\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.280856 master-0 kubenswrapper[33867]: I0219 03:40:31.278874 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-12f5-account-create-update-ch74c"] Feb 19 03:40:31.350832 master-0 kubenswrapper[33867]: I0219 03:40:31.350522 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-576bc499-6mdnt"] Feb 19 03:40:31.363218 master-0 kubenswrapper[33867]: I0219 03:40:31.363141 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-scripts\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.363218 master-0 kubenswrapper[33867]: I0219 03:40:31.363220 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-httpd-run\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.363687 master-0 kubenswrapper[33867]: I0219 03:40:31.363406 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-logs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.363687 master-0 kubenswrapper[33867]: I0219 03:40:31.363510 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.363687 master-0 kubenswrapper[33867]: I0219 03:40:31.363550 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdfm6\" (UniqueName: \"kubernetes.io/projected/b6b047c5-e692-460a-89d5-9b247c5b2555-kube-api-access-zdfm6\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.363687 master-0 kubenswrapper[33867]: I0219 03:40:31.363618 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-public-tls-certs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.363995 master-0 kubenswrapper[33867]: I0219 03:40:31.363911 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-httpd-run\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.364121 master-0 kubenswrapper[33867]: I0219 03:40:31.364086 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-logs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.364181 master-0 kubenswrapper[33867]: I0219 03:40:31.363903 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-config-data\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.364281 master-0 kubenswrapper[33867]: I0219 03:40:31.364227 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-combined-ca-bundle\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.366834 master-0 kubenswrapper[33867]: I0219 03:40:31.366730 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 03:40:31.367119 master-0 kubenswrapper[33867]: I0219 03:40:31.366835 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e63bed68a8422647d47a275f434bf5fb098e771165527c16915b4f4dc977b2c9/globalmount\"" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.368332 master-0 kubenswrapper[33867]: I0219 03:40:31.368012 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-scripts\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.370083 master-0 kubenswrapper[33867]: I0219 03:40:31.369963 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-public-tls-certs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.376184 master-0 kubenswrapper[33867]: I0219 03:40:31.371803 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-combined-ca-bundle\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.376184 master-0 kubenswrapper[33867]: I0219 03:40:31.371884 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-config-data\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.389268 master-0 kubenswrapper[33867]: I0219 03:40:31.389169 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdfm6\" (UniqueName: \"kubernetes.io/projected/b6b047c5-e692-460a-89d5-9b247c5b2555-kube-api-access-zdfm6\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:31.699974 master-0 kubenswrapper[33867]: I0219 03:40:31.699592 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2fmpd" event={"ID":"8737f70a-6ee7-4124-a049-aefd62a7b446","Type":"ContainerStarted","Data":"ae321a931733c968982025375b5b5dac6a76e8f7450114ff482cd36d4775b051"} Feb 19 03:40:31.705338 master-0 kubenswrapper[33867]: I0219 03:40:31.705297 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-db-sync-hjrc5" event={"ID":"4c64d242-8a65-449e-b014-dc5fc42878e2","Type":"ContainerStarted","Data":"1abb0e3eef88cd70538b191cbcea8ff4b95fa99b5c6c9d010d39c5117ecd3909"} Feb 19 03:40:31.713465 master-0 kubenswrapper[33867]: I0219 03:40:31.711487 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-12f5-account-create-update-ch74c" 
event={"ID":"98d74122-a24a-4d79-acd2-6071763c2d3e","Type":"ContainerStarted","Data":"bdc0afb9bfa2ca5fb826283db2cd7262c208127f9350529c50ad559a12dbc648"} Feb 19 03:40:31.713465 master-0 kubenswrapper[33867]: I0219 03:40:31.711517 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-12f5-account-create-update-ch74c" event={"ID":"98d74122-a24a-4d79-acd2-6071763c2d3e","Type":"ContainerStarted","Data":"a08fcd7af16273ac4869b3e287a28bf9e044b415338aa5fa208d598dae409277"} Feb 19 03:40:31.717412 master-0 kubenswrapper[33867]: I0219 03:40:31.717337 33867 generic.go:334] "Generic (PLEG): container finished" podID="62650dfe-cc8e-4ee2-8926-d9a80610d90c" containerID="afc1fd4f48865a81f7399bac483d4183b5e64fcaa74d0591fa04a876304b9931" exitCode=0 Feb 19 03:40:31.717577 master-0 kubenswrapper[33867]: I0219 03:40:31.717434 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-b7dmh" event={"ID":"62650dfe-cc8e-4ee2-8926-d9a80610d90c","Type":"ContainerDied","Data":"afc1fd4f48865a81f7399bac483d4183b5e64fcaa74d0591fa04a876304b9931"} Feb 19 03:40:31.735930 master-0 kubenswrapper[33867]: I0219 03:40:31.734446 33867 generic.go:334] "Generic (PLEG): container finished" podID="4624c637-15a7-4f3f-9fb8-ce6093235893" containerID="07a604911dc3ee4497e6946da93baa20708f02cc429e31af2969c3dc55f41439" exitCode=0 Feb 19 03:40:31.735930 master-0 kubenswrapper[33867]: I0219 03:40:31.734604 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" event={"ID":"4624c637-15a7-4f3f-9fb8-ce6093235893","Type":"ContainerDied","Data":"07a604911dc3ee4497e6946da93baa20708f02cc429e31af2969c3dc55f41439"} Feb 19 03:40:31.745040 master-0 kubenswrapper[33867]: I0219 03:40:31.744938 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-576bc499-6mdnt" event={"ID":"d354f238-452a-4dd5-b466-5a88508156c7","Type":"ContainerStarted","Data":"242f64607bf3698acb086ba0ca2f896c1831ca4423f20d502093e4667c0c983d"} Feb 19 03:40:31.753318 master-0 kubenswrapper[33867]: I0219 03:40:31.749186 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-fa7ca-default-internal-api-0"] Feb 19 03:40:31.753318 master-0 kubenswrapper[33867]: I0219 03:40:31.752016 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.771883 master-0 kubenswrapper[33867]: I0219 03:40:31.764848 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-fa7ca-default-internal-config-data" Feb 19 03:40:31.773714 master-0 kubenswrapper[33867]: I0219 03:40:31.772445 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rkkfp" event={"ID":"7bca7858-e242-46b5-870c-a48c10feaa1d","Type":"ContainerStarted","Data":"96e63fb6a3a0517f7dc81e5e72756aa7a3d4b35a30f9008e95d266c5d42bc56f"} Feb 19 03:40:31.775060 master-0 kubenswrapper[33867]: I0219 03:40:31.775009 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 03:40:31.776396 master-0 kubenswrapper[33867]: I0219 03:40:31.776326 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cwnd9" event={"ID":"b067fa1c-719d-41db-a4be-d5d7d1125a67","Type":"ContainerStarted","Data":"594c5f165469392c88eb7980172d433721359d7c3dbbd427d70addd011d0c09f"} Feb 19 03:40:31.783469 master-0 kubenswrapper[33867]: I0219 03:40:31.779903 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fa7ca-default-internal-api-0"] Feb 19 03:40:31.783469 master-0 kubenswrapper[33867]: I0219 03:40:31.783218 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-12f5-account-create-update-ch74c" podStartSLOduration=2.78317 podStartE2EDuration="2.78317s" podCreationTimestamp="2026-02-19 03:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:40:31.764521452 +0000 UTC m=+1037.061192083" watchObservedRunningTime="2026-02-19 03:40:31.78317 +0000 UTC m=+1037.079840611" Feb 19 03:40:31.817224 master-0 kubenswrapper[33867]: I0219 03:40:31.815973 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:40:31.817224 master-0 kubenswrapper[33867]: E0219 03:40:31.817104 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-fa7ca-default-external-api-0" podUID="b6b047c5-e692-460a-89d5-9b247c5b2555" Feb 19 03:40:31.880219 master-0 kubenswrapper[33867]: I0219 03:40:31.880137 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-config-data\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.880219 master-0 kubenswrapper[33867]: I0219 03:40:31.880225 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn252\" (UniqueName: \"kubernetes.io/projected/8c70b7f1-846a-4be2-bdd1-9214e7e75866-kube-api-access-pn252\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.880587 master-0 kubenswrapper[33867]: I0219 03:40:31.880305 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-combined-ca-bundle\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.880587 master-0 kubenswrapper[33867]: I0219 03:40:31.880324 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-internal-tls-certs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.880587 master-0 kubenswrapper[33867]: I0219 03:40:31.880432 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-logs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.880587 master-0 kubenswrapper[33867]: I0219 03:40:31.880541 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-scripts\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.880587 master-0 kubenswrapper[33867]: I0219 03:40:31.880570 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.880825 master-0 kubenswrapper[33867]: I0219 03:40:31.880668 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-httpd-run\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.976360 master-0 kubenswrapper[33867]: I0219 03:40:31.973142 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-cwnd9" podStartSLOduration=2.973114388 podStartE2EDuration="2.973114388s" podCreationTimestamp="2026-02-19 03:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:40:31.938468497 +0000 UTC m=+1037.235139108" watchObservedRunningTime="2026-02-19 03:40:31.973114388 +0000 UTC m=+1037.269784999" Feb 19 03:40:31.982959 master-0 kubenswrapper[33867]: I0219 03:40:31.982830 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.983339 master-0 kubenswrapper[33867]: I0219 03:40:31.982967 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-httpd-run\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.983339 master-0 kubenswrapper[33867]: I0219 03:40:31.983036 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-config-data\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.983339 master-0 kubenswrapper[33867]: I0219 03:40:31.983064 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn252\" (UniqueName: \"kubernetes.io/projected/8c70b7f1-846a-4be2-bdd1-9214e7e75866-kube-api-access-pn252\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.983339 master-0 kubenswrapper[33867]: I0219 03:40:31.983085 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-combined-ca-bundle\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.983339 master-0 kubenswrapper[33867]: I0219 03:40:31.983106 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-internal-tls-certs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.983339 master-0 kubenswrapper[33867]: I0219 03:40:31.983169 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-logs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.983339 master-0 kubenswrapper[33867]: I0219 03:40:31.983202 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-scripts\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.988608 master-0 kubenswrapper[33867]: I0219 03:40:31.987414 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-httpd-run\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.988608 master-0 kubenswrapper[33867]: I0219 03:40:31.987903 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-scripts\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.988608 master-0 
kubenswrapper[33867]: I0219 03:40:31.988160 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-logs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.990978 master-0 kubenswrapper[33867]: I0219 03:40:31.990479 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 03:40:31.990978 master-0 kubenswrapper[33867]: I0219 03:40:31.990510 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f18d4c35e8710889152413040b4d09f48db19ab30f1052671a3cdb6b7bd3618f/globalmount\"" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.996322 master-0 kubenswrapper[33867]: I0219 03:40:31.996274 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-internal-tls-certs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.996744 master-0 kubenswrapper[33867]: I0219 03:40:31.996506 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-config-data\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.997188 master-0 kubenswrapper[33867]: I0219 03:40:31.997116 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-combined-ca-bundle\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:31.999679 master-0 kubenswrapper[33867]: I0219 03:40:31.999214 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rkkfp" podStartSLOduration=3.999200797 podStartE2EDuration="3.999200797s" podCreationTimestamp="2026-02-19 03:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:40:31.97706059 +0000 UTC m=+1037.273731201" watchObservedRunningTime="2026-02-19 03:40:31.999200797 +0000 UTC m=+1037.295871408" Feb 19 03:40:32.015041 master-0 kubenswrapper[33867]: I0219 03:40:32.014869 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn252\" (UniqueName: \"kubernetes.io/projected/8c70b7f1-846a-4be2-bdd1-9214e7e75866-kube-api-access-pn252\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:32.502732 master-0 kubenswrapper[33867]: I0219 03:40:32.502658 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:32.531639 master-0 kubenswrapper[33867]: E0219 03:40:32.531568 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98d74122_a24a_4d79_acd2_6071763c2d3e.slice/crio-conmon-bdc0afb9bfa2ca5fb826283db2cd7262c208127f9350529c50ad559a12dbc648.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:40:32.598367 master-0 kubenswrapper[33867]: I0219 03:40:32.597388 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkmxm\" (UniqueName: \"kubernetes.io/projected/4624c637-15a7-4f3f-9fb8-ce6093235893-kube-api-access-pkmxm\") pod \"4624c637-15a7-4f3f-9fb8-ce6093235893\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " Feb 19 03:40:32.598367 master-0 kubenswrapper[33867]: I0219 03:40:32.597658 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-svc\") pod \"4624c637-15a7-4f3f-9fb8-ce6093235893\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " Feb 19 03:40:32.598367 master-0 kubenswrapper[33867]: I0219 03:40:32.597722 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-nb\") pod \"4624c637-15a7-4f3f-9fb8-ce6093235893\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " Feb 19 03:40:32.598367 master-0 kubenswrapper[33867]: I0219 03:40:32.597841 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-config\") pod \"4624c637-15a7-4f3f-9fb8-ce6093235893\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " Feb 19 03:40:32.598367 master-0 kubenswrapper[33867]: I0219 03:40:32.597867 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-swift-storage-0\") pod \"4624c637-15a7-4f3f-9fb8-ce6093235893\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " Feb 19 03:40:32.598367 master-0 kubenswrapper[33867]: I0219 03:40:32.597889 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-sb\") pod \"4624c637-15a7-4f3f-9fb8-ce6093235893\" (UID: \"4624c637-15a7-4f3f-9fb8-ce6093235893\") " Feb 19 03:40:32.607282 master-0 kubenswrapper[33867]: I0219 03:40:32.602969 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4624c637-15a7-4f3f-9fb8-ce6093235893-kube-api-access-pkmxm" (OuterVolumeSpecName: "kube-api-access-pkmxm") pod "4624c637-15a7-4f3f-9fb8-ce6093235893" (UID: "4624c637-15a7-4f3f-9fb8-ce6093235893"). InnerVolumeSpecName "kube-api-access-pkmxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:32.638283 master-0 kubenswrapper[33867]: I0219 03:40:32.635125 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4624c637-15a7-4f3f-9fb8-ce6093235893" (UID: "4624c637-15a7-4f3f-9fb8-ce6093235893"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:32.638283 master-0 kubenswrapper[33867]: I0219 03:40:32.635693 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-config" (OuterVolumeSpecName: "config") pod "4624c637-15a7-4f3f-9fb8-ce6093235893" (UID: "4624c637-15a7-4f3f-9fb8-ce6093235893"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:32.646584 master-0 kubenswrapper[33867]: I0219 03:40:32.643422 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4624c637-15a7-4f3f-9fb8-ce6093235893" (UID: "4624c637-15a7-4f3f-9fb8-ce6093235893"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:32.646584 master-0 kubenswrapper[33867]: I0219 03:40:32.646319 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4624c637-15a7-4f3f-9fb8-ce6093235893" (UID: "4624c637-15a7-4f3f-9fb8-ce6093235893"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:32.654309 master-0 kubenswrapper[33867]: I0219 03:40:32.653683 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4624c637-15a7-4f3f-9fb8-ce6093235893" (UID: "4624c637-15a7-4f3f-9fb8-ce6093235893"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:32.703354 master-0 kubenswrapper[33867]: I0219 03:40:32.700928 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:32.703354 master-0 kubenswrapper[33867]: I0219 03:40:32.700987 33867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:32.703354 master-0 kubenswrapper[33867]: I0219 03:40:32.701002 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:32.703354 master-0 kubenswrapper[33867]: I0219 03:40:32.701021 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkmxm\" (UniqueName: \"kubernetes.io/projected/4624c637-15a7-4f3f-9fb8-ce6093235893-kube-api-access-pkmxm\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:32.703354 master-0 kubenswrapper[33867]: I0219 03:40:32.701033 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:32.703354 master-0 kubenswrapper[33867]: I0219 03:40:32.701044 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4624c637-15a7-4f3f-9fb8-ce6093235893-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:32.859393 master-0 kubenswrapper[33867]: I0219 03:40:32.858590 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cwnd9" event={"ID":"b067fa1c-719d-41db-a4be-d5d7d1125a67","Type":"ContainerStarted","Data":"b2ba44abc1386dc028a3c98d31fe9c8fe407e33d34bb426a05961ab500612f4d"} Feb 19 03:40:32.879316 master-0 kubenswrapper[33867]: I0219 03:40:32.878489 33867 generic.go:334] "Generic (PLEG): container finished" podID="98d74122-a24a-4d79-acd2-6071763c2d3e" containerID="bdc0afb9bfa2ca5fb826283db2cd7262c208127f9350529c50ad559a12dbc648" exitCode=0 Feb 19 03:40:32.879316 master-0 kubenswrapper[33867]: I0219 03:40:32.878668 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-12f5-account-create-update-ch74c" event={"ID":"98d74122-a24a-4d79-acd2-6071763c2d3e","Type":"ContainerDied","Data":"bdc0afb9bfa2ca5fb826283db2cd7262c208127f9350529c50ad559a12dbc648"} Feb 19 03:40:32.893318 master-0 kubenswrapper[33867]: I0219 03:40:32.892490 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" Feb 19 03:40:32.897317 master-0 kubenswrapper[33867]: I0219 03:40:32.894052 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b4b48f6d5-qmbtd" event={"ID":"4624c637-15a7-4f3f-9fb8-ce6093235893","Type":"ContainerDied","Data":"a11f9ba27d1699bb6c0a04cd19ad959bd7ecfd91c01232e3ca891514a9ac2b6c"} Feb 19 03:40:32.897317 master-0 kubenswrapper[33867]: I0219 03:40:32.894136 33867 scope.go:117] "RemoveContainer" containerID="07a604911dc3ee4497e6946da93baa20708f02cc429e31af2969c3dc55f41439" Feb 19 03:40:32.924356 master-0 kubenswrapper[33867]: I0219 03:40:32.924284 33867 generic.go:334] "Generic (PLEG): container finished" podID="d354f238-452a-4dd5-b466-5a88508156c7" containerID="d7ed6d68df400422f0df4b60b3c744cc562b817c410f3d5f73d3894b7b69f862" exitCode=0 Feb 19 03:40:32.924598 master-0 kubenswrapper[33867]: I0219 03:40:32.924442 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:32.926324 master-0 kubenswrapper[33867]: I0219 03:40:32.925509 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-576bc499-6mdnt" event={"ID":"d354f238-452a-4dd5-b466-5a88508156c7","Type":"ContainerDied","Data":"d7ed6d68df400422f0df4b60b3c744cc562b817c410f3d5f73d3894b7b69f862"} Feb 19 03:40:32.984326 master-0 kubenswrapper[33867]: I0219 03:40:32.979089 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:33.049317 master-0 kubenswrapper[33867]: I0219 03:40:33.039413 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:33.162719 master-0 kubenswrapper[33867]: I0219 03:40:33.162589 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b4b48f6d5-qmbtd"] Feb 19 03:40:33.202330 master-0 kubenswrapper[33867]: I0219 03:40:33.187647 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b4b48f6d5-qmbtd"] Feb 19 03:40:33.226074 master-0 kubenswrapper[33867]: I0219 03:40:33.225960 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-combined-ca-bundle\") pod \"b6b047c5-e692-460a-89d5-9b247c5b2555\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " Feb 19 03:40:33.226627 master-0 kubenswrapper[33867]: I0219 03:40:33.226100 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdfm6\" (UniqueName: \"kubernetes.io/projected/b6b047c5-e692-460a-89d5-9b247c5b2555-kube-api-access-zdfm6\") pod \"b6b047c5-e692-460a-89d5-9b247c5b2555\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " Feb 19 03:40:33.226627 master-0 kubenswrapper[33867]: I0219 03:40:33.226225 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-httpd-run\") pod \"b6b047c5-e692-460a-89d5-9b247c5b2555\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " Feb 19 03:40:33.226627 master-0 kubenswrapper[33867]: I0219 03:40:33.226331 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-config-data\") pod \"b6b047c5-e692-460a-89d5-9b247c5b2555\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " Feb 19 03:40:33.226627 master-0 kubenswrapper[33867]: I0219 03:40:33.226415 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-logs\") pod \"b6b047c5-e692-460a-89d5-9b247c5b2555\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " Feb 19 03:40:33.226627 master-0 kubenswrapper[33867]: I0219 03:40:33.226465 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-scripts\") pod \"b6b047c5-e692-460a-89d5-9b247c5b2555\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " Feb 19 03:40:33.226627 master-0 kubenswrapper[33867]: I0219 03:40:33.226509 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-public-tls-certs\") pod \"b6b047c5-e692-460a-89d5-9b247c5b2555\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " Feb 19 03:40:33.226821 master-0 kubenswrapper[33867]: I0219 03:40:33.226692 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"b6b047c5-e692-460a-89d5-9b247c5b2555\" (UID: \"b6b047c5-e692-460a-89d5-9b247c5b2555\") " Feb 19 03:40:33.246718 master-0 kubenswrapper[33867]: I0219 03:40:33.245969 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-httpd-run" 
(OuterVolumeSpecName: "httpd-run") pod "b6b047c5-e692-460a-89d5-9b247c5b2555" (UID: "b6b047c5-e692-460a-89d5-9b247c5b2555"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:40:33.246718 master-0 kubenswrapper[33867]: I0219 03:40:33.246596 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-logs" (OuterVolumeSpecName: "logs") pod "b6b047c5-e692-460a-89d5-9b247c5b2555" (UID: "b6b047c5-e692-460a-89d5-9b247c5b2555"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:40:33.248413 master-0 kubenswrapper[33867]: I0219 03:40:33.247645 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6b047c5-e692-460a-89d5-9b247c5b2555" (UID: "b6b047c5-e692-460a-89d5-9b247c5b2555"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:33.249526 master-0 kubenswrapper[33867]: I0219 03:40:33.249458 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b047c5-e692-460a-89d5-9b247c5b2555-kube-api-access-zdfm6" (OuterVolumeSpecName: "kube-api-access-zdfm6") pod "b6b047c5-e692-460a-89d5-9b247c5b2555" (UID: "b6b047c5-e692-460a-89d5-9b247c5b2555"). InnerVolumeSpecName "kube-api-access-zdfm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:33.257976 master-0 kubenswrapper[33867]: I0219 03:40:33.255598 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-scripts" (OuterVolumeSpecName: "scripts") pod "b6b047c5-e692-460a-89d5-9b247c5b2555" (UID: "b6b047c5-e692-460a-89d5-9b247c5b2555"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:33.257976 master-0 kubenswrapper[33867]: I0219 03:40:33.256381 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b6b047c5-e692-460a-89d5-9b247c5b2555" (UID: "b6b047c5-e692-460a-89d5-9b247c5b2555"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:33.257976 master-0 kubenswrapper[33867]: I0219 03:40:33.257396 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-config-data" (OuterVolumeSpecName: "config-data") pod "b6b047c5-e692-460a-89d5-9b247c5b2555" (UID: "b6b047c5-e692-460a-89d5-9b247c5b2555"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:33.331873 master-0 kubenswrapper[33867]: I0219 03:40:33.331778 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:33.331873 master-0 kubenswrapper[33867]: I0219 03:40:33.331861 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:33.331873 master-0 kubenswrapper[33867]: I0219 03:40:33.331876 33867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:33.331873 master-0 kubenswrapper[33867]: I0219 03:40:33.331921 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:33.331873 master-0 kubenswrapper[33867]: I0219 03:40:33.331938 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdfm6\" (UniqueName: \"kubernetes.io/projected/b6b047c5-e692-460a-89d5-9b247c5b2555-kube-api-access-zdfm6\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:33.331873 master-0 kubenswrapper[33867]: I0219 03:40:33.331950 33867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6b047c5-e692-460a-89d5-9b247c5b2555-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:33.331873 master-0 kubenswrapper[33867]: I0219 03:40:33.331961 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b047c5-e692-460a-89d5-9b247c5b2555-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:33.632200 master-0 kubenswrapper[33867]: I0219 03:40:33.632106 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-b7dmh" Feb 19 03:40:33.743674 master-0 kubenswrapper[33867]: I0219 03:40:33.743521 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4hqk\" (UniqueName: \"kubernetes.io/projected/62650dfe-cc8e-4ee2-8926-d9a80610d90c-kube-api-access-x4hqk\") pod \"62650dfe-cc8e-4ee2-8926-d9a80610d90c\" (UID: \"62650dfe-cc8e-4ee2-8926-d9a80610d90c\") " Feb 19 03:40:33.744209 master-0 kubenswrapper[33867]: I0219 03:40:33.743955 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62650dfe-cc8e-4ee2-8926-d9a80610d90c-operator-scripts\") pod \"62650dfe-cc8e-4ee2-8926-d9a80610d90c\" (UID: \"62650dfe-cc8e-4ee2-8926-d9a80610d90c\") " Feb 19 03:40:33.746933 master-0 kubenswrapper[33867]: I0219 03:40:33.744781 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62650dfe-cc8e-4ee2-8926-d9a80610d90c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62650dfe-cc8e-4ee2-8926-d9a80610d90c" (UID: "62650dfe-cc8e-4ee2-8926-d9a80610d90c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:33.747050 master-0 kubenswrapper[33867]: I0219 03:40:33.746977 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62650dfe-cc8e-4ee2-8926-d9a80610d90c-kube-api-access-x4hqk" (OuterVolumeSpecName: "kube-api-access-x4hqk") pod "62650dfe-cc8e-4ee2-8926-d9a80610d90c" (UID: "62650dfe-cc8e-4ee2-8926-d9a80610d90c"). InnerVolumeSpecName "kube-api-access-x4hqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:33.848460 master-0 kubenswrapper[33867]: I0219 03:40:33.848371 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4hqk\" (UniqueName: \"kubernetes.io/projected/62650dfe-cc8e-4ee2-8926-d9a80610d90c-kube-api-access-x4hqk\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:33.848460 master-0 kubenswrapper[33867]: I0219 03:40:33.848439 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62650dfe-cc8e-4ee2-8926-d9a80610d90c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:33.945702 master-0 kubenswrapper[33867]: I0219 03:40:33.945621 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-576bc499-6mdnt" event={"ID":"d354f238-452a-4dd5-b466-5a88508156c7","Type":"ContainerStarted","Data":"4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968"} Feb 19 03:40:33.945980 master-0 kubenswrapper[33867]: I0219 03:40:33.945837 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:33.951804 master-0 kubenswrapper[33867]: I0219 03:40:33.951736 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-b7dmh" event={"ID":"62650dfe-cc8e-4ee2-8926-d9a80610d90c","Type":"ContainerDied","Data":"fbdabae4a9214f7738d0b2f8bd3ea8ec4c39013407a3255a21a321258ad2d98d"} Feb 19 03:40:33.951865 master-0 kubenswrapper[33867]: I0219 03:40:33.951794 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbdabae4a9214f7738d0b2f8bd3ea8ec4c39013407a3255a21a321258ad2d98d" Feb 19 03:40:33.951865 master-0 kubenswrapper[33867]: I0219 03:40:33.951821 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-b7dmh" Feb 19 03:40:33.952037 master-0 kubenswrapper[33867]: I0219 03:40:33.951837 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:34.621660 master-0 kubenswrapper[33867]: I0219 03:40:34.621485 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140" (OuterVolumeSpecName: "glance") pod "b6b047c5-e692-460a-89d5-9b247c5b2555" (UID: "b6b047c5-e692-460a-89d5-9b247c5b2555"). InnerVolumeSpecName "pvc-9b4cd943-1f61-4b27-8790-991add37bfec". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 03:40:34.629763 master-0 kubenswrapper[33867]: I0219 03:40:34.629719 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:34.688848 master-0 kubenswrapper[33867]: I0219 03:40:34.688784 33867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") on node \"master-0\" " Feb 19 03:40:34.698355 master-0 kubenswrapper[33867]: I0219 03:40:34.698152 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-576bc499-6mdnt" podStartSLOduration=5.698130245 podStartE2EDuration="5.698130245s" podCreationTimestamp="2026-02-19 03:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:40:34.101691957 +0000 UTC m=+1039.398362588" watchObservedRunningTime="2026-02-19 03:40:34.698130245 +0000 UTC m=+1039.994800856" Feb 19 03:40:34.727609 master-0 kubenswrapper[33867]: I0219 03:40:34.727572 33867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 03:40:34.728081 master-0 kubenswrapper[33867]: I0219 03:40:34.728067 33867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9b4cd943-1f61-4b27-8790-991add37bfec" (UniqueName: "kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140") on node "master-0" Feb 19 03:40:34.791737 master-0 kubenswrapper[33867]: I0219 03:40:34.791591 33867 reconciler_common.go:293] "Volume detached for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:34.865460 master-0 kubenswrapper[33867]: I0219 03:40:34.865354 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:34.997180 master-0 kubenswrapper[33867]: I0219 03:40:34.997094 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4624c637-15a7-4f3f-9fb8-ce6093235893" path="/var/lib/kubelet/pods/4624c637-15a7-4f3f-9fb8-ce6093235893/volumes" Feb 19 03:40:34.997902 master-0 kubenswrapper[33867]: I0219 03:40:34.997873 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:40:35.018586 master-0 kubenswrapper[33867]: I0219 03:40:35.018026 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:40:35.031734 master-0 kubenswrapper[33867]: I0219 03:40:35.031635 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:40:35.033110 master-0 kubenswrapper[33867]: E0219 03:40:35.033066 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62650dfe-cc8e-4ee2-8926-d9a80610d90c" containerName="mariadb-database-create" Feb 19 03:40:35.033110 master-0 kubenswrapper[33867]: I0219 03:40:35.033094 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="62650dfe-cc8e-4ee2-8926-d9a80610d90c" containerName="mariadb-database-create" Feb 19 03:40:35.033251 master-0 kubenswrapper[33867]: E0219 03:40:35.033140 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4624c637-15a7-4f3f-9fb8-ce6093235893" containerName="init" Feb 19 03:40:35.033251 master-0 kubenswrapper[33867]: I0219 03:40:35.033150 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4624c637-15a7-4f3f-9fb8-ce6093235893" containerName="init" Feb 19 03:40:35.033551 master-0 kubenswrapper[33867]: I0219 03:40:35.033478 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4624c637-15a7-4f3f-9fb8-ce6093235893" containerName="init" Feb 19 03:40:35.033551 master-0 kubenswrapper[33867]: I0219 03:40:35.033512 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="62650dfe-cc8e-4ee2-8926-d9a80610d90c" containerName="mariadb-database-create" Feb 19 03:40:35.035212 master-0 kubenswrapper[33867]: I0219 03:40:35.035149 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.039431 master-0 kubenswrapper[33867]: I0219 03:40:35.039383 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-fa7ca-default-external-config-data" Feb 19 03:40:35.039671 master-0 kubenswrapper[33867]: I0219 03:40:35.039652 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 03:40:35.039897 master-0 kubenswrapper[33867]: I0219 03:40:35.039806 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:40:35.214506 master-0 kubenswrapper[33867]: I0219 03:40:35.214154 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-combined-ca-bundle\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.214506 master-0 kubenswrapper[33867]: I0219 03:40:35.214261 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc798\" (UniqueName: \"kubernetes.io/projected/b19a1327-29e6-4354-bf31-ce295f5d758f-kube-api-access-qc798\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.214506 master-0 kubenswrapper[33867]: I0219 03:40:35.214314 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-scripts\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.214506 master-0 kubenswrapper[33867]: I0219 03:40:35.214361 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.214506 master-0 kubenswrapper[33867]: I0219 03:40:35.214464 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-logs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.214506 master-0 kubenswrapper[33867]: I0219 03:40:35.214518 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-httpd-run\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.214915 master-0 kubenswrapper[33867]: I0219 03:40:35.214609 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-config-data\") pod 
\"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.214915 master-0 kubenswrapper[33867]: I0219 03:40:35.214659 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-public-tls-certs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.316657 master-0 kubenswrapper[33867]: I0219 03:40:35.316458 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-public-tls-certs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.316657 master-0 kubenswrapper[33867]: I0219 03:40:35.316596 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-combined-ca-bundle\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.316657 master-0 kubenswrapper[33867]: I0219 03:40:35.316649 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc798\" (UniqueName: \"kubernetes.io/projected/b19a1327-29e6-4354-bf31-ce295f5d758f-kube-api-access-qc798\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.317066 master-0 kubenswrapper[33867]: I0219 03:40:35.316682 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-scripts\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.317587 master-0 kubenswrapper[33867]: I0219 03:40:35.316741 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.317747 master-0 kubenswrapper[33867]: I0219 03:40:35.317709 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-logs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.317832 master-0 kubenswrapper[33867]: I0219 03:40:35.317765 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-httpd-run\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.317902 master-0 kubenswrapper[33867]: I0219 
03:40:35.317872 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-config-data\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.320144 master-0 kubenswrapper[33867]: I0219 03:40:35.320105 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-scripts\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.320476 master-0 kubenswrapper[33867]: I0219 03:40:35.320451 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-logs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.321413 master-0 kubenswrapper[33867]: I0219 03:40:35.321360 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-public-tls-certs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.321566 master-0 kubenswrapper[33867]: I0219 03:40:35.321450 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-httpd-run\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.322588 master-0 kubenswrapper[33867]: I0219 03:40:35.322550 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 03:40:35.322697 master-0 kubenswrapper[33867]: I0219 03:40:35.322590 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e63bed68a8422647d47a275f434bf5fb098e771165527c16915b4f4dc977b2c9/globalmount\"" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.327152 master-0 kubenswrapper[33867]: I0219 03:40:35.327095 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-config-data\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.328943 master-0 kubenswrapper[33867]: I0219 03:40:35.328748 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-combined-ca-bundle\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.338372 master-0 kubenswrapper[33867]: I0219 03:40:35.338323 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc798\" (UniqueName: \"kubernetes.io/projected/b19a1327-29e6-4354-bf31-ce295f5d758f-kube-api-access-qc798\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:35.993955 master-0 kubenswrapper[33867]: I0219 03:40:35.993856 33867 generic.go:334] "Generic (PLEG): container finished" podID="7bca7858-e242-46b5-870c-a48c10feaa1d" containerID="96e63fb6a3a0517f7dc81e5e72756aa7a3d4b35a30f9008e95d266c5d42bc56f" exitCode=0 Feb 19 03:40:35.993955 master-0 kubenswrapper[33867]: I0219 03:40:35.993932 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rkkfp" event={"ID":"7bca7858-e242-46b5-870c-a48c10feaa1d","Type":"ContainerDied","Data":"96e63fb6a3a0517f7dc81e5e72756aa7a3d4b35a30f9008e95d266c5d42bc56f"} Feb 19 03:40:36.641947 master-0 kubenswrapper[33867]: I0219 03:40:36.641713 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-12f5-account-create-update-ch74c" Feb 19 03:40:36.721152 master-0 kubenswrapper[33867]: I0219 03:40:36.721076 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:36.773788 master-0 kubenswrapper[33867]: I0219 03:40:36.773706 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhmgx\" (UniqueName: \"kubernetes.io/projected/98d74122-a24a-4d79-acd2-6071763c2d3e-kube-api-access-bhmgx\") pod \"98d74122-a24a-4d79-acd2-6071763c2d3e\" (UID: \"98d74122-a24a-4d79-acd2-6071763c2d3e\") " Feb 19 03:40:36.774059 master-0 kubenswrapper[33867]: I0219 03:40:36.774029 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98d74122-a24a-4d79-acd2-6071763c2d3e-operator-scripts\") pod \"98d74122-a24a-4d79-acd2-6071763c2d3e\" (UID: \"98d74122-a24a-4d79-acd2-6071763c2d3e\") " Feb 19 03:40:36.774534 master-0 kubenswrapper[33867]: I0219 03:40:36.774437 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d74122-a24a-4d79-acd2-6071763c2d3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98d74122-a24a-4d79-acd2-6071763c2d3e" (UID: "98d74122-a24a-4d79-acd2-6071763c2d3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:36.777043 master-0 kubenswrapper[33867]: I0219 03:40:36.775851 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98d74122-a24a-4d79-acd2-6071763c2d3e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:36.779609 master-0 kubenswrapper[33867]: I0219 03:40:36.778913 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d74122-a24a-4d79-acd2-6071763c2d3e-kube-api-access-bhmgx" (OuterVolumeSpecName: "kube-api-access-bhmgx") pod "98d74122-a24a-4d79-acd2-6071763c2d3e" (UID: "98d74122-a24a-4d79-acd2-6071763c2d3e"). InnerVolumeSpecName "kube-api-access-bhmgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:36.868336 master-0 kubenswrapper[33867]: I0219 03:40:36.865858 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:36.877829 master-0 kubenswrapper[33867]: I0219 03:40:36.877773 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhmgx\" (UniqueName: \"kubernetes.io/projected/98d74122-a24a-4d79-acd2-6071763c2d3e-kube-api-access-bhmgx\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:36.973374 master-0 kubenswrapper[33867]: W0219 03:40:36.971700 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c70b7f1_846a_4be2_bdd1_9214e7e75866.slice/crio-0e19237dfb9bba5851a65e1becb3c2f9f1ef4af461b0e7c5ef95dbe0c3219e36 WatchSource:0}: Error finding container 0e19237dfb9bba5851a65e1becb3c2f9f1ef4af461b0e7c5ef95dbe0c3219e36: Status 404 returned error can't find the container with id 0e19237dfb9bba5851a65e1becb3c2f9f1ef4af461b0e7c5ef95dbe0c3219e36 Feb 19 03:40:36.979420 master-0 kubenswrapper[33867]: I0219 03:40:36.978886 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b047c5-e692-460a-89d5-9b247c5b2555" path="/var/lib/kubelet/pods/b6b047c5-e692-460a-89d5-9b247c5b2555/volumes" Feb 19 03:40:36.980253 master-0 kubenswrapper[33867]: I0219 03:40:36.979798 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fa7ca-default-internal-api-0"] Feb 19 03:40:37.027234 master-0 kubenswrapper[33867]: I0219 03:40:37.027079 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-internal-api-0" event={"ID":"8c70b7f1-846a-4be2-bdd1-9214e7e75866","Type":"ContainerStarted","Data":"0e19237dfb9bba5851a65e1becb3c2f9f1ef4af461b0e7c5ef95dbe0c3219e36"} Feb 19 03:40:37.031710 master-0 kubenswrapper[33867]: I0219 03:40:37.031575 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2fmpd" event={"ID":"8737f70a-6ee7-4124-a049-aefd62a7b446","Type":"ContainerStarted","Data":"26d5e9d0505e933d1ecf14b6e568c00321787d92e14d5bb4510ed17cb6c57a1e"} Feb 19 03:40:37.035748 master-0 kubenswrapper[33867]: I0219 03:40:37.034571 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-12f5-account-create-update-ch74c" event={"ID":"98d74122-a24a-4d79-acd2-6071763c2d3e","Type":"ContainerDied","Data":"a08fcd7af16273ac4869b3e287a28bf9e044b415338aa5fa208d598dae409277"} Feb 19 03:40:37.035748 master-0 kubenswrapper[33867]: I0219 03:40:37.034618 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a08fcd7af16273ac4869b3e287a28bf9e044b415338aa5fa208d598dae409277" Feb 19 03:40:37.035748 master-0 kubenswrapper[33867]: I0219 03:40:37.035275 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-12f5-account-create-update-ch74c" Feb 19 03:40:37.084487 master-0 kubenswrapper[33867]: I0219 03:40:37.083602 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2fmpd" podStartSLOduration=2.681343447 podStartE2EDuration="8.083577867s" podCreationTimestamp="2026-02-19 03:40:29 +0000 UTC" firstStartedPulling="2026-02-19 03:40:31.011926133 +0000 UTC m=+1036.308596744" lastFinishedPulling="2026-02-19 03:40:36.414160553 +0000 UTC m=+1041.710831164" observedRunningTime="2026-02-19 03:40:37.058699002 +0000 UTC m=+1042.355369613" watchObservedRunningTime="2026-02-19 03:40:37.083577867 +0000 UTC m=+1042.380248478" Feb 19 03:40:37.980384 master-0 kubenswrapper[33867]: W0219 03:40:37.980328 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb19a1327_29e6_4354_bf31_ce295f5d758f.slice/crio-43682dd16568077d11bf64a1f627717bf368aad3364940e20a5fa43ac8a3d580 WatchSource:0}: Error finding container 43682dd16568077d11bf64a1f627717bf368aad3364940e20a5fa43ac8a3d580: Status 404 returned error can't find the container with id 43682dd16568077d11bf64a1f627717bf368aad3364940e20a5fa43ac8a3d580 Feb 19 03:40:38.004704 master-0 kubenswrapper[33867]: I0219 03:40:38.004560 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:40:38.056484 master-0 kubenswrapper[33867]: I0219 03:40:38.056416 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-external-api-0" event={"ID":"b19a1327-29e6-4354-bf31-ce295f5d758f","Type":"ContainerStarted","Data":"43682dd16568077d11bf64a1f627717bf368aad3364940e20a5fa43ac8a3d580"} Feb 19 03:40:38.061826 master-0 kubenswrapper[33867]: I0219 03:40:38.061770 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rkkfp" event={"ID":"7bca7858-e242-46b5-870c-a48c10feaa1d","Type":"ContainerDied","Data":"2811ba8d7b784dff321e81e099d0f0c015a1afe47e38694c73bd521fa0ab2a51"} Feb 19 03:40:38.061826 master-0 kubenswrapper[33867]: I0219 03:40:38.061820 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2811ba8d7b784dff321e81e099d0f0c015a1afe47e38694c73bd521fa0ab2a51" Feb 19 03:40:38.062105 master-0 kubenswrapper[33867]: I0219 03:40:38.062066 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:38.228785 master-0 kubenswrapper[33867]: I0219 03:40:38.228724 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-credential-keys\") pod \"7bca7858-e242-46b5-870c-a48c10feaa1d\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " Feb 19 03:40:38.229032 master-0 kubenswrapper[33867]: I0219 03:40:38.228822 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-fernet-keys\") pod \"7bca7858-e242-46b5-870c-a48c10feaa1d\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " Feb 19 03:40:38.229103 master-0 kubenswrapper[33867]: I0219 03:40:38.229062 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-combined-ca-bundle\") pod \"7bca7858-e242-46b5-870c-a48c10feaa1d\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " Feb 19 03:40:38.229212 master-0 kubenswrapper[33867]: I0219 03:40:38.229185 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kpxv\" (UniqueName: \"kubernetes.io/projected/7bca7858-e242-46b5-870c-a48c10feaa1d-kube-api-access-2kpxv\") pod \"7bca7858-e242-46b5-870c-a48c10feaa1d\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " Feb 19 03:40:38.229292 master-0 kubenswrapper[33867]: I0219 03:40:38.229273 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-scripts\") pod \"7bca7858-e242-46b5-870c-a48c10feaa1d\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " Feb 19 03:40:38.229350 master-0 kubenswrapper[33867]: I0219 03:40:38.229329 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-config-data\") pod \"7bca7858-e242-46b5-870c-a48c10feaa1d\" (UID: \"7bca7858-e242-46b5-870c-a48c10feaa1d\") " Feb 19 03:40:38.232298 master-0 kubenswrapper[33867]: I0219 03:40:38.232244 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7bca7858-e242-46b5-870c-a48c10feaa1d" (UID: "7bca7858-e242-46b5-870c-a48c10feaa1d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:38.234248 master-0 kubenswrapper[33867]: I0219 03:40:38.234190 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bca7858-e242-46b5-870c-a48c10feaa1d-kube-api-access-2kpxv" (OuterVolumeSpecName: "kube-api-access-2kpxv") pod "7bca7858-e242-46b5-870c-a48c10feaa1d" (UID: "7bca7858-e242-46b5-870c-a48c10feaa1d"). InnerVolumeSpecName "kube-api-access-2kpxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:38.234604 master-0 kubenswrapper[33867]: I0219 03:40:38.234573 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-scripts" (OuterVolumeSpecName: "scripts") pod "7bca7858-e242-46b5-870c-a48c10feaa1d" (UID: "7bca7858-e242-46b5-870c-a48c10feaa1d"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:38.236110 master-0 kubenswrapper[33867]: I0219 03:40:38.236015 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7bca7858-e242-46b5-870c-a48c10feaa1d" (UID: "7bca7858-e242-46b5-870c-a48c10feaa1d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:38.273817 master-0 kubenswrapper[33867]: I0219 03:40:38.273710 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-config-data" (OuterVolumeSpecName: "config-data") pod "7bca7858-e242-46b5-870c-a48c10feaa1d" (UID: "7bca7858-e242-46b5-870c-a48c10feaa1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:38.289413 master-0 kubenswrapper[33867]: I0219 03:40:38.289325 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bca7858-e242-46b5-870c-a48c10feaa1d" (UID: "7bca7858-e242-46b5-870c-a48c10feaa1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:38.331655 master-0 kubenswrapper[33867]: I0219 03:40:38.331592 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:38.331655 master-0 kubenswrapper[33867]: I0219 03:40:38.331644 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kpxv\" (UniqueName: \"kubernetes.io/projected/7bca7858-e242-46b5-870c-a48c10feaa1d-kube-api-access-2kpxv\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:38.331655 master-0 kubenswrapper[33867]: I0219 03:40:38.331662 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:38.331953 master-0 kubenswrapper[33867]: I0219 03:40:38.331674 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:38.331953 master-0 kubenswrapper[33867]: I0219 03:40:38.331685 33867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:38.331953 master-0 kubenswrapper[33867]: I0219 03:40:38.331696 33867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bca7858-e242-46b5-870c-a48c10feaa1d-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:39.082943 master-0 kubenswrapper[33867]: I0219 03:40:39.082835 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-internal-api-0" event={"ID":"8c70b7f1-846a-4be2-bdd1-9214e7e75866","Type":"ContainerStarted","Data":"98edb312abf6d88201dd07ab17b30f07f7783fb53186b8f810ba90ae532fdae1"} Feb 19 03:40:39.082943 master-0 kubenswrapper[33867]: I0219 03:40:39.082932 33867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-internal-api-0" event={"ID":"8c70b7f1-846a-4be2-bdd1-9214e7e75866","Type":"ContainerStarted","Data":"2563a7263a151820b358208de27903388222556115ee5cc370d1acb4f022dc27"} Feb 19 03:40:39.084772 master-0 kubenswrapper[33867]: I0219 03:40:39.084726 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rkkfp" Feb 19 03:40:39.085078 master-0 kubenswrapper[33867]: I0219 03:40:39.084971 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-external-api-0" event={"ID":"b19a1327-29e6-4354-bf31-ce295f5d758f","Type":"ContainerStarted","Data":"1eabca20093373df83e4b5f361eef75ee9bfee8f6d5428d367dd26bfe64d8506"} Feb 19 03:40:39.124211 master-0 kubenswrapper[33867]: I0219 03:40:39.123530 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-fa7ca-default-internal-api-0" podStartSLOduration=8.123511626 podStartE2EDuration="8.123511626s" podCreationTimestamp="2026-02-19 03:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:40:39.120031858 +0000 UTC m=+1044.416702489" watchObservedRunningTime="2026-02-19 03:40:39.123511626 +0000 UTC m=+1044.420182237" Feb 19 03:40:39.303746 master-0 kubenswrapper[33867]: I0219 03:40:39.303575 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rkkfp"] Feb 19 03:40:39.312968 master-0 kubenswrapper[33867]: I0219 03:40:39.312860 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rkkfp"] Feb 19 03:40:39.407716 master-0 kubenswrapper[33867]: I0219 03:40:39.407108 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-79nl9"] Feb 19 03:40:39.407997 master-0 kubenswrapper[33867]: E0219 03:40:39.407786 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bca7858-e242-46b5-870c-a48c10feaa1d" containerName="keystone-bootstrap" Feb 19 03:40:39.407997 master-0 kubenswrapper[33867]: I0219 03:40:39.407809 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bca7858-e242-46b5-870c-a48c10feaa1d" containerName="keystone-bootstrap" Feb 19 03:40:39.407997 master-0 kubenswrapper[33867]: E0219 03:40:39.407849 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d74122-a24a-4d79-acd2-6071763c2d3e" containerName="mariadb-account-create-update" Feb 19 03:40:39.407997 master-0 kubenswrapper[33867]: I0219 03:40:39.407858 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d74122-a24a-4d79-acd2-6071763c2d3e" containerName="mariadb-account-create-update" Feb 19 03:40:39.408198 master-0 kubenswrapper[33867]: I0219 03:40:39.408171 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d74122-a24a-4d79-acd2-6071763c2d3e" containerName="mariadb-account-create-update" Feb 19 03:40:39.408260 master-0 kubenswrapper[33867]: I0219 03:40:39.408234 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bca7858-e242-46b5-870c-a48c10feaa1d" containerName="keystone-bootstrap" Feb 19 03:40:39.409374 master-0 kubenswrapper[33867]: I0219 03:40:39.409336 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.413843 master-0 kubenswrapper[33867]: I0219 03:40:39.413786 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 03:40:39.414116 master-0 kubenswrapper[33867]: I0219 03:40:39.414103 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 03:40:39.414666 master-0 kubenswrapper[33867]: I0219 03:40:39.414289 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 03:40:39.414666 master-0 kubenswrapper[33867]: I0219 03:40:39.414459 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 03:40:39.421988 master-0 kubenswrapper[33867]: I0219 03:40:39.421904 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-79nl9"] Feb 19 03:40:39.493205 master-0 kubenswrapper[33867]: I0219 03:40:39.493133 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-lr9n7"] Feb 19 03:40:39.504359 master-0 kubenswrapper[33867]: I0219 03:40:39.504288 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.508824 master-0 kubenswrapper[33867]: I0219 03:40:39.508769 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 19 03:40:39.508942 master-0 kubenswrapper[33867]: I0219 03:40:39.508859 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Feb 19 03:40:39.510921 master-0 kubenswrapper[33867]: I0219 03:40:39.510855 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-lr9n7"] Feb 19 03:40:39.569033 master-0 kubenswrapper[33867]: I0219 03:40:39.568853 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-credential-keys\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.569033 master-0 kubenswrapper[33867]: I0219 03:40:39.568939 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-config-data\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.569393 master-0 kubenswrapper[33867]: I0219 03:40:39.569329 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-combined-ca-bundle\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.569615 master-0 kubenswrapper[33867]: I0219 03:40:39.569439 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-fernet-keys\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.569868 master-0 kubenswrapper[33867]: I0219 03:40:39.569829 33867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-scripts\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.570151 master-0 kubenswrapper[33867]: I0219 03:40:39.570088 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgrdx\" (UniqueName: \"kubernetes.io/projected/5cb720f5-9fcb-4763-b481-5feb7cc0d395-kube-api-access-pgrdx\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.672401 master-0 kubenswrapper[33867]: I0219 03:40:39.672317 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-combined-ca-bundle\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.672401 master-0 kubenswrapper[33867]: I0219 03:40:39.672391 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-fernet-keys\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.672847 master-0 kubenswrapper[33867]: I0219 03:40:39.672432 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.672847 master-0 kubenswrapper[33867]: I0219 03:40:39.672451 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-combined-ca-bundle\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.672847 master-0 kubenswrapper[33867]: I0219 03:40:39.672498 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-scripts\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.672847 master-0 kubenswrapper[33867]: I0219 03:40:39.672537 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data-merged\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.672847 master-0 kubenswrapper[33867]: I0219 03:40:39.672563 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-scripts\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.672847 master-0 
kubenswrapper[33867]: I0219 03:40:39.672610 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgrdx\" (UniqueName: \"kubernetes.io/projected/5cb720f5-9fcb-4763-b481-5feb7cc0d395-kube-api-access-pgrdx\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.672847 master-0 kubenswrapper[33867]: I0219 03:40:39.672641 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ts8z\" (UniqueName: \"kubernetes.io/projected/52ede5f4-a9ae-46ab-a72c-6575bb04274e-kube-api-access-2ts8z\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.672847 master-0 kubenswrapper[33867]: I0219 03:40:39.672664 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-credential-keys\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.672847 master-0 kubenswrapper[33867]: I0219 03:40:39.672701 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-config-data\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.672847 master-0 kubenswrapper[33867]: I0219 03:40:39.672736 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/52ede5f4-a9ae-46ab-a72c-6575bb04274e-etc-podinfo\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.683192 master-0 kubenswrapper[33867]: I0219 03:40:39.683125 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-fernet-keys\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.683364 master-0 kubenswrapper[33867]: I0219 03:40:39.683199 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-config-data\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.683364 master-0 kubenswrapper[33867]: I0219 03:40:39.683297 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-combined-ca-bundle\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.683479 master-0 kubenswrapper[33867]: I0219 03:40:39.683369 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-scripts\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.689072 master-0 kubenswrapper[33867]: 
I0219 03:40:39.687837 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-credential-keys\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.692388 master-0 kubenswrapper[33867]: I0219 03:40:39.692348 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgrdx\" (UniqueName: \"kubernetes.io/projected/5cb720f5-9fcb-4763-b481-5feb7cc0d395-kube-api-access-pgrdx\") pod \"keystone-bootstrap-79nl9\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.741810 master-0 kubenswrapper[33867]: I0219 03:40:39.741727 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:39.774662 master-0 kubenswrapper[33867]: I0219 03:40:39.774582 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/52ede5f4-a9ae-46ab-a72c-6575bb04274e-etc-podinfo\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.774915 master-0 kubenswrapper[33867]: I0219 03:40:39.774729 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.774915 master-0 kubenswrapper[33867]: I0219 03:40:39.774771 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-combined-ca-bundle\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.774988 master-0 kubenswrapper[33867]: I0219 03:40:39.774923 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-scripts\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.775292 master-0 kubenswrapper[33867]: I0219 03:40:39.775227 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data-merged\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.775544 master-0 kubenswrapper[33867]: I0219 03:40:39.775514 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ts8z\" (UniqueName: \"kubernetes.io/projected/52ede5f4-a9ae-46ab-a72c-6575bb04274e-kube-api-access-2ts8z\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.778901 master-0 kubenswrapper[33867]: I0219 03:40:39.778852 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/52ede5f4-a9ae-46ab-a72c-6575bb04274e-etc-podinfo\") pod \"ironic-db-sync-lr9n7\" (UID: 
\"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.779028 master-0 kubenswrapper[33867]: I0219 03:40:39.778917 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.779182 master-0 kubenswrapper[33867]: I0219 03:40:39.779154 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data-merged\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.779472 master-0 kubenswrapper[33867]: I0219 03:40:39.779420 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-combined-ca-bundle\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.780325 master-0 kubenswrapper[33867]: I0219 03:40:39.780289 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-scripts\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.793685 master-0 kubenswrapper[33867]: I0219 03:40:39.793646 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ts8z\" (UniqueName: \"kubernetes.io/projected/52ede5f4-a9ae-46ab-a72c-6575bb04274e-kube-api-access-2ts8z\") pod \"ironic-db-sync-lr9n7\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:39.851742 master-0 kubenswrapper[33867]: I0219 03:40:39.851182 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:40:40.064386 master-0 kubenswrapper[33867]: I0219 03:40:40.064322 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:40:40.116407 master-0 kubenswrapper[33867]: I0219 03:40:40.116146 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-external-api-0" event={"ID":"b19a1327-29e6-4354-bf31-ce295f5d758f","Type":"ContainerStarted","Data":"82132e2ca958fb96f24d639e7aeb7d9ac14df2a09348184346c53afe197cadad"} Feb 19 03:40:40.155880 master-0 kubenswrapper[33867]: I0219 03:40:40.154685 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9bb676bc9-rr48p"] Feb 19 03:40:40.155880 master-0 kubenswrapper[33867]: I0219 03:40:40.154954 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" podUID="4b16754f-37e1-41d0-842a-05b2360ea3f9" containerName="dnsmasq-dns" containerID="cri-o://30be8ded34fe08ac229762a1d55e716fcd25b02275e2331e3f6a9f4e5494377c" gracePeriod=10 Feb 19 03:40:40.211385 master-0 kubenswrapper[33867]: I0219 03:40:40.204914 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-fa7ca-default-external-api-0" podStartSLOduration=6.204889505 podStartE2EDuration="6.204889505s" podCreationTimestamp="2026-02-19 03:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:40:40.171125789 +0000 UTC m=+1045.467796400" watchObservedRunningTime="2026-02-19 03:40:40.204889505 +0000 UTC m=+1045.501560116" Feb 19 03:40:40.253409 master-0 kubenswrapper[33867]: I0219 03:40:40.252818 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-79nl9"] Feb 19 03:40:40.298752 master-0 kubenswrapper[33867]: I0219 03:40:40.298435 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" podUID="4b16754f-37e1-41d0-842a-05b2360ea3f9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.204:5353: connect: connection refused" Feb 19 03:40:40.973469 master-0 kubenswrapper[33867]: I0219 03:40:40.973226 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bca7858-e242-46b5-870c-a48c10feaa1d" path="/var/lib/kubelet/pods/7bca7858-e242-46b5-870c-a48c10feaa1d/volumes" Feb 19 03:40:41.131698 master-0 kubenswrapper[33867]: I0219 03:40:41.131611 33867 generic.go:334] "Generic (PLEG): container finished" podID="8737f70a-6ee7-4124-a049-aefd62a7b446" containerID="26d5e9d0505e933d1ecf14b6e568c00321787d92e14d5bb4510ed17cb6c57a1e" exitCode=0 Feb 19 03:40:41.132798 master-0 kubenswrapper[33867]: I0219 03:40:41.131711 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2fmpd" event={"ID":"8737f70a-6ee7-4124-a049-aefd62a7b446","Type":"ContainerDied","Data":"26d5e9d0505e933d1ecf14b6e568c00321787d92e14d5bb4510ed17cb6c57a1e"} Feb 19 03:40:41.134229 master-0 kubenswrapper[33867]: I0219 03:40:41.134180 33867 generic.go:334] "Generic (PLEG): container finished" podID="4b16754f-37e1-41d0-842a-05b2360ea3f9" containerID="30be8ded34fe08ac229762a1d55e716fcd25b02275e2331e3f6a9f4e5494377c" exitCode=0 Feb 19 03:40:41.134436 master-0 kubenswrapper[33867]: I0219 03:40:41.134289 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" 
event={"ID":"4b16754f-37e1-41d0-842a-05b2360ea3f9","Type":"ContainerDied","Data":"30be8ded34fe08ac229762a1d55e716fcd25b02275e2331e3f6a9f4e5494377c"} Feb 19 03:40:44.866627 master-0 kubenswrapper[33867]: I0219 03:40:44.866560 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:44.866627 master-0 kubenswrapper[33867]: I0219 03:40:44.866635 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:44.910120 master-0 kubenswrapper[33867]: I0219 03:40:44.909460 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:44.922897 master-0 kubenswrapper[33867]: I0219 03:40:44.922488 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:45.193103 master-0 kubenswrapper[33867]: I0219 03:40:45.192875 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:45.193103 master-0 kubenswrapper[33867]: I0219 03:40:45.192952 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:45.289814 master-0 kubenswrapper[33867]: I0219 03:40:45.289723 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" podUID="4b16754f-37e1-41d0-842a-05b2360ea3f9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.204:5353: connect: connection refused" Feb 19 03:40:46.867646 master-0 kubenswrapper[33867]: I0219 03:40:46.867442 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:46.867646 master-0 kubenswrapper[33867]: I0219 03:40:46.867522 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:46.916710 master-0 kubenswrapper[33867]: I0219 03:40:46.916628 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:46.924416 master-0 kubenswrapper[33867]: I0219 03:40:46.924350 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:47.219563 master-0 kubenswrapper[33867]: I0219 03:40:47.219486 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:47.219563 master-0 kubenswrapper[33867]: I0219 03:40:47.219560 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:47.290396 master-0 kubenswrapper[33867]: I0219 03:40:47.288894 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:47.290396 master-0 kubenswrapper[33867]: I0219 03:40:47.289036 33867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:40:47.456305 master-0 kubenswrapper[33867]: I0219 03:40:47.455886 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:40:49.515696 master-0 kubenswrapper[33867]: I0219 03:40:49.515604 
33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:49.516382 master-0 kubenswrapper[33867]: I0219 03:40:49.515819 33867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:40:49.519274 master-0 kubenswrapper[33867]: I0219 03:40:49.519217 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:40:50.011996 master-0 kubenswrapper[33867]: W0219 03:40:50.011925 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cb720f5_9fcb_4763_b481_5feb7cc0d395.slice/crio-85f11375a1e0d034f7e0964231b61acff37340cb05e24db06a8d9d90f9174375 WatchSource:0}: Error finding container 85f11375a1e0d034f7e0964231b61acff37340cb05e24db06a8d9d90f9174375: Status 404 returned error can't find the container with id 85f11375a1e0d034f7e0964231b61acff37340cb05e24db06a8d9d90f9174375 Feb 19 03:40:50.277598 master-0 kubenswrapper[33867]: I0219 03:40:50.277356 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" event={"ID":"4b16754f-37e1-41d0-842a-05b2360ea3f9","Type":"ContainerDied","Data":"c0765fddd767e3f56f4825b2f95a6a7a4d9a76a7a3894cb6f5a6c355749c0a0c"} Feb 19 03:40:50.277598 master-0 kubenswrapper[33867]: I0219 03:40:50.277444 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0765fddd767e3f56f4825b2f95a6a7a4d9a76a7a3894cb6f5a6c355749c0a0c" Feb 19 03:40:50.283095 master-0 kubenswrapper[33867]: I0219 03:40:50.283048 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-79nl9" event={"ID":"5cb720f5-9fcb-4763-b481-5feb7cc0d395","Type":"ContainerStarted","Data":"85f11375a1e0d034f7e0964231b61acff37340cb05e24db06a8d9d90f9174375"} Feb 19 03:40:50.285923 master-0 kubenswrapper[33867]: I0219 03:40:50.285870 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2fmpd" event={"ID":"8737f70a-6ee7-4124-a049-aefd62a7b446","Type":"ContainerDied","Data":"ae321a931733c968982025375b5b5dac6a76e8f7450114ff482cd36d4775b051"} Feb 19 03:40:50.285923 master-0 kubenswrapper[33867]: I0219 03:40:50.285915 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae321a931733c968982025375b5b5dac6a76e8f7450114ff482cd36d4775b051" Feb 19 03:40:50.324225 master-0 kubenswrapper[33867]: I0219 03:40:50.324185 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:50.337601 master-0 kubenswrapper[33867]: I0219 03:40:50.337504 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:50.417971 master-0 kubenswrapper[33867]: I0219 03:40:50.417925 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-scripts\") pod \"8737f70a-6ee7-4124-a049-aefd62a7b446\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " Feb 19 03:40:50.418223 master-0 kubenswrapper[33867]: I0219 03:40:50.418029 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-config-data\") pod \"8737f70a-6ee7-4124-a049-aefd62a7b446\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " Feb 19 03:40:50.418223 master-0 kubenswrapper[33867]: I0219 03:40:50.418085 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-combined-ca-bundle\") pod \"8737f70a-6ee7-4124-a049-aefd62a7b446\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " Feb 19 03:40:50.418223 master-0 kubenswrapper[33867]: I0219 03:40:50.418214 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-svc\") pod \"4b16754f-37e1-41d0-842a-05b2360ea3f9\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " Feb 19 03:40:50.418223 master-0 kubenswrapper[33867]: I0219 03:40:50.418238 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-nb\") pod \"4b16754f-37e1-41d0-842a-05b2360ea3f9\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " Feb 19 03:40:50.418223 master-0 kubenswrapper[33867]: I0219 03:40:50.418466 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsqph\" (UniqueName: \"kubernetes.io/projected/8737f70a-6ee7-4124-a049-aefd62a7b446-kube-api-access-rsqph\") pod \"8737f70a-6ee7-4124-a049-aefd62a7b446\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " Feb 19 03:40:50.418223 master-0 kubenswrapper[33867]: I0219 03:40:50.418517 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-sb\") pod \"4b16754f-37e1-41d0-842a-05b2360ea3f9\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " Feb 19 03:40:50.418822 master-0 kubenswrapper[33867]: I0219 03:40:50.418596 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzr5v\" (UniqueName: \"kubernetes.io/projected/4b16754f-37e1-41d0-842a-05b2360ea3f9-kube-api-access-mzr5v\") pod \"4b16754f-37e1-41d0-842a-05b2360ea3f9\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " Feb 19 03:40:50.418822 master-0 kubenswrapper[33867]: I0219 03:40:50.418614 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-swift-storage-0\") pod \"4b16754f-37e1-41d0-842a-05b2360ea3f9\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " Feb 19 03:40:50.418822 master-0 kubenswrapper[33867]: I0219 03:40:50.418689 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-config\") pod \"4b16754f-37e1-41d0-842a-05b2360ea3f9\" (UID: \"4b16754f-37e1-41d0-842a-05b2360ea3f9\") " Feb 19 03:40:50.418822 master-0 kubenswrapper[33867]: I0219 03:40:50.418729 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8737f70a-6ee7-4124-a049-aefd62a7b446-logs\") pod \"8737f70a-6ee7-4124-a049-aefd62a7b446\" (UID: \"8737f70a-6ee7-4124-a049-aefd62a7b446\") " Feb 19 03:40:50.421526 master-0 kubenswrapper[33867]: I0219 03:40:50.420887 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8737f70a-6ee7-4124-a049-aefd62a7b446-logs" (OuterVolumeSpecName: "logs") pod "8737f70a-6ee7-4124-a049-aefd62a7b446" (UID: "8737f70a-6ee7-4124-a049-aefd62a7b446"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:40:50.425697 master-0 kubenswrapper[33867]: I0219 03:40:50.425632 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-scripts" (OuterVolumeSpecName: "scripts") pod "8737f70a-6ee7-4124-a049-aefd62a7b446" (UID: "8737f70a-6ee7-4124-a049-aefd62a7b446"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:50.449758 master-0 kubenswrapper[33867]: I0219 03:40:50.449672 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8737f70a-6ee7-4124-a049-aefd62a7b446-kube-api-access-rsqph" (OuterVolumeSpecName: "kube-api-access-rsqph") pod "8737f70a-6ee7-4124-a049-aefd62a7b446" (UID: "8737f70a-6ee7-4124-a049-aefd62a7b446"). InnerVolumeSpecName "kube-api-access-rsqph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:50.456639 master-0 kubenswrapper[33867]: I0219 03:40:50.456566 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b16754f-37e1-41d0-842a-05b2360ea3f9-kube-api-access-mzr5v" (OuterVolumeSpecName: "kube-api-access-mzr5v") pod "4b16754f-37e1-41d0-842a-05b2360ea3f9" (UID: "4b16754f-37e1-41d0-842a-05b2360ea3f9"). InnerVolumeSpecName "kube-api-access-mzr5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:50.471611 master-0 kubenswrapper[33867]: I0219 03:40:50.471518 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-config-data" (OuterVolumeSpecName: "config-data") pod "8737f70a-6ee7-4124-a049-aefd62a7b446" (UID: "8737f70a-6ee7-4124-a049-aefd62a7b446"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:50.477039 master-0 kubenswrapper[33867]: I0219 03:40:50.476953 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8737f70a-6ee7-4124-a049-aefd62a7b446" (UID: "8737f70a-6ee7-4124-a049-aefd62a7b446"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:50.521364 master-0 kubenswrapper[33867]: I0219 03:40:50.502065 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-config" (OuterVolumeSpecName: "config") pod "4b16754f-37e1-41d0-842a-05b2360ea3f9" (UID: "4b16754f-37e1-41d0-842a-05b2360ea3f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:50.521364 master-0 kubenswrapper[33867]: I0219 03:40:50.508751 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b16754f-37e1-41d0-842a-05b2360ea3f9" (UID: "4b16754f-37e1-41d0-842a-05b2360ea3f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:50.521364 master-0 kubenswrapper[33867]: I0219 03:40:50.513119 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b16754f-37e1-41d0-842a-05b2360ea3f9" (UID: "4b16754f-37e1-41d0-842a-05b2360ea3f9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:50.521364 master-0 kubenswrapper[33867]: I0219 03:40:50.514180 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4b16754f-37e1-41d0-842a-05b2360ea3f9" (UID: "4b16754f-37e1-41d0-842a-05b2360ea3f9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:50.529563 master-0 kubenswrapper[33867]: I0219 03:40:50.529451 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8737f70a-6ee7-4124-a049-aefd62a7b446-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:50.529563 master-0 kubenswrapper[33867]: I0219 03:40:50.529516 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:50.529563 master-0 kubenswrapper[33867]: I0219 03:40:50.529531 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:50.529563 master-0 kubenswrapper[33867]: I0219 03:40:50.529546 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8737f70a-6ee7-4124-a049-aefd62a7b446-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:50.529563 master-0 kubenswrapper[33867]: I0219 03:40:50.529566 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:50.530064 master-0 kubenswrapper[33867]: I0219 03:40:50.529585 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:50.530064 master-0 kubenswrapper[33867]: I0219 03:40:50.529600 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsqph\" (UniqueName: \"kubernetes.io/projected/8737f70a-6ee7-4124-a049-aefd62a7b446-kube-api-access-rsqph\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:50.530064 master-0 kubenswrapper[33867]: I0219 03:40:50.529614 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzr5v\" (UniqueName: \"kubernetes.io/projected/4b16754f-37e1-41d0-842a-05b2360ea3f9-kube-api-access-mzr5v\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:50.530064 master-0 kubenswrapper[33867]: I0219 03:40:50.529629 33867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:50.530064 master-0 kubenswrapper[33867]: I0219 03:40:50.529642 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:50.533010 master-0 kubenswrapper[33867]: I0219 03:40:50.532963 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4b16754f-37e1-41d0-842a-05b2360ea3f9" (UID: "4b16754f-37e1-41d0-842a-05b2360ea3f9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:40:50.538771 master-0 kubenswrapper[33867]: W0219 03:40:50.538681 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52ede5f4_a9ae_46ab_a72c_6575bb04274e.slice/crio-3938d43fe1330922311ee7dd0656df6eda317edeed832490750b86272f109ed5 WatchSource:0}: Error finding container 3938d43fe1330922311ee7dd0656df6eda317edeed832490750b86272f109ed5: Status 404 returned error can't find the container with id 3938d43fe1330922311ee7dd0656df6eda317edeed832490750b86272f109ed5 Feb 19 03:40:50.543635 master-0 kubenswrapper[33867]: I0219 03:40:50.543533 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-lr9n7"] Feb 19 03:40:50.546507 master-0 kubenswrapper[33867]: I0219 03:40:50.546443 33867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 03:40:50.632323 master-0 kubenswrapper[33867]: I0219 03:40:50.632056 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b16754f-37e1-41d0-842a-05b2360ea3f9-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:51.308365 master-0 kubenswrapper[33867]: I0219 03:40:51.308204 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-db-sync-hjrc5" event={"ID":"4c64d242-8a65-449e-b014-dc5fc42878e2","Type":"ContainerStarted","Data":"eaa1796402746dafcd60dfab5ccc98b8c155d7252eb59114005de4955ca53483"} Feb 19 03:40:51.312142 master-0 kubenswrapper[33867]: I0219 03:40:51.312071 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-lr9n7" event={"ID":"52ede5f4-a9ae-46ab-a72c-6575bb04274e","Type":"ContainerStarted","Data":"3938d43fe1330922311ee7dd0656df6eda317edeed832490750b86272f109ed5"} Feb 19 03:40:51.314845 master-0 kubenswrapper[33867]: I0219 03:40:51.314772 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" Feb 19 03:40:51.315478 master-0 kubenswrapper[33867]: I0219 03:40:51.315435 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-79nl9" event={"ID":"5cb720f5-9fcb-4763-b481-5feb7cc0d395","Type":"ContainerStarted","Data":"ffe793697dc15f4876837918fb80731a301cbf5972feb45dc2376ea0bb9619c4"} Feb 19 03:40:51.315742 master-0 kubenswrapper[33867]: I0219 03:40:51.315677 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2fmpd" Feb 19 03:40:51.351013 master-0 kubenswrapper[33867]: I0219 03:40:51.350883 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-054a4-db-sync-hjrc5" podStartSLOduration=2.968962865 podStartE2EDuration="22.350850815s" podCreationTimestamp="2026-02-19 03:40:29 +0000 UTC" firstStartedPulling="2026-02-19 03:40:30.760229326 +0000 UTC m=+1036.056899937" lastFinishedPulling="2026-02-19 03:40:50.142117276 +0000 UTC m=+1055.438787887" observedRunningTime="2026-02-19 03:40:51.341783488 +0000 UTC m=+1056.638454109" watchObservedRunningTime="2026-02-19 03:40:51.350850815 +0000 UTC m=+1056.647521436" Feb 19 03:40:51.384813 master-0 kubenswrapper[33867]: I0219 03:40:51.381641 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-79nl9" podStartSLOduration=12.381616276 podStartE2EDuration="12.381616276s" podCreationTimestamp="2026-02-19 03:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:40:51.369706188 +0000 UTC m=+1056.666376799" watchObservedRunningTime="2026-02-19 03:40:51.381616276 +0000 UTC m=+1056.678286897" Feb 19 03:40:51.401484 master-0 kubenswrapper[33867]: I0219 03:40:51.401405 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9bb676bc9-rr48p"] Feb 19 03:40:51.415434 master-0 kubenswrapper[33867]: I0219 03:40:51.415341 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9bb676bc9-rr48p"] Feb 19 03:40:51.555073 master-0 kubenswrapper[33867]: I0219 03:40:51.555007 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-854445f596-6p84s"] Feb 19 03:40:51.555753 master-0 kubenswrapper[33867]: E0219 03:40:51.555535 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b16754f-37e1-41d0-842a-05b2360ea3f9" containerName="dnsmasq-dns" Feb 19 03:40:51.555753 master-0 kubenswrapper[33867]: I0219 03:40:51.555565 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b16754f-37e1-41d0-842a-05b2360ea3f9" containerName="dnsmasq-dns" Feb 19 03:40:51.555753 master-0 kubenswrapper[33867]: E0219 03:40:51.555597 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8737f70a-6ee7-4124-a049-aefd62a7b446" containerName="placement-db-sync" Feb 19 03:40:51.555753 master-0 kubenswrapper[33867]: I0219 03:40:51.555607 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8737f70a-6ee7-4124-a049-aefd62a7b446" containerName="placement-db-sync" Feb 19 03:40:51.555753 master-0 kubenswrapper[33867]: E0219 03:40:51.555669 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b16754f-37e1-41d0-842a-05b2360ea3f9" containerName="init" Feb 19 03:40:51.555753 master-0 kubenswrapper[33867]: I0219 03:40:51.555679 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b16754f-37e1-41d0-842a-05b2360ea3f9" containerName="init" Feb 19 03:40:51.556007 master-0 kubenswrapper[33867]: I0219 03:40:51.555943 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8737f70a-6ee7-4124-a049-aefd62a7b446" containerName="placement-db-sync" Feb 19 03:40:51.556007 master-0 kubenswrapper[33867]: I0219 03:40:51.555972 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b16754f-37e1-41d0-842a-05b2360ea3f9" containerName="dnsmasq-dns" Feb 19 03:40:51.558299 master-0 kubenswrapper[33867]: I0219 
03:40:51.558246 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.560143 master-0 kubenswrapper[33867]: I0219 03:40:51.560052 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 03:40:51.564860 master-0 kubenswrapper[33867]: I0219 03:40:51.562282 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 03:40:51.564860 master-0 kubenswrapper[33867]: I0219 03:40:51.562432 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 03:40:51.564860 master-0 kubenswrapper[33867]: I0219 03:40:51.562534 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 03:40:51.589437 master-0 kubenswrapper[33867]: I0219 03:40:51.589360 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-854445f596-6p84s"] Feb 19 03:40:51.659759 master-0 kubenswrapper[33867]: I0219 03:40:51.659667 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh5qg\" (UniqueName: \"kubernetes.io/projected/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-kube-api-access-lh5qg\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.660023 master-0 kubenswrapper[33867]: I0219 03:40:51.659768 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-scripts\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.660023 master-0 kubenswrapper[33867]: I0219 03:40:51.659888 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-config-data\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.660023 master-0 kubenswrapper[33867]: I0219 03:40:51.659939 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-internal-tls-certs\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.660023 master-0 kubenswrapper[33867]: I0219 03:40:51.659998 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-public-tls-certs\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.660199 master-0 kubenswrapper[33867]: I0219 03:40:51.660156 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-logs\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.660199 master-0 
kubenswrapper[33867]: I0219 03:40:51.660190 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-combined-ca-bundle\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.762247 master-0 kubenswrapper[33867]: I0219 03:40:51.762182 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh5qg\" (UniqueName: \"kubernetes.io/projected/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-kube-api-access-lh5qg\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.762247 master-0 kubenswrapper[33867]: I0219 03:40:51.762244 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-scripts\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.762626 master-0 kubenswrapper[33867]: I0219 03:40:51.762328 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-config-data\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.762837 master-0 kubenswrapper[33867]: I0219 03:40:51.762776 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-internal-tls-certs\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.763042 master-0 kubenswrapper[33867]: I0219 03:40:51.763005 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-public-tls-certs\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.763524 master-0 kubenswrapper[33867]: I0219 03:40:51.763461 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-logs\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.763601 master-0 kubenswrapper[33867]: I0219 03:40:51.763556 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-combined-ca-bundle\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.765337 master-0 kubenswrapper[33867]: I0219 03:40:51.764328 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-logs\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 
03:40:51.766784 master-0 kubenswrapper[33867]: I0219 03:40:51.766106 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-scripts\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.766784 master-0 kubenswrapper[33867]: I0219 03:40:51.766571 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-config-data\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.766784 master-0 kubenswrapper[33867]: I0219 03:40:51.766709 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-combined-ca-bundle\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.772762 master-0 kubenswrapper[33867]: I0219 03:40:51.767090 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-public-tls-certs\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.772762 master-0 kubenswrapper[33867]: I0219 03:40:51.767629 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-internal-tls-certs\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.784549 master-0 kubenswrapper[33867]: I0219 03:40:51.784480 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh5qg\" (UniqueName: \"kubernetes.io/projected/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-kube-api-access-lh5qg\") pod \"placement-854445f596-6p84s\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:51.878989 master-0 kubenswrapper[33867]: I0219 03:40:51.878929 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:52.346020 master-0 kubenswrapper[33867]: I0219 03:40:52.345872 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-854445f596-6p84s"] Feb 19 03:40:52.365353 master-0 kubenswrapper[33867]: W0219 03:40:52.365291 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6551788d_2eaa_4ed9_ac31_7f5e9edccf42.slice/crio-d45dfa625371938ef7aa2fe76d908b9585173cac7afaea948065a4630f80e583 WatchSource:0}: Error finding container d45dfa625371938ef7aa2fe76d908b9585173cac7afaea948065a4630f80e583: Status 404 returned error can't find the container with id d45dfa625371938ef7aa2fe76d908b9585173cac7afaea948065a4630f80e583 Feb 19 03:40:52.971613 master-0 kubenswrapper[33867]: I0219 03:40:52.971524 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b16754f-37e1-41d0-842a-05b2360ea3f9" path="/var/lib/kubelet/pods/4b16754f-37e1-41d0-842a-05b2360ea3f9/volumes" Feb 19 03:40:53.346512 master-0 kubenswrapper[33867]: I0219 03:40:53.346312 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-854445f596-6p84s" event={"ID":"6551788d-2eaa-4ed9-ac31-7f5e9edccf42","Type":"ContainerStarted","Data":"a15c4d58c92606ea1aafe4e9b79b0ceb640bf767a1473919f174d4180301e579"} Feb 19 03:40:53.346512 master-0 kubenswrapper[33867]: I0219 03:40:53.346390 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-854445f596-6p84s" event={"ID":"6551788d-2eaa-4ed9-ac31-7f5e9edccf42","Type":"ContainerStarted","Data":"fbde71e6414ff31ce5a67a30596b643ba032d9af6b93137b258f3cae7fe4b717"} Feb 19 03:40:53.346512 master-0 kubenswrapper[33867]: I0219 03:40:53.346403 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-854445f596-6p84s" event={"ID":"6551788d-2eaa-4ed9-ac31-7f5e9edccf42","Type":"ContainerStarted","Data":"d45dfa625371938ef7aa2fe76d908b9585173cac7afaea948065a4630f80e583"} Feb 19 03:40:53.346920 master-0 kubenswrapper[33867]: I0219 03:40:53.346743 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:53.409880 master-0 kubenswrapper[33867]: I0219 03:40:53.409753 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-854445f596-6p84s" podStartSLOduration=2.40972693 podStartE2EDuration="2.40972693s" podCreationTimestamp="2026-02-19 03:40:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:40:53.408009271 +0000 UTC m=+1058.704679882" watchObservedRunningTime="2026-02-19 03:40:53.40972693 +0000 UTC m=+1058.706397541" Feb 19 03:40:54.358645 master-0 kubenswrapper[33867]: I0219 03:40:54.358505 33867 generic.go:334] "Generic (PLEG): container finished" podID="5cb720f5-9fcb-4763-b481-5feb7cc0d395" containerID="ffe793697dc15f4876837918fb80731a301cbf5972feb45dc2376ea0bb9619c4" exitCode=0 Feb 19 03:40:54.358645 master-0 kubenswrapper[33867]: I0219 03:40:54.358587 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-79nl9" event={"ID":"5cb720f5-9fcb-4763-b481-5feb7cc0d395","Type":"ContainerDied","Data":"ffe793697dc15f4876837918fb80731a301cbf5972feb45dc2376ea0bb9619c4"} Feb 19 03:40:54.359279 master-0 kubenswrapper[33867]: I0219 03:40:54.358810 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-854445f596-6p84s" Feb 19 03:40:55.289612 master-0 kubenswrapper[33867]: I0219 03:40:55.289464 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9bb676bc9-rr48p" podUID="4b16754f-37e1-41d0-842a-05b2360ea3f9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.204:5353: i/o timeout" Feb 19 03:40:57.403694 master-0 kubenswrapper[33867]: I0219 03:40:57.403630 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-79nl9" event={"ID":"5cb720f5-9fcb-4763-b481-5feb7cc0d395","Type":"ContainerDied","Data":"85f11375a1e0d034f7e0964231b61acff37340cb05e24db06a8d9d90f9174375"} Feb 19 03:40:57.403694 master-0 kubenswrapper[33867]: I0219 03:40:57.403690 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85f11375a1e0d034f7e0964231b61acff37340cb05e24db06a8d9d90f9174375" Feb 19 03:40:57.523897 master-0 kubenswrapper[33867]: I0219 03:40:57.523839 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:57.611668 master-0 kubenswrapper[33867]: I0219 03:40:57.600142 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgrdx\" (UniqueName: \"kubernetes.io/projected/5cb720f5-9fcb-4763-b481-5feb7cc0d395-kube-api-access-pgrdx\") pod \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " Feb 19 03:40:57.611668 master-0 kubenswrapper[33867]: I0219 03:40:57.600223 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-config-data\") pod \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " Feb 19 03:40:57.611668 master-0 kubenswrapper[33867]: I0219 03:40:57.600341 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-combined-ca-bundle\") pod \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " Feb 19 03:40:57.611668 master-0 kubenswrapper[33867]: I0219 03:40:57.600514 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-credential-keys\") pod \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " Feb 19 03:40:57.611668 master-0 kubenswrapper[33867]: I0219 03:40:57.600580 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-fernet-keys\") pod \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " Feb 19 03:40:57.611668 master-0 kubenswrapper[33867]: I0219 03:40:57.600639 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-scripts\") pod \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\" (UID: \"5cb720f5-9fcb-4763-b481-5feb7cc0d395\") " Feb 19 03:40:57.611668 master-0 kubenswrapper[33867]: I0219 03:40:57.604927 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb720f5-9fcb-4763-b481-5feb7cc0d395-kube-api-access-pgrdx" 
(OuterVolumeSpecName: "kube-api-access-pgrdx") pod "5cb720f5-9fcb-4763-b481-5feb7cc0d395" (UID: "5cb720f5-9fcb-4763-b481-5feb7cc0d395"). InnerVolumeSpecName "kube-api-access-pgrdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:40:57.611668 master-0 kubenswrapper[33867]: I0219 03:40:57.605540 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-scripts" (OuterVolumeSpecName: "scripts") pod "5cb720f5-9fcb-4763-b481-5feb7cc0d395" (UID: "5cb720f5-9fcb-4763-b481-5feb7cc0d395"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:57.611668 master-0 kubenswrapper[33867]: I0219 03:40:57.605585 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5cb720f5-9fcb-4763-b481-5feb7cc0d395" (UID: "5cb720f5-9fcb-4763-b481-5feb7cc0d395"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:57.611668 master-0 kubenswrapper[33867]: I0219 03:40:57.606090 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5cb720f5-9fcb-4763-b481-5feb7cc0d395" (UID: "5cb720f5-9fcb-4763-b481-5feb7cc0d395"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:57.631411 master-0 kubenswrapper[33867]: I0219 03:40:57.631337 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cb720f5-9fcb-4763-b481-5feb7cc0d395" (UID: "5cb720f5-9fcb-4763-b481-5feb7cc0d395"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:57.642578 master-0 kubenswrapper[33867]: I0219 03:40:57.642493 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-config-data" (OuterVolumeSpecName: "config-data") pod "5cb720f5-9fcb-4763-b481-5feb7cc0d395" (UID: "5cb720f5-9fcb-4763-b481-5feb7cc0d395"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:40:57.706032 master-0 kubenswrapper[33867]: I0219 03:40:57.705943 33867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:57.706032 master-0 kubenswrapper[33867]: I0219 03:40:57.706001 33867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:57.706032 master-0 kubenswrapper[33867]: I0219 03:40:57.706011 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:57.706032 master-0 kubenswrapper[33867]: I0219 03:40:57.706021 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgrdx\" (UniqueName: \"kubernetes.io/projected/5cb720f5-9fcb-4763-b481-5feb7cc0d395-kube-api-access-pgrdx\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:57.706032 master-0 kubenswrapper[33867]: I0219 03:40:57.706037 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:57.706477 master-0 kubenswrapper[33867]: I0219 03:40:57.706075 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb720f5-9fcb-4763-b481-5feb7cc0d395-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:40:58.419619 master-0 kubenswrapper[33867]: I0219 03:40:58.419538 33867 generic.go:334] "Generic (PLEG): container finished" podID="52ede5f4-a9ae-46ab-a72c-6575bb04274e" containerID="02de6761ebd5c08cf3e8572c3f2af8d4010bdf840502a627e7c45b51ff211373" exitCode=0 Feb 19 03:40:58.420324 master-0 kubenswrapper[33867]: I0219 03:40:58.419629 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-lr9n7" event={"ID":"52ede5f4-a9ae-46ab-a72c-6575bb04274e","Type":"ContainerDied","Data":"02de6761ebd5c08cf3e8572c3f2af8d4010bdf840502a627e7c45b51ff211373"} Feb 19 03:40:58.420324 master-0 kubenswrapper[33867]: I0219 03:40:58.419677 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-79nl9" Feb 19 03:40:58.653464 master-0 kubenswrapper[33867]: I0219 03:40:58.652849 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-858d748b68-dmpbz"] Feb 19 03:40:58.653700 master-0 kubenswrapper[33867]: E0219 03:40:58.653550 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb720f5-9fcb-4763-b481-5feb7cc0d395" containerName="keystone-bootstrap" Feb 19 03:40:58.653700 master-0 kubenswrapper[33867]: I0219 03:40:58.653569 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb720f5-9fcb-4763-b481-5feb7cc0d395" containerName="keystone-bootstrap" Feb 19 03:40:58.654130 master-0 kubenswrapper[33867]: I0219 03:40:58.653833 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb720f5-9fcb-4763-b481-5feb7cc0d395" containerName="keystone-bootstrap" Feb 19 03:40:58.654944 master-0 kubenswrapper[33867]: I0219 03:40:58.654785 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.658535 master-0 kubenswrapper[33867]: I0219 03:40:58.657295 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 03:40:58.664493 master-0 kubenswrapper[33867]: I0219 03:40:58.662638 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 03:40:58.664493 master-0 kubenswrapper[33867]: I0219 03:40:58.662957 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 03:40:58.664493 master-0 kubenswrapper[33867]: I0219 03:40:58.663137 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 03:40:58.664493 master-0 kubenswrapper[33867]: I0219 03:40:58.663293 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 03:40:58.668286 master-0 kubenswrapper[33867]: I0219 03:40:58.666843 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-858d748b68-dmpbz"] Feb 19 03:40:58.742701 master-0 kubenswrapper[33867]: I0219 03:40:58.732485 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-public-tls-certs\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.742701 master-0 kubenswrapper[33867]: I0219 03:40:58.732616 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-internal-tls-certs\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.742701 master-0 kubenswrapper[33867]: I0219 03:40:58.732665 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2bz\" (UniqueName: \"kubernetes.io/projected/26a4d640-b07f-4b27-91e2-bc4449a4213c-kube-api-access-wn2bz\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.742701 master-0 kubenswrapper[33867]: I0219 03:40:58.732760 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-scripts\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.742701 master-0 kubenswrapper[33867]: I0219 03:40:58.732826 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-credential-keys\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.742701 master-0 kubenswrapper[33867]: I0219 03:40:58.732861 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-combined-ca-bundle\") pod \"keystone-858d748b68-dmpbz\" (UID: 
\"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.742701 master-0 kubenswrapper[33867]: I0219 03:40:58.732988 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-fernet-keys\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.742701 master-0 kubenswrapper[33867]: I0219 03:40:58.733032 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-config-data\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.835011 master-0 kubenswrapper[33867]: I0219 03:40:58.834927 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-config-data\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.835245 master-0 kubenswrapper[33867]: I0219 03:40:58.835061 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-public-tls-certs\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.835245 master-0 kubenswrapper[33867]: I0219 03:40:58.835104 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-internal-tls-certs\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.835245 master-0 kubenswrapper[33867]: I0219 03:40:58.835128 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn2bz\" (UniqueName: \"kubernetes.io/projected/26a4d640-b07f-4b27-91e2-bc4449a4213c-kube-api-access-wn2bz\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.835877 master-0 kubenswrapper[33867]: I0219 03:40:58.835823 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-scripts\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.836001 master-0 kubenswrapper[33867]: I0219 03:40:58.835963 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-credential-keys\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.836063 master-0 kubenswrapper[33867]: I0219 03:40:58.836000 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-combined-ca-bundle\") pod 
\"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.836228 master-0 kubenswrapper[33867]: I0219 03:40:58.836131 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-fernet-keys\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.838661 master-0 kubenswrapper[33867]: I0219 03:40:58.838630 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-public-tls-certs\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.839661 master-0 kubenswrapper[33867]: I0219 03:40:58.839628 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-credential-keys\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.839661 master-0 kubenswrapper[33867]: I0219 03:40:58.839650 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-scripts\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.840950 master-0 kubenswrapper[33867]: I0219 03:40:58.840916 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-fernet-keys\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.842401 master-0 kubenswrapper[33867]: I0219 03:40:58.842346 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-internal-tls-certs\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.842847 master-0 kubenswrapper[33867]: I0219 03:40:58.842789 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-config-data\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.844844 master-0 kubenswrapper[33867]: I0219 03:40:58.844799 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a4d640-b07f-4b27-91e2-bc4449a4213c-combined-ca-bundle\") pod \"keystone-858d748b68-dmpbz\" (UID: \"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.852005 master-0 kubenswrapper[33867]: I0219 03:40:58.851955 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn2bz\" (UniqueName: \"kubernetes.io/projected/26a4d640-b07f-4b27-91e2-bc4449a4213c-kube-api-access-wn2bz\") pod \"keystone-858d748b68-dmpbz\" (UID: 
\"26a4d640-b07f-4b27-91e2-bc4449a4213c\") " pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:58.985777 master-0 kubenswrapper[33867]: I0219 03:40:58.985519 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:40:59.435584 master-0 kubenswrapper[33867]: I0219 03:40:59.435517 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-lr9n7" event={"ID":"52ede5f4-a9ae-46ab-a72c-6575bb04274e","Type":"ContainerStarted","Data":"d4d68324cbf3d5d95dbb06b27c1427136717b42f247eef3684b268e8fc5d9241"} Feb 19 03:40:59.485664 master-0 kubenswrapper[33867]: I0219 03:40:59.485570 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-lr9n7" podStartSLOduration=13.650930071 podStartE2EDuration="20.485544128s" podCreationTimestamp="2026-02-19 03:40:39 +0000 UTC" firstStartedPulling="2026-02-19 03:40:50.546316992 +0000 UTC m=+1055.842987603" lastFinishedPulling="2026-02-19 03:40:57.380931049 +0000 UTC m=+1062.677601660" observedRunningTime="2026-02-19 03:40:59.47784462 +0000 UTC m=+1064.774515231" watchObservedRunningTime="2026-02-19 03:40:59.485544128 +0000 UTC m=+1064.782214739" Feb 19 03:40:59.538316 master-0 kubenswrapper[33867]: I0219 03:40:59.538234 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-858d748b68-dmpbz"] Feb 19 03:41:00.450423 master-0 kubenswrapper[33867]: I0219 03:41:00.450339 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-858d748b68-dmpbz" event={"ID":"26a4d640-b07f-4b27-91e2-bc4449a4213c","Type":"ContainerStarted","Data":"80e354d7a9ef219ccde0d330765801d8609d343b688cf0824437f128f9ea02f9"} Feb 19 03:41:00.451408 master-0 kubenswrapper[33867]: I0219 03:41:00.451380 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-858d748b68-dmpbz" event={"ID":"26a4d640-b07f-4b27-91e2-bc4449a4213c","Type":"ContainerStarted","Data":"b169c1f8e25c713f7ee4dd0a6b825fd6c3a261a72a09bb4ab83db8785f4b29dc"} Feb 19 03:41:00.451531 master-0 kubenswrapper[33867]: I0219 03:41:00.451513 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:41:00.452627 master-0 kubenswrapper[33867]: I0219 03:41:00.452593 33867 generic.go:334] "Generic (PLEG): container finished" podID="4c64d242-8a65-449e-b014-dc5fc42878e2" containerID="eaa1796402746dafcd60dfab5ccc98b8c155d7252eb59114005de4955ca53483" exitCode=0 Feb 19 03:41:00.452908 master-0 kubenswrapper[33867]: I0219 03:41:00.452693 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-db-sync-hjrc5" event={"ID":"4c64d242-8a65-449e-b014-dc5fc42878e2","Type":"ContainerDied","Data":"eaa1796402746dafcd60dfab5ccc98b8c155d7252eb59114005de4955ca53483"} Feb 19 03:41:00.476780 master-0 kubenswrapper[33867]: I0219 03:41:00.476612 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-858d748b68-dmpbz" podStartSLOduration=2.476581643 podStartE2EDuration="2.476581643s" podCreationTimestamp="2026-02-19 03:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:00.468969207 +0000 UTC m=+1065.765639828" watchObservedRunningTime="2026-02-19 03:41:00.476581643 +0000 UTC m=+1065.773252284" Feb 19 03:41:01.940056 master-0 kubenswrapper[33867]: I0219 03:41:01.940016 33867 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:41:02.012955 master-0 kubenswrapper[33867]: I0219 03:41:02.012844 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-528bb\" (UniqueName: \"kubernetes.io/projected/4c64d242-8a65-449e-b014-dc5fc42878e2-kube-api-access-528bb\") pod \"4c64d242-8a65-449e-b014-dc5fc42878e2\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " Feb 19 03:41:02.012955 master-0 kubenswrapper[33867]: I0219 03:41:02.012963 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-combined-ca-bundle\") pod \"4c64d242-8a65-449e-b014-dc5fc42878e2\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " Feb 19 03:41:02.013231 master-0 kubenswrapper[33867]: I0219 03:41:02.013170 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c64d242-8a65-449e-b014-dc5fc42878e2-etc-machine-id\") pod \"4c64d242-8a65-449e-b014-dc5fc42878e2\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " Feb 19 03:41:02.013276 master-0 kubenswrapper[33867]: I0219 03:41:02.013238 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-scripts\") pod \"4c64d242-8a65-449e-b014-dc5fc42878e2\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " Feb 19 03:41:02.013339 master-0 kubenswrapper[33867]: I0219 03:41:02.013316 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-config-data\") pod \"4c64d242-8a65-449e-b014-dc5fc42878e2\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " Feb 19 03:41:02.013413 master-0 kubenswrapper[33867]: I0219 03:41:02.013379 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-db-sync-config-data\") pod \"4c64d242-8a65-449e-b014-dc5fc42878e2\" (UID: \"4c64d242-8a65-449e-b014-dc5fc42878e2\") " Feb 19 03:41:02.013413 master-0 kubenswrapper[33867]: I0219 03:41:02.013402 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c64d242-8a65-449e-b014-dc5fc42878e2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4c64d242-8a65-449e-b014-dc5fc42878e2" (UID: "4c64d242-8a65-449e-b014-dc5fc42878e2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:02.014195 master-0 kubenswrapper[33867]: I0219 03:41:02.014167 33867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c64d242-8a65-449e-b014-dc5fc42878e2-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:02.018279 master-0 kubenswrapper[33867]: I0219 03:41:02.017727 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c64d242-8a65-449e-b014-dc5fc42878e2-kube-api-access-528bb" (OuterVolumeSpecName: "kube-api-access-528bb") pod "4c64d242-8a65-449e-b014-dc5fc42878e2" (UID: "4c64d242-8a65-449e-b014-dc5fc42878e2"). InnerVolumeSpecName "kube-api-access-528bb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:02.018279 master-0 kubenswrapper[33867]: I0219 03:41:02.017921 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4c64d242-8a65-449e-b014-dc5fc42878e2" (UID: "4c64d242-8a65-449e-b014-dc5fc42878e2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:02.018279 master-0 kubenswrapper[33867]: I0219 03:41:02.018027 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-scripts" (OuterVolumeSpecName: "scripts") pod "4c64d242-8a65-449e-b014-dc5fc42878e2" (UID: "4c64d242-8a65-449e-b014-dc5fc42878e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:02.046178 master-0 kubenswrapper[33867]: I0219 03:41:02.046132 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c64d242-8a65-449e-b014-dc5fc42878e2" (UID: "4c64d242-8a65-449e-b014-dc5fc42878e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:02.070686 master-0 kubenswrapper[33867]: I0219 03:41:02.070626 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-config-data" (OuterVolumeSpecName: "config-data") pod "4c64d242-8a65-449e-b014-dc5fc42878e2" (UID: "4c64d242-8a65-449e-b014-dc5fc42878e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:02.117096 master-0 kubenswrapper[33867]: I0219 03:41:02.116969 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:02.117096 master-0 kubenswrapper[33867]: I0219 03:41:02.117019 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:02.117096 master-0 kubenswrapper[33867]: I0219 03:41:02.117030 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:02.117096 master-0 kubenswrapper[33867]: I0219 03:41:02.117039 33867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c64d242-8a65-449e-b014-dc5fc42878e2-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:02.117096 master-0 kubenswrapper[33867]: I0219 03:41:02.117049 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-528bb\" (UniqueName: \"kubernetes.io/projected/4c64d242-8a65-449e-b014-dc5fc42878e2-kube-api-access-528bb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:02.480084 master-0 kubenswrapper[33867]: I0219 03:41:02.480029 33867 generic.go:334] "Generic (PLEG): container finished" podID="b067fa1c-719d-41db-a4be-d5d7d1125a67" containerID="b2ba44abc1386dc028a3c98d31fe9c8fe407e33d34bb426a05961ab500612f4d" exitCode=0 Feb 19 03:41:02.480388 master-0 kubenswrapper[33867]: I0219 03:41:02.480279 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cwnd9" event={"ID":"b067fa1c-719d-41db-a4be-d5d7d1125a67","Type":"ContainerDied","Data":"b2ba44abc1386dc028a3c98d31fe9c8fe407e33d34bb426a05961ab500612f4d"} Feb 19 03:41:02.483101 master-0 kubenswrapper[33867]: I0219 03:41:02.483070 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-db-sync-hjrc5" event={"ID":"4c64d242-8a65-449e-b014-dc5fc42878e2","Type":"ContainerDied","Data":"1abb0e3eef88cd70538b191cbcea8ff4b95fa99b5c6c9d010d39c5117ecd3909"} Feb 19 03:41:02.483323 master-0 kubenswrapper[33867]: I0219 03:41:02.483304 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1abb0e3eef88cd70538b191cbcea8ff4b95fa99b5c6c9d010d39c5117ecd3909" Feb 19 03:41:02.483429 master-0 kubenswrapper[33867]: I0219 03:41:02.483275 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-054a4-db-sync-hjrc5" Feb 19 03:41:02.886288 master-0 kubenswrapper[33867]: I0219 03:41:02.878894 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-054a4-scheduler-0"] Feb 19 03:41:02.886288 master-0 kubenswrapper[33867]: E0219 03:41:02.879617 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c64d242-8a65-449e-b014-dc5fc42878e2" containerName="cinder-054a4-db-sync" Feb 19 03:41:02.886288 master-0 kubenswrapper[33867]: I0219 03:41:02.879638 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c64d242-8a65-449e-b014-dc5fc42878e2" containerName="cinder-054a4-db-sync" Feb 19 03:41:02.886288 master-0 kubenswrapper[33867]: I0219 03:41:02.879906 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c64d242-8a65-449e-b014-dc5fc42878e2" containerName="cinder-054a4-db-sync" Feb 19 03:41:02.886288 master-0 kubenswrapper[33867]: I0219 03:41:02.882644 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:02.886288 master-0 kubenswrapper[33867]: I0219 03:41:02.885454 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-scripts" Feb 19 03:41:02.886288 master-0 kubenswrapper[33867]: I0219 03:41:02.885809 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-scheduler-config-data" Feb 19 03:41:02.903046 master-0 kubenswrapper[33867]: I0219 03:41:02.887943 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-config-data" Feb 19 03:41:02.910865 master-0 kubenswrapper[33867]: I0219 03:41:02.910772 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-scheduler-0"] Feb 19 03:41:03.045378 master-0 kubenswrapper[33867]: I0219 03:41:03.045207 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5599dc5fdc-wpfjn"] Feb 19 03:41:03.047972 master-0 kubenswrapper[33867]: I0219 03:41:03.047923 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data-custom\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.048144 master-0 kubenswrapper[33867]: I0219 03:41:03.048122 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87010165-a8cc-43e1-b9b6-af44f39f0c46-etc-machine-id\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.048460 master-0 kubenswrapper[33867]: I0219 03:41:03.048443 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwzcn\" (UniqueName: \"kubernetes.io/projected/87010165-a8cc-43e1-b9b6-af44f39f0c46-kube-api-access-nwzcn\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.048605 master-0 kubenswrapper[33867]: I0219 03:41:03.048591 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-scripts\") pod 
\"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.048739 master-0 kubenswrapper[33867]: I0219 03:41:03.048726 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-combined-ca-bundle\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.049020 master-0 kubenswrapper[33867]: I0219 03:41:03.049001 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.049774 master-0 kubenswrapper[33867]: I0219 03:41:03.049229 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.062656 master-0 kubenswrapper[33867]: I0219 03:41:03.059373 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-054a4-volume-lvm-iscsi-0"] Feb 19 03:41:03.062656 master-0 kubenswrapper[33867]: I0219 03:41:03.061979 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.068589 master-0 kubenswrapper[33867]: I0219 03:41:03.066336 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-volume-lvm-iscsi-config-data" Feb 19 03:41:03.080794 master-0 kubenswrapper[33867]: I0219 03:41:03.080723 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5599dc5fdc-wpfjn"] Feb 19 03:41:03.119042 master-0 kubenswrapper[33867]: I0219 03:41:03.118983 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-volume-lvm-iscsi-0"] Feb 19 03:41:03.146950 master-0 kubenswrapper[33867]: I0219 03:41:03.140915 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-054a4-backup-0"] Feb 19 03:41:03.146950 master-0 kubenswrapper[33867]: I0219 03:41:03.145017 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.151916 master-0 kubenswrapper[33867]: I0219 03:41:03.147673 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-backup-config-data" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.155810 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnv9m\" (UniqueName: \"kubernetes.io/projected/4bf517d1-637f-48a9-b008-b0efe070ed50-kube-api-access-wnv9m\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.155871 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.155897 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-machine-id\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.155939 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-nb\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.155955 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.155974 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data-custom\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.155994 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-sys\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156016 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-lib-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " 
pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156032 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87010165-a8cc-43e1-b9b6-af44f39f0c46-etc-machine-id\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156050 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-swift-storage-0\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156066 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-scripts\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156092 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvn6b\" (UniqueName: \"kubernetes.io/projected/37c1b200-08de-46e3-9588-20ee09a017da-kube-api-access-zvn6b\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156160 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwzcn\" (UniqueName: \"kubernetes.io/projected/87010165-a8cc-43e1-b9b6-af44f39f0c46-kube-api-access-nwzcn\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156187 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data-custom\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156225 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-config\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156278 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156305 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-nvme\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156343 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-scripts\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156378 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-run\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156405 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-dev\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156426 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-svc\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156449 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-combined-ca-bundle\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156480 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-iscsi\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156505 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-lib-modules\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 kubenswrapper[33867]: I0219 03:41:03.156524 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-combined-ca-bundle\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.156523 master-0 
kubenswrapper[33867]: I0219 03:41:03.156558 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-sb\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.157545 master-0 kubenswrapper[33867]: I0219 03:41:03.156586 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-brick\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.160245 master-0 kubenswrapper[33867]: I0219 03:41:03.160206 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87010165-a8cc-43e1-b9b6-af44f39f0c46-etc-machine-id\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.168524 master-0 kubenswrapper[33867]: I0219 03:41:03.166874 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-scripts\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.168524 master-0 kubenswrapper[33867]: I0219 03:41:03.167714 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.177718 master-0 kubenswrapper[33867]: I0219 03:41:03.168611 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data-custom\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.179005 master-0 kubenswrapper[33867]: I0219 03:41:03.178968 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-combined-ca-bundle\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.191321 master-0 kubenswrapper[33867]: I0219 03:41:03.189715 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-backup-0"] Feb 19 03:41:03.204056 master-0 kubenswrapper[33867]: I0219 03:41:03.203821 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwzcn\" (UniqueName: \"kubernetes.io/projected/87010165-a8cc-43e1-b9b6-af44f39f0c46-kube-api-access-nwzcn\") pod \"cinder-054a4-scheduler-0\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.259169 master-0 kubenswrapper[33867]: I0219 03:41:03.258085 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-config\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.259169 master-0 kubenswrapper[33867]: I0219 03:41:03.259178 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.259592 master-0 kubenswrapper[33867]: I0219 03:41:03.259200 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-nvme\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.260528 master-0 kubenswrapper[33867]: I0219 03:41:03.259060 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-config\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.260598 master-0 kubenswrapper[33867]: I0219 03:41:03.259466 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-nvme\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.260598 master-0 kubenswrapper[33867]: I0219 03:41:03.260578 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-run\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.260683 master-0 kubenswrapper[33867]: I0219 03:41:03.260661 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-run\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.260726 master-0 kubenswrapper[33867]: I0219 03:41:03.260610 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-nvme\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.260726 master-0 kubenswrapper[33867]: I0219 03:41:03.260710 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-dev\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.260795 master-0 kubenswrapper[33867]: I0219 03:41:03.260757 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-dev\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.260834 master-0 kubenswrapper[33867]: I0219 03:41:03.260786 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-svc\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.260898 master-0 kubenswrapper[33867]: I0219 03:41:03.260871 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-combined-ca-bundle\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.260938 master-0 kubenswrapper[33867]: I0219 03:41:03.260896 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-iscsi\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.261001 master-0 kubenswrapper[33867]: I0219 03:41:03.260984 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-iscsi\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.261095 master-0 kubenswrapper[33867]: I0219 03:41:03.261071 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-iscsi\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.261154 master-0 kubenswrapper[33867]: I0219 03:41:03.261104 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-lib-modules\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.261154 master-0 kubenswrapper[33867]: I0219 03:41:03.261130 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-combined-ca-bundle\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.261232 master-0 kubenswrapper[33867]: I0219 03:41:03.261170 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-lib-modules\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.261232 master-0 kubenswrapper[33867]: I0219 03:41:03.261221 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-machine-id\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.261647 master-0 kubenswrapper[33867]: I0219 03:41:03.261612 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-svc\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.261716 master-0 kubenswrapper[33867]: I0219 03:41:03.261619 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-sb\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.261766 master-0 kubenswrapper[33867]: I0219 03:41:03.261753 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-brick\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.261812 master-0 kubenswrapper[33867]: I0219 03:41:03.261793 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-scripts\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.261854 master-0 kubenswrapper[33867]: I0219 03:41:03.261823 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhlzd\" (UniqueName: \"kubernetes.io/projected/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-kube-api-access-hhlzd\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.261905 master-0 kubenswrapper[33867]: I0219 03:41:03.261877 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnv9m\" (UniqueName: \"kubernetes.io/projected/4bf517d1-637f-48a9-b008-b0efe070ed50-kube-api-access-wnv9m\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.261952 master-0 kubenswrapper[33867]: I0219 03:41:03.261902 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.261952 master-0 kubenswrapper[33867]: I0219 03:41:03.261944 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-machine-id\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" 
Feb 19 03:41:03.262057 master-0 kubenswrapper[33867]: I0219 03:41:03.262031 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-nb\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.262650 master-0 kubenswrapper[33867]: I0219 03:41:03.262617 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.262718 master-0 kubenswrapper[33867]: I0219 03:41:03.262660 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-brick\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.262718 master-0 kubenswrapper[33867]: I0219 03:41:03.262516 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-brick\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.262718 master-0 kubenswrapper[33867]: I0219 03:41:03.262521 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-sb\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.262808 master-0 kubenswrapper[33867]: I0219 03:41:03.262314 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-machine-id\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.262808 master-0 kubenswrapper[33867]: I0219 03:41:03.262770 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-sys\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.262808 master-0 kubenswrapper[33867]: I0219 03:41:03.262794 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.262922 master-0 kubenswrapper[33867]: I0219 03:41:03.262839 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-lib-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: 
\"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.262922 master-0 kubenswrapper[33867]: I0219 03:41:03.262873 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-swift-storage-0\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.262922 master-0 kubenswrapper[33867]: I0219 03:41:03.262895 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-scripts\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.263014 master-0 kubenswrapper[33867]: I0219 03:41:03.262954 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-run\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.263014 master-0 kubenswrapper[33867]: I0219 03:41:03.262962 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.263078 master-0 kubenswrapper[33867]: I0219 03:41:03.262981 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvn6b\" (UniqueName: \"kubernetes.io/projected/37c1b200-08de-46e3-9588-20ee09a017da-kube-api-access-zvn6b\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.263119 master-0 kubenswrapper[33867]: I0219 03:41:03.263081 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-nb\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.263163 master-0 kubenswrapper[33867]: I0219 03:41:03.263115 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-lib-modules\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.263329 master-0 kubenswrapper[33867]: I0219 03:41:03.263213 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-lib-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.263329 master-0 kubenswrapper[33867]: I0219 03:41:03.263317 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-sys\") pod 
\"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.263514 master-0 kubenswrapper[33867]: I0219 03:41:03.263474 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-lib-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.263577 master-0 kubenswrapper[33867]: I0219 03:41:03.263527 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data-custom\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.263659 master-0 kubenswrapper[33867]: I0219 03:41:03.263619 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-sys\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.263717 master-0 kubenswrapper[33867]: I0219 03:41:03.263690 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-dev\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.263752 master-0 kubenswrapper[33867]: I0219 03:41:03.263730 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data-custom\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.264603 master-0 kubenswrapper[33867]: I0219 03:41:03.264195 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-swift-storage-0\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.264603 master-0 kubenswrapper[33867]: I0219 03:41:03.264569 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-combined-ca-bundle\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.266413 master-0 kubenswrapper[33867]: I0219 03:41:03.265365 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.271276 master-0 kubenswrapper[33867]: I0219 03:41:03.269591 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-scripts\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.285872 master-0 kubenswrapper[33867]: I0219 03:41:03.271626 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data-custom\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.285872 master-0 kubenswrapper[33867]: I0219 03:41:03.275731 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:03.285872 master-0 kubenswrapper[33867]: I0219 03:41:03.285554 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnv9m\" (UniqueName: \"kubernetes.io/projected/4bf517d1-637f-48a9-b008-b0efe070ed50-kube-api-access-wnv9m\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.297287 master-0 kubenswrapper[33867]: I0219 03:41:03.286785 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvn6b\" (UniqueName: \"kubernetes.io/projected/37c1b200-08de-46e3-9588-20ee09a017da-kube-api-access-zvn6b\") pod \"dnsmasq-dns-5599dc5fdc-wpfjn\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.367182 master-0 kubenswrapper[33867]: I0219 03:41:03.366795 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-machine-id\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367182 master-0 kubenswrapper[33867]: I0219 03:41:03.366967 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-machine-id\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367182 master-0 kubenswrapper[33867]: I0219 03:41:03.367073 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-scripts\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367182 master-0 kubenswrapper[33867]: I0219 03:41:03.367122 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhlzd\" (UniqueName: \"kubernetes.io/projected/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-kube-api-access-hhlzd\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367182 master-0 kubenswrapper[33867]: I0219 03:41:03.367170 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " 
pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367577 master-0 kubenswrapper[33867]: I0219 03:41:03.367394 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367577 master-0 kubenswrapper[33867]: I0219 03:41:03.367464 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-brick\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367577 master-0 kubenswrapper[33867]: I0219 03:41:03.367534 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367681 master-0 kubenswrapper[33867]: I0219 03:41:03.367597 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-run\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367681 master-0 kubenswrapper[33867]: I0219 03:41:03.367627 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-lib-modules\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367748 master-0 kubenswrapper[33867]: I0219 03:41:03.367729 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-lib-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367786 master-0 kubenswrapper[33867]: I0219 03:41:03.367757 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data-custom\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367819 master-0 kubenswrapper[33867]: I0219 03:41:03.367797 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-sys\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.367853 master-0 kubenswrapper[33867]: I0219 03:41:03.367823 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-dev\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.368009 master-0 kubenswrapper[33867]: I0219 03:41:03.367974 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-sys\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.368009 master-0 kubenswrapper[33867]: I0219 03:41:03.368003 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-nvme\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.368092 master-0 kubenswrapper[33867]: I0219 03:41:03.368064 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-combined-ca-bundle\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.368092 master-0 kubenswrapper[33867]: I0219 03:41:03.368080 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-iscsi\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.368280 master-0 kubenswrapper[33867]: I0219 03:41:03.368226 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-lib-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.368374 master-0 kubenswrapper[33867]: I0219 03:41:03.368350 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-nvme\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.368415 master-0 kubenswrapper[33867]: I0219 03:41:03.368286 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-brick\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.368415 master-0 kubenswrapper[33867]: I0219 03:41:03.368306 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-dev\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.368510 master-0 kubenswrapper[33867]: I0219 03:41:03.368281 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-iscsi\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.368595 master-0 kubenswrapper[33867]: I0219 03:41:03.368578 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-run\") pod 
\"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.368670 master-0 kubenswrapper[33867]: I0219 03:41:03.368656 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-lib-modules\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.371393 master-0 kubenswrapper[33867]: I0219 03:41:03.371313 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-scripts\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.373708 master-0 kubenswrapper[33867]: I0219 03:41:03.373686 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-combined-ca-bundle\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.385292 master-0 kubenswrapper[33867]: I0219 03:41:03.378940 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.385292 master-0 kubenswrapper[33867]: I0219 03:41:03.382536 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data-custom\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.385292 master-0 kubenswrapper[33867]: I0219 03:41:03.384875 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-054a4-api-0"] Feb 19 03:41:03.397811 master-0 kubenswrapper[33867]: I0219 03:41:03.387745 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.409184 master-0 kubenswrapper[33867]: I0219 03:41:03.404987 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-api-config-data" Feb 19 03:41:03.412045 master-0 kubenswrapper[33867]: I0219 03:41:03.410764 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:03.420385 master-0 kubenswrapper[33867]: I0219 03:41:03.419659 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhlzd\" (UniqueName: \"kubernetes.io/projected/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-kube-api-access-hhlzd\") pod \"cinder-054a4-backup-0\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.420641 master-0 kubenswrapper[33867]: I0219 03:41:03.420609 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:03.420877 master-0 kubenswrapper[33867]: I0219 03:41:03.420805 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-api-0"] Feb 19 03:41:03.470631 master-0 kubenswrapper[33867]: I0219 03:41:03.470525 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:03.580240 master-0 kubenswrapper[33867]: I0219 03:41:03.577410 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76f7e0ac-da68-49e2-b643-53f9c614e19d-logs\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.580240 master-0 kubenswrapper[33867]: I0219 03:41:03.577554 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data-custom\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.580240 master-0 kubenswrapper[33867]: I0219 03:41:03.577597 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76f7e0ac-da68-49e2-b643-53f9c614e19d-etc-machine-id\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.580240 master-0 kubenswrapper[33867]: I0219 03:41:03.577621 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-combined-ca-bundle\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.580240 master-0 kubenswrapper[33867]: I0219 03:41:03.577658 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.580240 master-0 kubenswrapper[33867]: I0219 03:41:03.577708 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fqb5\" (UniqueName: \"kubernetes.io/projected/76f7e0ac-da68-49e2-b643-53f9c614e19d-kube-api-access-2fqb5\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.580240 master-0 kubenswrapper[33867]: I0219 03:41:03.577762 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-scripts\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.681027 master-0 kubenswrapper[33867]: I0219 03:41:03.680960 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76f7e0ac-da68-49e2-b643-53f9c614e19d-logs\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " 
pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.681611 master-0 kubenswrapper[33867]: I0219 03:41:03.681585 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data-custom\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.681879 master-0 kubenswrapper[33867]: I0219 03:41:03.681857 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76f7e0ac-da68-49e2-b643-53f9c614e19d-etc-machine-id\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.682014 master-0 kubenswrapper[33867]: I0219 03:41:03.681994 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-combined-ca-bundle\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.682170 master-0 kubenswrapper[33867]: I0219 03:41:03.682145 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.682370 master-0 kubenswrapper[33867]: I0219 03:41:03.682350 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fqb5\" (UniqueName: \"kubernetes.io/projected/76f7e0ac-da68-49e2-b643-53f9c614e19d-kube-api-access-2fqb5\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.682552 master-0 kubenswrapper[33867]: I0219 03:41:03.682534 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-scripts\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.685485 master-0 kubenswrapper[33867]: I0219 03:41:03.684997 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76f7e0ac-da68-49e2-b643-53f9c614e19d-logs\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.686682 master-0 kubenswrapper[33867]: I0219 03:41:03.686642 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-scripts\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.688682 master-0 kubenswrapper[33867]: I0219 03:41:03.688125 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-combined-ca-bundle\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.690174 master-0 kubenswrapper[33867]: I0219 03:41:03.690130 33867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76f7e0ac-da68-49e2-b643-53f9c614e19d-etc-machine-id\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.694418 master-0 kubenswrapper[33867]: I0219 03:41:03.694347 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.696157 master-0 kubenswrapper[33867]: I0219 03:41:03.696104 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data-custom\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.717085 master-0 kubenswrapper[33867]: I0219 03:41:03.717039 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fqb5\" (UniqueName: \"kubernetes.io/projected/76f7e0ac-da68-49e2-b643-53f9c614e19d-kube-api-access-2fqb5\") pod \"cinder-054a4-api-0\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.787863 master-0 kubenswrapper[33867]: I0219 03:41:03.787780 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-api-0" Feb 19 03:41:03.949106 master-0 kubenswrapper[33867]: I0219 03:41:03.947803 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-scheduler-0"] Feb 19 03:41:04.339703 master-0 kubenswrapper[33867]: I0219 03:41:04.339636 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:41:04.408368 master-0 kubenswrapper[33867]: I0219 03:41:04.408296 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-config\") pod \"b067fa1c-719d-41db-a4be-d5d7d1125a67\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " Feb 19 03:41:04.408620 master-0 kubenswrapper[33867]: I0219 03:41:04.408549 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2n9b\" (UniqueName: \"kubernetes.io/projected/b067fa1c-719d-41db-a4be-d5d7d1125a67-kube-api-access-j2n9b\") pod \"b067fa1c-719d-41db-a4be-d5d7d1125a67\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " Feb 19 03:41:04.408768 master-0 kubenswrapper[33867]: I0219 03:41:04.408738 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-combined-ca-bundle\") pod \"b067fa1c-719d-41db-a4be-d5d7d1125a67\" (UID: \"b067fa1c-719d-41db-a4be-d5d7d1125a67\") " Feb 19 03:41:04.419674 master-0 kubenswrapper[33867]: I0219 03:41:04.419547 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b067fa1c-719d-41db-a4be-d5d7d1125a67-kube-api-access-j2n9b" (OuterVolumeSpecName: "kube-api-access-j2n9b") pod "b067fa1c-719d-41db-a4be-d5d7d1125a67" (UID: "b067fa1c-719d-41db-a4be-d5d7d1125a67"). InnerVolumeSpecName "kube-api-access-j2n9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:04.438513 master-0 kubenswrapper[33867]: I0219 03:41:04.438327 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-config" (OuterVolumeSpecName: "config") pod "b067fa1c-719d-41db-a4be-d5d7d1125a67" (UID: "b067fa1c-719d-41db-a4be-d5d7d1125a67"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:04.455690 master-0 kubenswrapper[33867]: I0219 03:41:04.455555 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b067fa1c-719d-41db-a4be-d5d7d1125a67" (UID: "b067fa1c-719d-41db-a4be-d5d7d1125a67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:04.511565 master-0 kubenswrapper[33867]: I0219 03:41:04.511159 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2n9b\" (UniqueName: \"kubernetes.io/projected/b067fa1c-719d-41db-a4be-d5d7d1125a67-kube-api-access-j2n9b\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:04.511565 master-0 kubenswrapper[33867]: I0219 03:41:04.511196 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:04.511565 master-0 kubenswrapper[33867]: I0219 03:41:04.511208 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b067fa1c-719d-41db-a4be-d5d7d1125a67-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:04.522352 master-0 kubenswrapper[33867]: I0219 03:41:04.522293 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-volume-lvm-iscsi-0"] Feb 19 03:41:04.532968 master-0 kubenswrapper[33867]: W0219 03:41:04.530455 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bf517d1_637f_48a9_b008_b0efe070ed50.slice/crio-3360bc61512356eeaa139def7e2766a8fcfad6e4077addb137e5f08b63aac2aa WatchSource:0}: Error finding container 3360bc61512356eeaa139def7e2766a8fcfad6e4077addb137e5f08b63aac2aa: Status 404 returned error can't find the container with id 3360bc61512356eeaa139def7e2766a8fcfad6e4077addb137e5f08b63aac2aa Feb 19 03:41:04.549315 master-0 kubenswrapper[33867]: I0219 03:41:04.549217 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5599dc5fdc-wpfjn"] Feb 19 03:41:04.596343 master-0 kubenswrapper[33867]: I0219 03:41:04.596246 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cwnd9" Feb 19 03:41:04.596618 master-0 kubenswrapper[33867]: I0219 03:41:04.596443 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cwnd9" event={"ID":"b067fa1c-719d-41db-a4be-d5d7d1125a67","Type":"ContainerDied","Data":"594c5f165469392c88eb7980172d433721359d7c3dbbd427d70addd011d0c09f"} Feb 19 03:41:04.596618 master-0 kubenswrapper[33867]: I0219 03:41:04.596483 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="594c5f165469392c88eb7980172d433721359d7c3dbbd427d70addd011d0c09f" Feb 19 03:41:04.600527 master-0 kubenswrapper[33867]: I0219 03:41:04.600479 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" event={"ID":"4bf517d1-637f-48a9-b008-b0efe070ed50","Type":"ContainerStarted","Data":"3360bc61512356eeaa139def7e2766a8fcfad6e4077addb137e5f08b63aac2aa"} Feb 19 03:41:04.602505 master-0 kubenswrapper[33867]: I0219 03:41:04.602484 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" event={"ID":"37c1b200-08de-46e3-9588-20ee09a017da","Type":"ContainerStarted","Data":"ca98cb2c7378f75e020fad438b07084af147aa04e02b8837ed54d13688c61464"} Feb 19 03:41:04.605148 master-0 kubenswrapper[33867]: I0219 03:41:04.604650 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-scheduler-0" event={"ID":"87010165-a8cc-43e1-b9b6-af44f39f0c46","Type":"ContainerStarted","Data":"0587704518b65e5f839a1681e4886be8e3b63fac7e2ab6b054a7f84768ea8171"} Feb 19 03:41:04.682376 master-0 kubenswrapper[33867]: I0219 03:41:04.682312 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-api-0"] Feb 19 03:41:04.702766 master-0 kubenswrapper[33867]: I0219 03:41:04.702641 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-backup-0"] Feb 19 03:41:04.834505 master-0 kubenswrapper[33867]: I0219 03:41:04.834434 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5599dc5fdc-wpfjn"] Feb 19 03:41:04.863355 master-0 kubenswrapper[33867]: I0219 03:41:04.860327 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8f98b7745-89hd2"] Feb 19 03:41:04.863355 master-0 kubenswrapper[33867]: E0219 03:41:04.861072 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b067fa1c-719d-41db-a4be-d5d7d1125a67" containerName="neutron-db-sync" Feb 19 03:41:04.863355 master-0 kubenswrapper[33867]: I0219 03:41:04.861086 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b067fa1c-719d-41db-a4be-d5d7d1125a67" containerName="neutron-db-sync" Feb 19 03:41:04.863355 master-0 kubenswrapper[33867]: I0219 03:41:04.861422 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b067fa1c-719d-41db-a4be-d5d7d1125a67" containerName="neutron-db-sync" Feb 19 03:41:04.863355 master-0 kubenswrapper[33867]: I0219 03:41:04.862641 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:04.905273 master-0 kubenswrapper[33867]: I0219 03:41:04.903073 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f98b7745-89hd2"] Feb 19 03:41:04.927637 master-0 kubenswrapper[33867]: I0219 03:41:04.926523 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-nb\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:04.927637 master-0 kubenswrapper[33867]: I0219 03:41:04.926597 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-config\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:04.927637 master-0 kubenswrapper[33867]: I0219 03:41:04.926756 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-sb\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:04.927637 master-0 kubenswrapper[33867]: I0219 03:41:04.926861 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6j85\" (UniqueName: \"kubernetes.io/projected/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-kube-api-access-t6j85\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:04.927637 master-0 kubenswrapper[33867]: I0219 03:41:04.926908 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-svc\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:04.927637 master-0 kubenswrapper[33867]: I0219 03:41:04.926929 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-swift-storage-0\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.029803 master-0 kubenswrapper[33867]: I0219 03:41:05.028694 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-sb\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.029803 master-0 kubenswrapper[33867]: I0219 03:41:05.028813 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6j85\" (UniqueName: \"kubernetes.io/projected/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-kube-api-access-t6j85\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " 
pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.029803 master-0 kubenswrapper[33867]: I0219 03:41:05.028874 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-svc\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.029803 master-0 kubenswrapper[33867]: I0219 03:41:05.028892 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-swift-storage-0\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.029803 master-0 kubenswrapper[33867]: I0219 03:41:05.028970 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-nb\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.029803 master-0 kubenswrapper[33867]: I0219 03:41:05.028989 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-config\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.031300 master-0 kubenswrapper[33867]: I0219 03:41:05.031226 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-svc\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.031814 master-0 kubenswrapper[33867]: I0219 03:41:05.031767 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-nb\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.032096 master-0 kubenswrapper[33867]: I0219 03:41:05.032054 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-sb\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.032624 master-0 kubenswrapper[33867]: I0219 03:41:05.032580 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-config\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.034359 master-0 kubenswrapper[33867]: I0219 03:41:05.034294 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-swift-storage-0\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " 
pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.056041 master-0 kubenswrapper[33867]: I0219 03:41:05.055962 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6j85\" (UniqueName: \"kubernetes.io/projected/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-kube-api-access-t6j85\") pod \"dnsmasq-dns-8f98b7745-89hd2\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.131223 master-0 kubenswrapper[33867]: I0219 03:41:05.131167 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8bf57b44-qh2fj"] Feb 19 03:41:05.138704 master-0 kubenswrapper[33867]: I0219 03:41:05.133766 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.138704 master-0 kubenswrapper[33867]: I0219 03:41:05.138341 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 03:41:05.138704 master-0 kubenswrapper[33867]: I0219 03:41:05.138379 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 03:41:05.138704 master-0 kubenswrapper[33867]: I0219 03:41:05.138556 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 03:41:05.212893 master-0 kubenswrapper[33867]: I0219 03:41:05.210558 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8bf57b44-qh2fj"] Feb 19 03:41:05.227935 master-0 kubenswrapper[33867]: I0219 03:41:05.226818 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:05.238307 master-0 kubenswrapper[33867]: I0219 03:41:05.238083 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-ovndb-tls-certs\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.238307 master-0 kubenswrapper[33867]: I0219 03:41:05.238181 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-combined-ca-bundle\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.238588 master-0 kubenswrapper[33867]: I0219 03:41:05.238339 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-httpd-config\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.239023 master-0 kubenswrapper[33867]: I0219 03:41:05.238821 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmhc9\" (UniqueName: \"kubernetes.io/projected/b23c38ff-0149-4b73-a4dd-f6aae99512d0-kube-api-access-fmhc9\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.239023 master-0 kubenswrapper[33867]: I0219 03:41:05.238902 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-config\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.342497 master-0 kubenswrapper[33867]: I0219 03:41:05.341712 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmhc9\" (UniqueName: \"kubernetes.io/projected/b23c38ff-0149-4b73-a4dd-f6aae99512d0-kube-api-access-fmhc9\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.342497 master-0 kubenswrapper[33867]: I0219 03:41:05.341808 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-config\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.342497 master-0 kubenswrapper[33867]: I0219 03:41:05.341878 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-ovndb-tls-certs\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.342497 master-0 kubenswrapper[33867]: I0219 03:41:05.341909 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-combined-ca-bundle\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.342497 master-0 kubenswrapper[33867]: I0219 03:41:05.341948 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-httpd-config\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.350925 master-0 kubenswrapper[33867]: I0219 03:41:05.348371 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-config\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.350925 master-0 kubenswrapper[33867]: I0219 03:41:05.348749 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-ovndb-tls-certs\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.353071 master-0 kubenswrapper[33867]: I0219 03:41:05.352877 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-combined-ca-bundle\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.354522 master-0 kubenswrapper[33867]: I0219 03:41:05.354457 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-httpd-config\") pod 
\"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.365788 master-0 kubenswrapper[33867]: I0219 03:41:05.364370 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmhc9\" (UniqueName: \"kubernetes.io/projected/b23c38ff-0149-4b73-a4dd-f6aae99512d0-kube-api-access-fmhc9\") pod \"neutron-8bf57b44-qh2fj\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.464532 master-0 kubenswrapper[33867]: I0219 03:41:05.462915 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:05.734445 master-0 kubenswrapper[33867]: I0219 03:41:05.734072 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-api-0" event={"ID":"76f7e0ac-da68-49e2-b643-53f9c614e19d","Type":"ContainerStarted","Data":"6f9001c4038c200f6ff3d559aed05cc0d03b3c2bebd20d5d3f5acd793842c7e2"} Feb 19 03:41:05.774586 master-0 kubenswrapper[33867]: I0219 03:41:05.774490 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-054a4-api-0"] Feb 19 03:41:05.828515 master-0 kubenswrapper[33867]: I0219 03:41:05.827331 33867 generic.go:334] "Generic (PLEG): container finished" podID="37c1b200-08de-46e3-9588-20ee09a017da" containerID="863b9f430ce1c084fcd81a7bad8a54a044d387d446f892106d039a77765f9290" exitCode=0 Feb 19 03:41:05.828515 master-0 kubenswrapper[33867]: I0219 03:41:05.827424 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" event={"ID":"37c1b200-08de-46e3-9588-20ee09a017da","Type":"ContainerDied","Data":"863b9f430ce1c084fcd81a7bad8a54a044d387d446f892106d039a77765f9290"} Feb 19 03:41:05.834055 master-0 kubenswrapper[33867]: I0219 03:41:05.834012 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-backup-0" event={"ID":"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd","Type":"ContainerStarted","Data":"4e0f58a532f0e71f9a3ace27decf6ae90427722aac600f768dc8bb2e441c8605"} Feb 19 03:41:06.127752 master-0 kubenswrapper[33867]: I0219 03:41:06.127055 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f98b7745-89hd2"] Feb 19 03:41:06.483764 master-0 kubenswrapper[33867]: W0219 03:41:06.483226 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb23c38ff_0149_4b73_a4dd_f6aae99512d0.slice/crio-2b03a2337cbb270548325068ce0823a9dd6ac89d2a86526ce6e114e6df4054c6 WatchSource:0}: Error finding container 2b03a2337cbb270548325068ce0823a9dd6ac89d2a86526ce6e114e6df4054c6: Status 404 returned error can't find the container with id 2b03a2337cbb270548325068ce0823a9dd6ac89d2a86526ce6e114e6df4054c6 Feb 19 03:41:06.491176 master-0 kubenswrapper[33867]: I0219 03:41:06.491118 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8bf57b44-qh2fj"] Feb 19 03:41:06.704887 master-0 kubenswrapper[33867]: I0219 03:41:06.704557 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:06.860513 master-0 kubenswrapper[33867]: I0219 03:41:06.855041 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" event={"ID":"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84","Type":"ContainerStarted","Data":"010e0e66bee44ab3f4353950152ea88e9a7b83d09d2a055e8983335b4dfc6d79"} Feb 19 03:41:06.860513 master-0 kubenswrapper[33867]: I0219 03:41:06.856914 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" event={"ID":"4bf517d1-637f-48a9-b008-b0efe070ed50","Type":"ContainerStarted","Data":"fad4c7c608885ad86cfe4ba3d329b50d6c9fba2b32b05deaa92f84daae1fac83"} Feb 19 03:41:06.860513 master-0 kubenswrapper[33867]: I0219 03:41:06.859596 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-api-0" event={"ID":"76f7e0ac-da68-49e2-b643-53f9c614e19d","Type":"ContainerStarted","Data":"478c31c276f9022a5870fc83f58d6f9fdcecb2fa0129b84e9b9d9edd9a1e3c2e"} Feb 19 03:41:06.865333 master-0 kubenswrapper[33867]: I0219 03:41:06.862520 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" event={"ID":"37c1b200-08de-46e3-9588-20ee09a017da","Type":"ContainerDied","Data":"ca98cb2c7378f75e020fad438b07084af147aa04e02b8837ed54d13688c61464"} Feb 19 03:41:06.865333 master-0 kubenswrapper[33867]: I0219 03:41:06.862579 33867 scope.go:117] "RemoveContainer" containerID="863b9f430ce1c084fcd81a7bad8a54a044d387d446f892106d039a77765f9290" Feb 19 03:41:06.865333 master-0 kubenswrapper[33867]: I0219 03:41:06.862784 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5599dc5fdc-wpfjn" Feb 19 03:41:06.881517 master-0 kubenswrapper[33867]: I0219 03:41:06.878590 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-swift-storage-0\") pod \"37c1b200-08de-46e3-9588-20ee09a017da\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " Feb 19 03:41:06.881517 master-0 kubenswrapper[33867]: I0219 03:41:06.878949 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvn6b\" (UniqueName: \"kubernetes.io/projected/37c1b200-08de-46e3-9588-20ee09a017da-kube-api-access-zvn6b\") pod \"37c1b200-08de-46e3-9588-20ee09a017da\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " Feb 19 03:41:06.881517 master-0 kubenswrapper[33867]: I0219 03:41:06.879023 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-nb\") pod \"37c1b200-08de-46e3-9588-20ee09a017da\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " Feb 19 03:41:06.881517 master-0 kubenswrapper[33867]: I0219 03:41:06.879090 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-svc\") pod \"37c1b200-08de-46e3-9588-20ee09a017da\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " Feb 19 03:41:06.881517 master-0 kubenswrapper[33867]: I0219 03:41:06.879135 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-sb\") pod 
\"37c1b200-08de-46e3-9588-20ee09a017da\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " Feb 19 03:41:06.881517 master-0 kubenswrapper[33867]: I0219 03:41:06.879180 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-config\") pod \"37c1b200-08de-46e3-9588-20ee09a017da\" (UID: \"37c1b200-08de-46e3-9588-20ee09a017da\") " Feb 19 03:41:06.881517 master-0 kubenswrapper[33867]: I0219 03:41:06.879836 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8bf57b44-qh2fj" event={"ID":"b23c38ff-0149-4b73-a4dd-f6aae99512d0","Type":"ContainerStarted","Data":"2b03a2337cbb270548325068ce0823a9dd6ac89d2a86526ce6e114e6df4054c6"} Feb 19 03:41:06.883533 master-0 kubenswrapper[33867]: I0219 03:41:06.883461 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c1b200-08de-46e3-9588-20ee09a017da-kube-api-access-zvn6b" (OuterVolumeSpecName: "kube-api-access-zvn6b") pod "37c1b200-08de-46e3-9588-20ee09a017da" (UID: "37c1b200-08de-46e3-9588-20ee09a017da"). InnerVolumeSpecName "kube-api-access-zvn6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:06.983371 master-0 kubenswrapper[33867]: I0219 03:41:06.983309 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvn6b\" (UniqueName: \"kubernetes.io/projected/37c1b200-08de-46e3-9588-20ee09a017da-kube-api-access-zvn6b\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:07.264195 master-0 kubenswrapper[33867]: I0219 03:41:07.263767 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "37c1b200-08de-46e3-9588-20ee09a017da" (UID: "37c1b200-08de-46e3-9588-20ee09a017da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:07.270384 master-0 kubenswrapper[33867]: I0219 03:41:07.268964 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-config" (OuterVolumeSpecName: "config") pod "37c1b200-08de-46e3-9588-20ee09a017da" (UID: "37c1b200-08de-46e3-9588-20ee09a017da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:07.292642 master-0 kubenswrapper[33867]: I0219 03:41:07.291009 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37c1b200-08de-46e3-9588-20ee09a017da" (UID: "37c1b200-08de-46e3-9588-20ee09a017da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:07.301756 master-0 kubenswrapper[33867]: I0219 03:41:07.293394 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37c1b200-08de-46e3-9588-20ee09a017da" (UID: "37c1b200-08de-46e3-9588-20ee09a017da"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:07.301756 master-0 kubenswrapper[33867]: I0219 03:41:07.296091 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37c1b200-08de-46e3-9588-20ee09a017da" (UID: "37c1b200-08de-46e3-9588-20ee09a017da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:07.312041 master-0 kubenswrapper[33867]: I0219 03:41:07.308839 33867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:07.312041 master-0 kubenswrapper[33867]: I0219 03:41:07.308901 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:07.327301 master-0 kubenswrapper[33867]: I0219 03:41:07.318843 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:07.327301 master-0 kubenswrapper[33867]: I0219 03:41:07.318882 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:07.432289 master-0 kubenswrapper[33867]: I0219 03:41:07.428096 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37c1b200-08de-46e3-9588-20ee09a017da-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:07.812133 master-0 kubenswrapper[33867]: I0219 03:41:07.810685 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5599dc5fdc-wpfjn"] Feb 19 03:41:07.822110 master-0 kubenswrapper[33867]: I0219 03:41:07.821611 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5599dc5fdc-wpfjn"] Feb 19 03:41:07.918349 master-0 kubenswrapper[33867]: I0219 03:41:07.916378 33867 generic.go:334] "Generic (PLEG): container finished" podID="3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" containerID="0839c567c7e8c03768a3a28f9b1a9866b92bb3a5396ee03d976389bd71819b2a" exitCode=0 Feb 19 03:41:07.918349 master-0 kubenswrapper[33867]: I0219 03:41:07.916470 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" event={"ID":"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84","Type":"ContainerDied","Data":"0839c567c7e8c03768a3a28f9b1a9866b92bb3a5396ee03d976389bd71819b2a"} Feb 19 03:41:07.950403 master-0 kubenswrapper[33867]: I0219 03:41:07.943904 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" event={"ID":"4bf517d1-637f-48a9-b008-b0efe070ed50","Type":"ContainerStarted","Data":"31b94dbc9e66521577f65e7fa5e33f4cf0b24405c1213eb016faddf9577c1f2d"} Feb 19 03:41:07.966295 master-0 kubenswrapper[33867]: I0219 03:41:07.963819 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-api-0" event={"ID":"76f7e0ac-da68-49e2-b643-53f9c614e19d","Type":"ContainerStarted","Data":"02d8c4a7ba4a68827423bebed8062278759249b93f8d9e239c301d82506a22cf"} Feb 19 03:41:07.966295 
master-0 kubenswrapper[33867]: I0219 03:41:07.965028 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-054a4-api-0" Feb 19 03:41:07.966295 master-0 kubenswrapper[33867]: I0219 03:41:07.964946 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-054a4-api-0" podUID="76f7e0ac-da68-49e2-b643-53f9c614e19d" containerName="cinder-api" containerID="cri-o://02d8c4a7ba4a68827423bebed8062278759249b93f8d9e239c301d82506a22cf" gracePeriod=30 Feb 19 03:41:07.966295 master-0 kubenswrapper[33867]: I0219 03:41:07.964653 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-054a4-api-0" podUID="76f7e0ac-da68-49e2-b643-53f9c614e19d" containerName="cinder-054a4-api-log" containerID="cri-o://478c31c276f9022a5870fc83f58d6f9fdcecb2fa0129b84e9b9d9edd9a1e3c2e" gracePeriod=30 Feb 19 03:41:07.998289 master-0 kubenswrapper[33867]: I0219 03:41:07.996929 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-backup-0" event={"ID":"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd","Type":"ContainerStarted","Data":"0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf"} Feb 19 03:41:07.998289 master-0 kubenswrapper[33867]: I0219 03:41:07.997013 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-backup-0" event={"ID":"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd","Type":"ContainerStarted","Data":"daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584"} Feb 19 03:41:08.005056 master-0 kubenswrapper[33867]: I0219 03:41:08.001578 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8bf57b44-qh2fj" event={"ID":"b23c38ff-0149-4b73-a4dd-f6aae99512d0","Type":"ContainerStarted","Data":"a14fd526c0f0bc6abd26f9706021df407bb2614e997ea965690fdeaef153bf7d"} Feb 19 03:41:08.005056 master-0 kubenswrapper[33867]: I0219 03:41:08.001617 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8bf57b44-qh2fj" event={"ID":"b23c38ff-0149-4b73-a4dd-f6aae99512d0","Type":"ContainerStarted","Data":"b1a122d0f945bf5254ddc70fbcf28ed8ce928b8999ecb30e5f20bd8a2a10bc62"} Feb 19 03:41:08.005056 master-0 kubenswrapper[33867]: I0219 03:41:08.002371 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:08.010284 master-0 kubenswrapper[33867]: I0219 03:41:08.008830 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" podStartSLOduration=4.894647615 podStartE2EDuration="6.008790435s" podCreationTimestamp="2026-02-19 03:41:02 +0000 UTC" firstStartedPulling="2026-02-19 03:41:04.537685148 +0000 UTC m=+1069.834355759" lastFinishedPulling="2026-02-19 03:41:05.651827968 +0000 UTC m=+1070.948498579" observedRunningTime="2026-02-19 03:41:07.987919724 +0000 UTC m=+1073.284590335" watchObservedRunningTime="2026-02-19 03:41:08.008790435 +0000 UTC m=+1073.305461046" Feb 19 03:41:08.020871 master-0 kubenswrapper[33867]: I0219 03:41:08.020430 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-scheduler-0" event={"ID":"87010165-a8cc-43e1-b9b6-af44f39f0c46","Type":"ContainerStarted","Data":"99a27c5571bd7a78772f28a63d27ff56a44e8e943947da94d146e726c617c2f1"} Feb 19 03:41:08.046300 master-0 kubenswrapper[33867]: I0219 03:41:08.045395 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-054a4-api-0" podStartSLOduration=5.045375871 
podStartE2EDuration="5.045375871s" podCreationTimestamp="2026-02-19 03:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:08.014486236 +0000 UTC m=+1073.311156847" watchObservedRunningTime="2026-02-19 03:41:08.045375871 +0000 UTC m=+1073.342046482" Feb 19 03:41:08.066298 master-0 kubenswrapper[33867]: I0219 03:41:08.059149 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-054a4-backup-0" podStartSLOduration=4.062503218 podStartE2EDuration="5.05912083s" podCreationTimestamp="2026-02-19 03:41:03 +0000 UTC" firstStartedPulling="2026-02-19 03:41:04.963837356 +0000 UTC m=+1070.260507967" lastFinishedPulling="2026-02-19 03:41:05.960454968 +0000 UTC m=+1071.257125579" observedRunningTime="2026-02-19 03:41:08.051853574 +0000 UTC m=+1073.348524185" watchObservedRunningTime="2026-02-19 03:41:08.05912083 +0000 UTC m=+1073.355791441" Feb 19 03:41:08.122506 master-0 kubenswrapper[33867]: I0219 03:41:08.120154 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8bf57b44-qh2fj" podStartSLOduration=3.120131768 podStartE2EDuration="3.120131768s" podCreationTimestamp="2026-02-19 03:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:08.10324608 +0000 UTC m=+1073.399916691" watchObservedRunningTime="2026-02-19 03:41:08.120131768 +0000 UTC m=+1073.416802379" Feb 19 03:41:08.423284 master-0 kubenswrapper[33867]: I0219 03:41:08.421155 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:08.472496 master-0 kubenswrapper[33867]: I0219 03:41:08.472082 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:08.973424 master-0 kubenswrapper[33867]: I0219 03:41:08.973252 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c1b200-08de-46e3-9588-20ee09a017da" path="/var/lib/kubelet/pods/37c1b200-08de-46e3-9588-20ee09a017da/volumes" Feb 19 03:41:09.037213 master-0 kubenswrapper[33867]: I0219 03:41:09.037122 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-scheduler-0" event={"ID":"87010165-a8cc-43e1-b9b6-af44f39f0c46","Type":"ContainerStarted","Data":"5a5cf786965b4af8c0d923c5431217a2fd76231f134891dc6526a581c0b307df"} Feb 19 03:41:09.044129 master-0 kubenswrapper[33867]: I0219 03:41:09.044066 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" event={"ID":"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84","Type":"ContainerStarted","Data":"bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5"} Feb 19 03:41:09.044580 master-0 kubenswrapper[33867]: I0219 03:41:09.044564 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:09.049289 master-0 kubenswrapper[33867]: I0219 03:41:09.049004 33867 generic.go:334] "Generic (PLEG): container finished" podID="76f7e0ac-da68-49e2-b643-53f9c614e19d" containerID="478c31c276f9022a5870fc83f58d6f9fdcecb2fa0129b84e9b9d9edd9a1e3c2e" exitCode=143 Feb 19 03:41:09.049533 master-0 kubenswrapper[33867]: I0219 03:41:09.049444 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-api-0" 
event={"ID":"76f7e0ac-da68-49e2-b643-53f9c614e19d","Type":"ContainerDied","Data":"478c31c276f9022a5870fc83f58d6f9fdcecb2fa0129b84e9b9d9edd9a1e3c2e"} Feb 19 03:41:09.128083 master-0 kubenswrapper[33867]: I0219 03:41:09.127986 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-054a4-scheduler-0" podStartSLOduration=6.144781726 podStartE2EDuration="7.127956758s" podCreationTimestamp="2026-02-19 03:41:02 +0000 UTC" firstStartedPulling="2026-02-19 03:41:04.005364824 +0000 UTC m=+1069.302035435" lastFinishedPulling="2026-02-19 03:41:04.988539866 +0000 UTC m=+1070.285210467" observedRunningTime="2026-02-19 03:41:09.069467521 +0000 UTC m=+1074.366138132" watchObservedRunningTime="2026-02-19 03:41:09.127956758 +0000 UTC m=+1074.424627369" Feb 19 03:41:09.142299 master-0 kubenswrapper[33867]: I0219 03:41:09.138635 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" podStartSLOduration=5.138613269 podStartE2EDuration="5.138613269s" podCreationTimestamp="2026-02-19 03:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:09.110939786 +0000 UTC m=+1074.407610397" watchObservedRunningTime="2026-02-19 03:41:09.138613269 +0000 UTC m=+1074.435283880" Feb 19 03:41:09.216816 master-0 kubenswrapper[33867]: I0219 03:41:09.216737 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-747c56bd5-sdd55"] Feb 19 03:41:09.217955 master-0 kubenswrapper[33867]: E0219 03:41:09.217935 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c1b200-08de-46e3-9588-20ee09a017da" containerName="init" Feb 19 03:41:09.218037 master-0 kubenswrapper[33867]: I0219 03:41:09.218026 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c1b200-08de-46e3-9588-20ee09a017da" containerName="init" Feb 19 03:41:09.218425 master-0 kubenswrapper[33867]: I0219 03:41:09.218411 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c1b200-08de-46e3-9588-20ee09a017da" containerName="init" Feb 19 03:41:09.220010 master-0 kubenswrapper[33867]: I0219 03:41:09.219991 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.225033 master-0 kubenswrapper[33867]: I0219 03:41:09.222915 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 03:41:09.229277 master-0 kubenswrapper[33867]: I0219 03:41:09.226092 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 03:41:09.246677 master-0 kubenswrapper[33867]: I0219 03:41:09.242570 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-747c56bd5-sdd55"] Feb 19 03:41:09.326295 master-0 kubenswrapper[33867]: I0219 03:41:09.325945 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-ovndb-tls-certs\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.326295 master-0 kubenswrapper[33867]: I0219 03:41:09.326026 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-internal-tls-certs\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.326295 master-0 kubenswrapper[33867]: I0219 03:41:09.326074 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4skc\" (UniqueName: \"kubernetes.io/projected/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-kube-api-access-f4skc\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.326295 master-0 kubenswrapper[33867]: I0219 03:41:09.326124 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-combined-ca-bundle\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.326295 master-0 kubenswrapper[33867]: I0219 03:41:09.326180 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-public-tls-certs\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.336320 master-0 kubenswrapper[33867]: I0219 03:41:09.331199 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-httpd-config\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.336320 master-0 kubenswrapper[33867]: I0219 03:41:09.331371 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-config\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.434288 master-0 kubenswrapper[33867]: I0219 
03:41:09.434158 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4skc\" (UniqueName: \"kubernetes.io/projected/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-kube-api-access-f4skc\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.434288 master-0 kubenswrapper[33867]: I0219 03:41:09.434281 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-combined-ca-bundle\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.434696 master-0 kubenswrapper[33867]: I0219 03:41:09.434345 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-public-tls-certs\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.434696 master-0 kubenswrapper[33867]: I0219 03:41:09.434411 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-httpd-config\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.434696 master-0 kubenswrapper[33867]: I0219 03:41:09.434443 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-config\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.434696 master-0 kubenswrapper[33867]: I0219 03:41:09.434602 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-ovndb-tls-certs\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.434696 master-0 kubenswrapper[33867]: I0219 03:41:09.434637 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-internal-tls-certs\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.439675 master-0 kubenswrapper[33867]: I0219 03:41:09.438371 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-internal-tls-certs\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.439675 master-0 kubenswrapper[33867]: I0219 03:41:09.438474 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-httpd-config\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.442034 master-0 kubenswrapper[33867]: I0219 03:41:09.441998 33867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-combined-ca-bundle\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.444291 master-0 kubenswrapper[33867]: I0219 03:41:09.442328 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-config\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.448824 master-0 kubenswrapper[33867]: I0219 03:41:09.444641 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-public-tls-certs\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.448824 master-0 kubenswrapper[33867]: I0219 03:41:09.447143 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-ovndb-tls-certs\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.457286 master-0 kubenswrapper[33867]: I0219 03:41:09.455800 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4skc\" (UniqueName: \"kubernetes.io/projected/b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d-kube-api-access-f4skc\") pod \"neutron-747c56bd5-sdd55\" (UID: \"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d\") " pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:09.554330 master-0 kubenswrapper[33867]: I0219 03:41:09.554139 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:10.411335 master-0 kubenswrapper[33867]: I0219 03:41:10.405802 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-747c56bd5-sdd55"] Feb 19 03:41:11.075946 master-0 kubenswrapper[33867]: I0219 03:41:11.075884 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-747c56bd5-sdd55" event={"ID":"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d","Type":"ContainerStarted","Data":"e56bd0908bb443af5b8616d21b86433ecb1047254195b6e72cd2e078dfeed761"} Feb 19 03:41:11.075946 master-0 kubenswrapper[33867]: I0219 03:41:11.075942 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-747c56bd5-sdd55" event={"ID":"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d","Type":"ContainerStarted","Data":"11985d75471d0cb3e87225dc0bdf6ca45a312d783ec4c6d4821548b72729fb98"} Feb 19 03:41:11.075946 master-0 kubenswrapper[33867]: I0219 03:41:11.075955 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-747c56bd5-sdd55" event={"ID":"b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d","Type":"ContainerStarted","Data":"1fe2f89c05dc729b4a49e64c4675c38b12ebb60616b60f4200271ce748cddfa5"} Feb 19 03:41:11.076309 master-0 kubenswrapper[33867]: I0219 03:41:11.076082 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:11.118588 master-0 kubenswrapper[33867]: I0219 03:41:11.118472 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-747c56bd5-sdd55" podStartSLOduration=2.118439456 podStartE2EDuration="2.118439456s" podCreationTimestamp="2026-02-19 03:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:11.108053252 +0000 UTC m=+1076.404723863" watchObservedRunningTime="2026-02-19 03:41:11.118439456 +0000 UTC m=+1076.415110067" Feb 19 03:41:13.277606 master-0 kubenswrapper[33867]: I0219 03:41:13.277540 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:13.534435 master-0 kubenswrapper[33867]: I0219 03:41:13.534268 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:13.676565 master-0 kubenswrapper[33867]: I0219 03:41:13.676507 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:13.745586 master-0 kubenswrapper[33867]: I0219 03:41:13.745505 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-054a4-volume-lvm-iscsi-0"] Feb 19 03:41:13.768725 master-0 kubenswrapper[33867]: I0219 03:41:13.768103 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:13.853584 master-0 kubenswrapper[33867]: I0219 03:41:13.853393 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-054a4-backup-0"] Feb 19 03:41:14.138889 master-0 kubenswrapper[33867]: I0219 03:41:14.138809 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" podUID="4bf517d1-637f-48a9-b008-b0efe070ed50" containerName="cinder-volume" containerID="cri-o://fad4c7c608885ad86cfe4ba3d329b50d6c9fba2b32b05deaa92f84daae1fac83" gracePeriod=30 Feb 19 03:41:14.139142 master-0 kubenswrapper[33867]: I0219 03:41:14.139113 
33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-054a4-backup-0" podUID="55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" containerName="cinder-backup" containerID="cri-o://0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf" gracePeriod=30 Feb 19 03:41:14.139366 master-0 kubenswrapper[33867]: I0219 03:41:14.139199 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" podUID="4bf517d1-637f-48a9-b008-b0efe070ed50" containerName="probe" containerID="cri-o://31b94dbc9e66521577f65e7fa5e33f4cf0b24405c1213eb016faddf9577c1f2d" gracePeriod=30 Feb 19 03:41:14.139463 master-0 kubenswrapper[33867]: I0219 03:41:14.139380 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-054a4-backup-0" podUID="55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" containerName="probe" containerID="cri-o://daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584" gracePeriod=30 Feb 19 03:41:14.228049 master-0 kubenswrapper[33867]: I0219 03:41:14.227799 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-054a4-scheduler-0"] Feb 19 03:41:15.177521 master-0 kubenswrapper[33867]: I0219 03:41:15.168374 33867 generic.go:334] "Generic (PLEG): container finished" podID="55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" containerID="daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584" exitCode=0 Feb 19 03:41:15.177521 master-0 kubenswrapper[33867]: I0219 03:41:15.168470 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-backup-0" event={"ID":"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd","Type":"ContainerDied","Data":"daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584"} Feb 19 03:41:15.177521 master-0 kubenswrapper[33867]: I0219 03:41:15.173195 33867 generic.go:334] "Generic (PLEG): container finished" podID="52ede5f4-a9ae-46ab-a72c-6575bb04274e" containerID="d4d68324cbf3d5d95dbb06b27c1427136717b42f247eef3684b268e8fc5d9241" exitCode=0 Feb 19 03:41:15.177521 master-0 kubenswrapper[33867]: I0219 03:41:15.173346 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-lr9n7" event={"ID":"52ede5f4-a9ae-46ab-a72c-6575bb04274e","Type":"ContainerDied","Data":"d4d68324cbf3d5d95dbb06b27c1427136717b42f247eef3684b268e8fc5d9241"} Feb 19 03:41:15.182496 master-0 kubenswrapper[33867]: I0219 03:41:15.181692 33867 generic.go:334] "Generic (PLEG): container finished" podID="4bf517d1-637f-48a9-b008-b0efe070ed50" containerID="31b94dbc9e66521577f65e7fa5e33f4cf0b24405c1213eb016faddf9577c1f2d" exitCode=0 Feb 19 03:41:15.182496 master-0 kubenswrapper[33867]: I0219 03:41:15.181747 33867 generic.go:334] "Generic (PLEG): container finished" podID="4bf517d1-637f-48a9-b008-b0efe070ed50" containerID="fad4c7c608885ad86cfe4ba3d329b50d6c9fba2b32b05deaa92f84daae1fac83" exitCode=0 Feb 19 03:41:15.182496 master-0 kubenswrapper[33867]: I0219 03:41:15.181793 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" event={"ID":"4bf517d1-637f-48a9-b008-b0efe070ed50","Type":"ContainerDied","Data":"31b94dbc9e66521577f65e7fa5e33f4cf0b24405c1213eb016faddf9577c1f2d"} Feb 19 03:41:15.182496 master-0 kubenswrapper[33867]: I0219 03:41:15.181946 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" 
event={"ID":"4bf517d1-637f-48a9-b008-b0efe070ed50","Type":"ContainerDied","Data":"fad4c7c608885ad86cfe4ba3d329b50d6c9fba2b32b05deaa92f84daae1fac83"} Feb 19 03:41:15.182496 master-0 kubenswrapper[33867]: I0219 03:41:15.182123 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-054a4-scheduler-0" podUID="87010165-a8cc-43e1-b9b6-af44f39f0c46" containerName="cinder-scheduler" containerID="cri-o://99a27c5571bd7a78772f28a63d27ff56a44e8e943947da94d146e726c617c2f1" gracePeriod=30 Feb 19 03:41:15.182496 master-0 kubenswrapper[33867]: I0219 03:41:15.182176 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-054a4-scheduler-0" podUID="87010165-a8cc-43e1-b9b6-af44f39f0c46" containerName="probe" containerID="cri-o://5a5cf786965b4af8c0d923c5431217a2fd76231f134891dc6526a581c0b307df" gracePeriod=30 Feb 19 03:41:15.240283 master-0 kubenswrapper[33867]: I0219 03:41:15.228504 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:15.334413 master-0 kubenswrapper[33867]: I0219 03:41:15.322522 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-576bc499-6mdnt"] Feb 19 03:41:15.334413 master-0 kubenswrapper[33867]: I0219 03:41:15.322828 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-576bc499-6mdnt" podUID="d354f238-452a-4dd5-b466-5a88508156c7" containerName="dnsmasq-dns" containerID="cri-o://4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968" gracePeriod=10 Feb 19 03:41:15.598632 master-0 kubenswrapper[33867]: I0219 03:41:15.586322 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:15.686671 master-0 kubenswrapper[33867]: I0219 03:41:15.686605 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-lib-cinder\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.686671 master-0 kubenswrapper[33867]: I0219 03:41:15.686678 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-sys\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.686942 master-0 kubenswrapper[33867]: I0219 03:41:15.686726 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data-custom\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.686942 master-0 kubenswrapper[33867]: I0219 03:41:15.686759 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.686942 master-0 kubenswrapper[33867]: I0219 03:41:15.686870 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-dev\") pod 
\"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.686942 master-0 kubenswrapper[33867]: I0219 03:41:15.686909 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-brick\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.687074 master-0 kubenswrapper[33867]: I0219 03:41:15.686954 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-run\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.687223 master-0 kubenswrapper[33867]: I0219 03:41:15.687182 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-combined-ca-bundle\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.687299 master-0 kubenswrapper[33867]: I0219 03:41:15.687270 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-cinder\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.687334 master-0 kubenswrapper[33867]: I0219 03:41:15.687314 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-nvme\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.687432 master-0 kubenswrapper[33867]: I0219 03:41:15.687372 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnv9m\" (UniqueName: \"kubernetes.io/projected/4bf517d1-637f-48a9-b008-b0efe070ed50-kube-api-access-wnv9m\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.687488 master-0 kubenswrapper[33867]: I0219 03:41:15.687440 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-lib-modules\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.687593 master-0 kubenswrapper[33867]: I0219 03:41:15.687569 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-machine-id\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.687638 master-0 kubenswrapper[33867]: I0219 03:41:15.687617 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-scripts\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.687672 master-0 kubenswrapper[33867]: I0219 03:41:15.687648 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" 
(UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-iscsi\") pod \"4bf517d1-637f-48a9-b008-b0efe070ed50\" (UID: \"4bf517d1-637f-48a9-b008-b0efe070ed50\") " Feb 19 03:41:15.687974 master-0 kubenswrapper[33867]: I0219 03:41:15.687893 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-run" (OuterVolumeSpecName: "run") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:15.688041 master-0 kubenswrapper[33867]: I0219 03:41:15.688013 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:15.688674 master-0 kubenswrapper[33867]: I0219 03:41:15.688640 33867 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.688674 master-0 kubenswrapper[33867]: I0219 03:41:15.688669 33867 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-run\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.688771 master-0 kubenswrapper[33867]: I0219 03:41:15.688711 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:15.688771 master-0 kubenswrapper[33867]: I0219 03:41:15.688734 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-sys" (OuterVolumeSpecName: "sys") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:15.688930 master-0 kubenswrapper[33867]: I0219 03:41:15.688893 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:15.688972 master-0 kubenswrapper[33867]: I0219 03:41:15.688946 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:15.689010 master-0 kubenswrapper[33867]: I0219 03:41:15.688969 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-dev" (OuterVolumeSpecName: "dev") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:15.689416 master-0 kubenswrapper[33867]: I0219 03:41:15.689364 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:15.689479 master-0 kubenswrapper[33867]: I0219 03:41:15.689441 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:15.689515 master-0 kubenswrapper[33867]: I0219 03:41:15.689475 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:15.691796 master-0 kubenswrapper[33867]: I0219 03:41:15.691743 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:15.699673 master-0 kubenswrapper[33867]: I0219 03:41:15.699595 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf517d1-637f-48a9-b008-b0efe070ed50-kube-api-access-wnv9m" (OuterVolumeSpecName: "kube-api-access-wnv9m") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "kube-api-access-wnv9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:15.700069 master-0 kubenswrapper[33867]: I0219 03:41:15.699882 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-scripts" (OuterVolumeSpecName: "scripts") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.791204 33867 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.791265 33867 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-sys\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.791276 33867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.791289 33867 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-dev\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.791299 33867 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.791307 33867 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.791316 33867 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.791329 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnv9m\" (UniqueName: \"kubernetes.io/projected/4bf517d1-637f-48a9-b008-b0efe070ed50-kube-api-access-wnv9m\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.791341 33867 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.791352 33867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bf517d1-637f-48a9-b008-b0efe070ed50-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.791364 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.794425 master-0 kubenswrapper[33867]: I0219 03:41:15.794030 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:15.855621 master-0 kubenswrapper[33867]: I0219 03:41:15.854898 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data" (OuterVolumeSpecName: "config-data") pod "4bf517d1-637f-48a9-b008-b0efe070ed50" (UID: "4bf517d1-637f-48a9-b008-b0efe070ed50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:15.930411 master-0 kubenswrapper[33867]: I0219 03:41:15.915580 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:15.930411 master-0 kubenswrapper[33867]: I0219 03:41:15.915635 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf517d1-637f-48a9-b008-b0efe070ed50-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.064287 master-0 kubenswrapper[33867]: I0219 03:41:16.056167 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.120363 master-0 kubenswrapper[33867]: I0219 03:41:16.119660 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-iscsi\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.120363 master-0 kubenswrapper[33867]: I0219 03:41:16.119842 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-cinder\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.120363 master-0 kubenswrapper[33867]: I0219 03:41:16.119922 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-brick\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.120363 master-0 kubenswrapper[33867]: I0219 03:41:16.119974 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-machine-id\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.120363 master-0 kubenswrapper[33867]: I0219 03:41:16.120015 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhlzd\" (UniqueName: \"kubernetes.io/projected/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-kube-api-access-hhlzd\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.120363 master-0 kubenswrapper[33867]: I0219 03:41:16.120088 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-combined-ca-bundle\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.120363 
master-0 kubenswrapper[33867]: I0219 03:41:16.120117 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-dev\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.120363 master-0 kubenswrapper[33867]: I0219 03:41:16.120153 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-run\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.120363 master-0 kubenswrapper[33867]: I0219 03:41:16.120200 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-lib-modules\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.120363 master-0 kubenswrapper[33867]: I0219 03:41:16.120274 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data-custom\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.120363 master-0 kubenswrapper[33867]: I0219 03:41:16.120324 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-lib-cinder\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.120363 master-0 kubenswrapper[33867]: I0219 03:41:16.120366 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-nvme\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120406 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-run" (OuterVolumeSpecName: "run") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120451 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-sys\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120489 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-scripts\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120492 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120487 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-dev" (OuterVolumeSpecName: "dev") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120528 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120558 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120566 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120585 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data\") pod \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\" (UID: \"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd\") " Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120658 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-sys" (OuterVolumeSpecName: "sys") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120682 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120669 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:16.121125 master-0 kubenswrapper[33867]: I0219 03:41:16.120708 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:16.121757 master-0 kubenswrapper[33867]: I0219 03:41:16.121609 33867 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.121757 master-0 kubenswrapper[33867]: I0219 03:41:16.121634 33867 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.121757 master-0 kubenswrapper[33867]: I0219 03:41:16.121648 33867 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.121757 master-0 kubenswrapper[33867]: I0219 03:41:16.121695 33867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.121757 master-0 kubenswrapper[33867]: I0219 03:41:16.121713 33867 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-dev\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.121757 master-0 kubenswrapper[33867]: I0219 03:41:16.121726 33867 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-run\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.122021 master-0 kubenswrapper[33867]: I0219 03:41:16.121786 33867 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.122021 master-0 kubenswrapper[33867]: I0219 03:41:16.121822 33867 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.122021 master-0 kubenswrapper[33867]: I0219 03:41:16.121854 33867 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.122021 master-0 kubenswrapper[33867]: I0219 03:41:16.121867 33867 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-sys\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.132298 master-0 kubenswrapper[33867]: I0219 03:41:16.129412 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-scripts" (OuterVolumeSpecName: "scripts") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:16.132298 master-0 kubenswrapper[33867]: I0219 03:41:16.130204 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-kube-api-access-hhlzd" (OuterVolumeSpecName: "kube-api-access-hhlzd") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "kube-api-access-hhlzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:16.139277 master-0 kubenswrapper[33867]: I0219 03:41:16.135232 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:16.139277 master-0 kubenswrapper[33867]: I0219 03:41:16.137949 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:41:16.215401 master-0 kubenswrapper[33867]: I0219 03:41:16.214853 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" event={"ID":"4bf517d1-637f-48a9-b008-b0efe070ed50","Type":"ContainerDied","Data":"3360bc61512356eeaa139def7e2766a8fcfad6e4077addb137e5f08b63aac2aa"} Feb 19 03:41:16.215401 master-0 kubenswrapper[33867]: I0219 03:41:16.214949 33867 scope.go:117] "RemoveContainer" containerID="31b94dbc9e66521577f65e7fa5e33f4cf0b24405c1213eb016faddf9577c1f2d" Feb 19 03:41:16.215401 master-0 kubenswrapper[33867]: I0219 03:41:16.215274 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.223721 master-0 kubenswrapper[33867]: I0219 03:41:16.223536 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-svc\") pod \"d354f238-452a-4dd5-b466-5a88508156c7\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " Feb 19 03:41:16.223721 master-0 kubenswrapper[33867]: I0219 03:41:16.223685 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-config\") pod \"d354f238-452a-4dd5-b466-5a88508156c7\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " Feb 19 03:41:16.227420 master-0 kubenswrapper[33867]: I0219 03:41:16.224452 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-nb\") pod \"d354f238-452a-4dd5-b466-5a88508156c7\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " Feb 19 03:41:16.227420 master-0 kubenswrapper[33867]: I0219 03:41:16.224565 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-sb\") pod \"d354f238-452a-4dd5-b466-5a88508156c7\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " Feb 19 03:41:16.227420 master-0 kubenswrapper[33867]: I0219 03:41:16.224652 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-swift-storage-0\") pod \"d354f238-452a-4dd5-b466-5a88508156c7\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " Feb 19 03:41:16.227420 master-0 kubenswrapper[33867]: I0219 03:41:16.224705 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wklx5\" (UniqueName: \"kubernetes.io/projected/d354f238-452a-4dd5-b466-5a88508156c7-kube-api-access-wklx5\") pod \"d354f238-452a-4dd5-b466-5a88508156c7\" (UID: \"d354f238-452a-4dd5-b466-5a88508156c7\") " Feb 19 03:41:16.227420 master-0 kubenswrapper[33867]: I0219 03:41:16.226451 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.227420 master-0 kubenswrapper[33867]: I0219 03:41:16.226479 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhlzd\" (UniqueName: \"kubernetes.io/projected/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-kube-api-access-hhlzd\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.227420 master-0 kubenswrapper[33867]: I0219 03:41:16.226493 33867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.227420 master-0 kubenswrapper[33867]: I0219 03:41:16.226837 33867 generic.go:334] "Generic (PLEG): container finished" podID="55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" containerID="0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf" exitCode=0 Feb 19 03:41:16.227420 master-0 kubenswrapper[33867]: I0219 03:41:16.226943 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-054a4-backup-0" event={"ID":"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd","Type":"ContainerDied","Data":"0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf"} Feb 19 03:41:16.227420 master-0 kubenswrapper[33867]: I0219 03:41:16.226974 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-backup-0" event={"ID":"55a1b23b-e8e5-430f-80c1-5542f3e1d7dd","Type":"ContainerDied","Data":"4e0f58a532f0e71f9a3ace27decf6ae90427722aac600f768dc8bb2e441c8605"} Feb 19 03:41:16.227420 master-0 kubenswrapper[33867]: I0219 03:41:16.227035 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.230553 master-0 kubenswrapper[33867]: I0219 03:41:16.230350 33867 generic.go:334] "Generic (PLEG): container finished" podID="87010165-a8cc-43e1-b9b6-af44f39f0c46" containerID="5a5cf786965b4af8c0d923c5431217a2fd76231f134891dc6526a581c0b307df" exitCode=0 Feb 19 03:41:16.230553 master-0 kubenswrapper[33867]: I0219 03:41:16.230448 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-scheduler-0" event={"ID":"87010165-a8cc-43e1-b9b6-af44f39f0c46","Type":"ContainerDied","Data":"5a5cf786965b4af8c0d923c5431217a2fd76231f134891dc6526a581c0b307df"} Feb 19 03:41:16.230826 master-0 kubenswrapper[33867]: I0219 03:41:16.230710 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d354f238-452a-4dd5-b466-5a88508156c7-kube-api-access-wklx5" (OuterVolumeSpecName: "kube-api-access-wklx5") pod "d354f238-452a-4dd5-b466-5a88508156c7" (UID: "d354f238-452a-4dd5-b466-5a88508156c7"). InnerVolumeSpecName "kube-api-access-wklx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:16.248489 master-0 kubenswrapper[33867]: I0219 03:41:16.248303 33867 generic.go:334] "Generic (PLEG): container finished" podID="d354f238-452a-4dd5-b466-5a88508156c7" containerID="4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968" exitCode=0 Feb 19 03:41:16.248744 master-0 kubenswrapper[33867]: I0219 03:41:16.248487 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-576bc499-6mdnt" Feb 19 03:41:16.248744 master-0 kubenswrapper[33867]: I0219 03:41:16.248487 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-576bc499-6mdnt" event={"ID":"d354f238-452a-4dd5-b466-5a88508156c7","Type":"ContainerDied","Data":"4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968"} Feb 19 03:41:16.248744 master-0 kubenswrapper[33867]: I0219 03:41:16.248670 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-576bc499-6mdnt" event={"ID":"d354f238-452a-4dd5-b466-5a88508156c7","Type":"ContainerDied","Data":"242f64607bf3698acb086ba0ca2f896c1831ca4423f20d502093e4667c0c983d"} Feb 19 03:41:16.249904 master-0 kubenswrapper[33867]: I0219 03:41:16.249448 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:16.309350 master-0 kubenswrapper[33867]: I0219 03:41:16.300883 33867 scope.go:117] "RemoveContainer" containerID="fad4c7c608885ad86cfe4ba3d329b50d6c9fba2b32b05deaa92f84daae1fac83" Feb 19 03:41:16.309350 master-0 kubenswrapper[33867]: I0219 03:41:16.307562 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-054a4-volume-lvm-iscsi-0"] Feb 19 03:41:16.323218 master-0 kubenswrapper[33867]: I0219 03:41:16.322944 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-054a4-volume-lvm-iscsi-0"] Feb 19 03:41:16.342484 master-0 kubenswrapper[33867]: I0219 03:41:16.330841 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wklx5\" (UniqueName: \"kubernetes.io/projected/d354f238-452a-4dd5-b466-5a88508156c7-kube-api-access-wklx5\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.342484 master-0 kubenswrapper[33867]: I0219 03:41:16.330908 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: I0219 03:41:16.369465 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-054a4-volume-lvm-iscsi-0"] Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: E0219 03:41:16.370155 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d354f238-452a-4dd5-b466-5a88508156c7" containerName="init" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: I0219 03:41:16.370177 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d354f238-452a-4dd5-b466-5a88508156c7" containerName="init" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: E0219 03:41:16.370215 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf517d1-637f-48a9-b008-b0efe070ed50" containerName="probe" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: I0219 03:41:16.370227 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf517d1-637f-48a9-b008-b0efe070ed50" containerName="probe" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: E0219 03:41:16.370271 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf517d1-637f-48a9-b008-b0efe070ed50" containerName="cinder-volume" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: I0219 03:41:16.370279 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf517d1-637f-48a9-b008-b0efe070ed50" containerName="cinder-volume" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: E0219 03:41:16.370288 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" containerName="cinder-backup" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: I0219 03:41:16.370296 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" containerName="cinder-backup" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: E0219 03:41:16.370318 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" containerName="probe" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: I0219 03:41:16.370327 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" containerName="probe" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: E0219 03:41:16.370345 33867 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d354f238-452a-4dd5-b466-5a88508156c7" containerName="dnsmasq-dns" Feb 19 03:41:16.371582 master-0 kubenswrapper[33867]: I0219 03:41:16.370353 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d354f238-452a-4dd5-b466-5a88508156c7" containerName="dnsmasq-dns" Feb 19 03:41:16.396430 master-0 kubenswrapper[33867]: I0219 03:41:16.393014 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf517d1-637f-48a9-b008-b0efe070ed50" containerName="probe" Feb 19 03:41:16.396430 master-0 kubenswrapper[33867]: I0219 03:41:16.393106 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" containerName="probe" Feb 19 03:41:16.396430 master-0 kubenswrapper[33867]: I0219 03:41:16.393144 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf517d1-637f-48a9-b008-b0efe070ed50" containerName="cinder-volume" Feb 19 03:41:16.396430 master-0 kubenswrapper[33867]: I0219 03:41:16.393166 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d354f238-452a-4dd5-b466-5a88508156c7" containerName="dnsmasq-dns" Feb 19 03:41:16.396430 master-0 kubenswrapper[33867]: I0219 03:41:16.393206 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" containerName="cinder-backup" Feb 19 03:41:16.416695 master-0 kubenswrapper[33867]: I0219 03:41:16.405724 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-volume-lvm-iscsi-0"] Feb 19 03:41:16.416695 master-0 kubenswrapper[33867]: I0219 03:41:16.405914 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.416695 master-0 kubenswrapper[33867]: I0219 03:41:16.410067 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-volume-lvm-iscsi-config-data" Feb 19 03:41:16.432838 master-0 kubenswrapper[33867]: I0219 03:41:16.418198 33867 scope.go:117] "RemoveContainer" containerID="daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584" Feb 19 03:41:16.432838 master-0 kubenswrapper[33867]: I0219 03:41:16.430962 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data" (OuterVolumeSpecName: "config-data") pod "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" (UID: "55a1b23b-e8e5-430f-80c1-5542f3e1d7dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:16.433455 master-0 kubenswrapper[33867]: I0219 03:41:16.433402 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-lib-modules\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.433518 master-0 kubenswrapper[33867]: I0219 03:41:16.433464 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-sys\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.433697 master-0 kubenswrapper[33867]: I0219 03:41:16.433620 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf2g6\" (UniqueName: \"kubernetes.io/projected/cd60be62-5e2e-4bee-a46e-a202e42adad9-kube-api-access-vf2g6\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.433829 master-0 kubenswrapper[33867]: I0219 03:41:16.433810 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-var-locks-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.433916 master-0 kubenswrapper[33867]: I0219 03:41:16.433891 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-etc-nvme\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.434077 master-0 kubenswrapper[33867]: I0219 03:41:16.434054 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-run\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.434469 master-0 kubenswrapper[33867]: I0219 03:41:16.434447 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-var-lib-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.434561 master-0 kubenswrapper[33867]: I0219 03:41:16.434525 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-dev\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.434680 master-0 kubenswrapper[33867]: I0219 03:41:16.434656 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-config-data\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.434727 master-0 kubenswrapper[33867]: I0219 03:41:16.434706 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-scripts\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.434993 master-0 kubenswrapper[33867]: I0219 03:41:16.434959 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-config-data-custom\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.435040 master-0 kubenswrapper[33867]: I0219 03:41:16.435002 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-var-locks-brick\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.435123 master-0 kubenswrapper[33867]: I0219 03:41:16.435088 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-combined-ca-bundle\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.435189 master-0 kubenswrapper[33867]: I0219 03:41:16.435141 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-etc-machine-id\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.435242 master-0 kubenswrapper[33867]: I0219 03:41:16.435219 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-etc-iscsi\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.435517 master-0 kubenswrapper[33867]: I0219 03:41:16.435490 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.450498 master-0 kubenswrapper[33867]: I0219 03:41:16.437997 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d354f238-452a-4dd5-b466-5a88508156c7" (UID: "d354f238-452a-4dd5-b466-5a88508156c7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:16.450498 master-0 kubenswrapper[33867]: I0219 03:41:16.443899 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d354f238-452a-4dd5-b466-5a88508156c7" (UID: "d354f238-452a-4dd5-b466-5a88508156c7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:16.457233 master-0 kubenswrapper[33867]: I0219 03:41:16.456323 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d354f238-452a-4dd5-b466-5a88508156c7" (UID: "d354f238-452a-4dd5-b466-5a88508156c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:16.461147 master-0 kubenswrapper[33867]: I0219 03:41:16.460929 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d354f238-452a-4dd5-b466-5a88508156c7" (UID: "d354f238-452a-4dd5-b466-5a88508156c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:16.485992 master-0 kubenswrapper[33867]: I0219 03:41:16.485908 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-config" (OuterVolumeSpecName: "config") pod "d354f238-452a-4dd5-b466-5a88508156c7" (UID: "d354f238-452a-4dd5-b466-5a88508156c7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:16.513579 master-0 kubenswrapper[33867]: I0219 03:41:16.513492 33867 scope.go:117] "RemoveContainer" containerID="0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.537577 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-config-data-custom\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.537780 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-var-locks-brick\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.537899 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-combined-ca-bundle\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.537942 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-etc-machine-id\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538001 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-etc-iscsi\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538015 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-var-locks-brick\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538108 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-etc-iscsi\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538162 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-lib-modules\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 
03:41:16.538199 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-sys\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538266 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf2g6\" (UniqueName: \"kubernetes.io/projected/cd60be62-5e2e-4bee-a46e-a202e42adad9-kube-api-access-vf2g6\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538346 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-var-locks-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538405 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-etc-nvme\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538621 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-run\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538682 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-var-lib-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538755 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-dev\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538761 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-lib-modules\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538812 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-sys\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 
kubenswrapper[33867]: I0219 03:41:16.538870 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-var-locks-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538905 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-etc-nvme\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538916 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-var-lib-cinder\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538705 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-etc-machine-id\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538938 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-run\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.538974 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cd60be62-5e2e-4bee-a46e-a202e42adad9-dev\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.539072 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-config-data\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.539130 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-scripts\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.543401 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-combined-ca-bundle\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 
kubenswrapper[33867]: I0219 03:41:16.543930 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-scripts\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.544669 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.544756 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.545041 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.545061 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.545078 33867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d354f238-452a-4dd5-b466-5a88508156c7-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.545094 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-config-data-custom\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.547289 master-0 kubenswrapper[33867]: I0219 03:41:16.545725 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd60be62-5e2e-4bee-a46e-a202e42adad9-config-data\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.559694 master-0 kubenswrapper[33867]: I0219 03:41:16.559112 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf2g6\" (UniqueName: \"kubernetes.io/projected/cd60be62-5e2e-4bee-a46e-a202e42adad9-kube-api-access-vf2g6\") pod \"cinder-054a4-volume-lvm-iscsi-0\" (UID: \"cd60be62-5e2e-4bee-a46e-a202e42adad9\") " pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.677127 master-0 kubenswrapper[33867]: I0219 03:41:16.677075 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:16.680075 master-0 kubenswrapper[33867]: I0219 03:41:16.679572 33867 scope.go:117] "RemoveContainer" containerID="daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584" Feb 19 03:41:16.683971 master-0 kubenswrapper[33867]: E0219 03:41:16.683880 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584\": container with ID starting with daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584 not found: ID does not exist" containerID="daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584" Feb 19 03:41:16.684088 master-0 kubenswrapper[33867]: I0219 03:41:16.684035 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584"} err="failed to get container status \"daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584\": rpc error: code = NotFound desc = could not find container \"daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584\": container with ID starting with daa69e5e17412a808580739fa34bbc66aa2d3132baaa9999fc80d94d902cf584 not found: ID does not exist" Feb 19 03:41:16.684137 master-0 kubenswrapper[33867]: I0219 03:41:16.684086 33867 scope.go:117] "RemoveContainer" containerID="0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf" Feb 19 03:41:16.686749 master-0 kubenswrapper[33867]: E0219 03:41:16.686676 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf\": container with ID starting with 0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf not found: ID does not exist" containerID="0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf" Feb 19 03:41:16.686822 master-0 kubenswrapper[33867]: I0219 03:41:16.686761 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf"} err="failed to get container status \"0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf\": rpc error: code = NotFound desc = could not find container \"0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf\": container with ID starting with 0fe52bc2b6e38f36ed2a06aed0f9b82a52793ddd4f5588473ad583cc2571fcaf not found: ID does not exist" Feb 19 03:41:16.686822 master-0 kubenswrapper[33867]: I0219 03:41:16.686807 33867 scope.go:117] "RemoveContainer" containerID="4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968" Feb 19 03:41:16.715660 master-0 kubenswrapper[33867]: I0219 03:41:16.709753 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-576bc499-6mdnt"] Feb 19 03:41:16.743294 master-0 kubenswrapper[33867]: I0219 03:41:16.726040 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-054a4-api-0" Feb 19 03:41:16.743793 master-0 kubenswrapper[33867]: I0219 03:41:16.743325 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-576bc499-6mdnt"] Feb 19 03:41:16.795286 master-0 kubenswrapper[33867]: I0219 03:41:16.788456 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-054a4-backup-0"] Feb 19 
03:41:16.823667 master-0 kubenswrapper[33867]: I0219 03:41:16.821617 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-054a4-backup-0"] Feb 19 03:41:16.851791 master-0 kubenswrapper[33867]: I0219 03:41:16.847125 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-054a4-backup-0"] Feb 19 03:41:16.869431 master-0 kubenswrapper[33867]: I0219 03:41:16.869370 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.872587 master-0 kubenswrapper[33867]: I0219 03:41:16.872314 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-backup-config-data" Feb 19 03:41:16.932420 master-0 kubenswrapper[33867]: I0219 03:41:16.928500 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:41:16.953416 master-0 kubenswrapper[33867]: I0219 03:41:16.934667 33867 scope.go:117] "RemoveContainer" containerID="d7ed6d68df400422f0df4b60b3c744cc562b817c410f3d5f73d3894b7b69f862" Feb 19 03:41:16.953416 master-0 kubenswrapper[33867]: I0219 03:41:16.936805 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-backup-0"] Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.974875 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ts8z\" (UniqueName: \"kubernetes.io/projected/52ede5f4-a9ae-46ab-a72c-6575bb04274e-kube-api-access-2ts8z\") pod \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.974991 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data\") pod \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.975119 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data-merged\") pod \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.975140 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-scripts\") pod \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.975173 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/52ede5f4-a9ae-46ab-a72c-6575bb04274e-etc-podinfo\") pod \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.975190 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-combined-ca-bundle\") pod \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\" (UID: \"52ede5f4-a9ae-46ab-a72c-6575bb04274e\") " Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.975794 
33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "52ede5f4-a9ae-46ab-a72c-6575bb04274e" (UID: "52ede5f4-a9ae-46ab-a72c-6575bb04274e"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976421 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-config-data\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976473 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-etc-machine-id\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976514 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-var-lib-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976536 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-scripts\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976566 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-etc-nvme\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976599 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-config-data-custom\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976617 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-combined-ca-bundle\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976643 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j742\" (UniqueName: \"kubernetes.io/projected/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-kube-api-access-6j742\") pod \"cinder-054a4-backup-0\" 
(UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976667 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-sys\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976763 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-var-locks-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976805 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-var-locks-brick\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976830 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-etc-iscsi\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976881 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-lib-modules\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976901 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-dev\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.976919 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-run\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.977027 33867 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.980121 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ede5f4-a9ae-46ab-a72c-6575bb04274e-kube-api-access-2ts8z" (OuterVolumeSpecName: "kube-api-access-2ts8z") pod "52ede5f4-a9ae-46ab-a72c-6575bb04274e" (UID: "52ede5f4-a9ae-46ab-a72c-6575bb04274e"). 
InnerVolumeSpecName "kube-api-access-2ts8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:16.994200 master-0 kubenswrapper[33867]: I0219 03:41:16.981308 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-scripts" (OuterVolumeSpecName: "scripts") pod "52ede5f4-a9ae-46ab-a72c-6575bb04274e" (UID: "52ede5f4-a9ae-46ab-a72c-6575bb04274e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:16.995768 master-0 kubenswrapper[33867]: I0219 03:41:16.995662 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/52ede5f4-a9ae-46ab-a72c-6575bb04274e-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "52ede5f4-a9ae-46ab-a72c-6575bb04274e" (UID: "52ede5f4-a9ae-46ab-a72c-6575bb04274e"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 03:41:17.031820 master-0 kubenswrapper[33867]: I0219 03:41:17.031711 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data" (OuterVolumeSpecName: "config-data") pod "52ede5f4-a9ae-46ab-a72c-6575bb04274e" (UID: "52ede5f4-a9ae-46ab-a72c-6575bb04274e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:17.039637 master-0 kubenswrapper[33867]: I0219 03:41:17.039564 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf517d1-637f-48a9-b008-b0efe070ed50" path="/var/lib/kubelet/pods/4bf517d1-637f-48a9-b008-b0efe070ed50/volumes" Feb 19 03:41:17.040776 master-0 kubenswrapper[33867]: I0219 03:41:17.040739 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a1b23b-e8e5-430f-80c1-5542f3e1d7dd" path="/var/lib/kubelet/pods/55a1b23b-e8e5-430f-80c1-5542f3e1d7dd/volumes" Feb 19 03:41:17.041833 master-0 kubenswrapper[33867]: I0219 03:41:17.041798 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d354f238-452a-4dd5-b466-5a88508156c7" path="/var/lib/kubelet/pods/d354f238-452a-4dd5-b466-5a88508156c7/volumes" Feb 19 03:41:17.079996 master-0 kubenswrapper[33867]: I0219 03:41:17.079165 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52ede5f4-a9ae-46ab-a72c-6575bb04274e" (UID: "52ede5f4-a9ae-46ab-a72c-6575bb04274e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:17.081185 master-0 kubenswrapper[33867]: I0219 03:41:17.081036 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-lib-modules\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.081185 master-0 kubenswrapper[33867]: I0219 03:41:17.081104 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-dev\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.081185 master-0 kubenswrapper[33867]: I0219 03:41:17.081125 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-run\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.081363 master-0 kubenswrapper[33867]: I0219 03:41:17.081293 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-lib-modules\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.081467 master-0 kubenswrapper[33867]: I0219 03:41:17.081433 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-config-data\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.081823 master-0 kubenswrapper[33867]: I0219 03:41:17.081618 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-etc-machine-id\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.081887 master-0 kubenswrapper[33867]: I0219 03:41:17.081845 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-var-lib-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.081983 master-0 kubenswrapper[33867]: I0219 03:41:17.081953 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-scripts\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.082099 master-0 kubenswrapper[33867]: I0219 03:41:17.082068 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-etc-nvme\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.082220 master-0 kubenswrapper[33867]: I0219 03:41:17.082190 33867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-config-data-custom\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.082325 master-0 kubenswrapper[33867]: I0219 03:41:17.082287 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-combined-ca-bundle\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.083025 master-0 kubenswrapper[33867]: I0219 03:41:17.082986 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j742\" (UniqueName: \"kubernetes.io/projected/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-kube-api-access-6j742\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.083116 master-0 kubenswrapper[33867]: I0219 03:41:17.083088 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-sys\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.083159 master-0 kubenswrapper[33867]: I0219 03:41:17.083125 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-var-locks-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.083250 master-0 kubenswrapper[33867]: I0219 03:41:17.083222 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-var-locks-brick\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.083362 master-0 kubenswrapper[33867]: I0219 03:41:17.083341 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-etc-iscsi\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.084041 master-0 kubenswrapper[33867]: I0219 03:41:17.083680 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ts8z\" (UniqueName: \"kubernetes.io/projected/52ede5f4-a9ae-46ab-a72c-6575bb04274e-kube-api-access-2ts8z\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:17.084041 master-0 kubenswrapper[33867]: I0219 03:41:17.083720 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:17.084041 master-0 kubenswrapper[33867]: I0219 03:41:17.083735 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:17.084041 master-0 kubenswrapper[33867]: I0219 
03:41:17.083748 33867 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/52ede5f4-a9ae-46ab-a72c-6575bb04274e-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:17.084041 master-0 kubenswrapper[33867]: I0219 03:41:17.083761 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ede5f4-a9ae-46ab-a72c-6575bb04274e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:17.084041 master-0 kubenswrapper[33867]: I0219 03:41:17.083824 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-etc-iscsi\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.084041 master-0 kubenswrapper[33867]: I0219 03:41:17.083888 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-dev\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.084041 master-0 kubenswrapper[33867]: I0219 03:41:17.083927 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-run\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.085644 master-0 kubenswrapper[33867]: I0219 03:41:17.085580 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-etc-machine-id\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.086546 master-0 kubenswrapper[33867]: I0219 03:41:17.086406 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-sys\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.086546 master-0 kubenswrapper[33867]: I0219 03:41:17.086481 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-var-locks-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.086631 master-0 kubenswrapper[33867]: I0219 03:41:17.086560 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-var-locks-brick\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.086631 master-0 kubenswrapper[33867]: I0219 03:41:17.086609 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-var-lib-cinder\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.087063 master-0 kubenswrapper[33867]: I0219 
03:41:17.087021 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-etc-nvme\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.095031 master-0 kubenswrapper[33867]: I0219 03:41:17.094970 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-config-data-custom\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.098939 master-0 kubenswrapper[33867]: I0219 03:41:17.098780 33867 scope.go:117] "RemoveContainer" containerID="4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968" Feb 19 03:41:17.116328 master-0 kubenswrapper[33867]: E0219 03:41:17.106369 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968\": container with ID starting with 4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968 not found: ID does not exist" containerID="4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968" Feb 19 03:41:17.116328 master-0 kubenswrapper[33867]: I0219 03:41:17.106415 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968"} err="failed to get container status \"4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968\": rpc error: code = NotFound desc = could not find container \"4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968\": container with ID starting with 4ddfeee3572f09c06d13320afbfb6b8c4faa8d6911f8bffe16b84fc4d299d968 not found: ID does not exist" Feb 19 03:41:17.116328 master-0 kubenswrapper[33867]: I0219 03:41:17.106444 33867 scope.go:117] "RemoveContainer" containerID="d7ed6d68df400422f0df4b60b3c744cc562b817c410f3d5f73d3894b7b69f862" Feb 19 03:41:17.116328 master-0 kubenswrapper[33867]: I0219 03:41:17.107460 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-config-data\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.120824 master-0 kubenswrapper[33867]: E0219 03:41:17.120736 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ed6d68df400422f0df4b60b3c744cc562b817c410f3d5f73d3894b7b69f862\": container with ID starting with d7ed6d68df400422f0df4b60b3c744cc562b817c410f3d5f73d3894b7b69f862 not found: ID does not exist" containerID="d7ed6d68df400422f0df4b60b3c744cc562b817c410f3d5f73d3894b7b69f862" Feb 19 03:41:17.120824 master-0 kubenswrapper[33867]: I0219 03:41:17.120784 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ed6d68df400422f0df4b60b3c744cc562b817c410f3d5f73d3894b7b69f862"} err="failed to get container status \"d7ed6d68df400422f0df4b60b3c744cc562b817c410f3d5f73d3894b7b69f862\": rpc error: code = NotFound desc = could not find container \"d7ed6d68df400422f0df4b60b3c744cc562b817c410f3d5f73d3894b7b69f862\": container with ID starting with 
d7ed6d68df400422f0df4b60b3c744cc562b817c410f3d5f73d3894b7b69f862 not found: ID does not exist" Feb 19 03:41:17.129664 master-0 kubenswrapper[33867]: I0219 03:41:17.129597 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-combined-ca-bundle\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.131048 master-0 kubenswrapper[33867]: I0219 03:41:17.131000 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-scripts\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.133190 master-0 kubenswrapper[33867]: I0219 03:41:17.133080 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j742\" (UniqueName: \"kubernetes.io/projected/00b58cd8-030f-4e5f-9808-edd4e1e31d8f-kube-api-access-6j742\") pod \"cinder-054a4-backup-0\" (UID: \"00b58cd8-030f-4e5f-9808-edd4e1e31d8f\") " pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.240303 master-0 kubenswrapper[33867]: I0219 03:41:17.239640 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:17.308197 master-0 kubenswrapper[33867]: I0219 03:41:17.308071 33867 generic.go:334] "Generic (PLEG): container finished" podID="87010165-a8cc-43e1-b9b6-af44f39f0c46" containerID="99a27c5571bd7a78772f28a63d27ff56a44e8e943947da94d146e726c617c2f1" exitCode=0 Feb 19 03:41:17.308197 master-0 kubenswrapper[33867]: I0219 03:41:17.308182 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-scheduler-0" event={"ID":"87010165-a8cc-43e1-b9b6-af44f39f0c46","Type":"ContainerDied","Data":"99a27c5571bd7a78772f28a63d27ff56a44e8e943947da94d146e726c617c2f1"} Feb 19 03:41:17.312858 master-0 kubenswrapper[33867]: W0219 03:41:17.312770 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd60be62_5e2e_4bee_a46e_a202e42adad9.slice/crio-888a023f1217c8af2994d3eb79d7094ebd6e1f5c21ae4219daa41b7c6dd7762c WatchSource:0}: Error finding container 888a023f1217c8af2994d3eb79d7094ebd6e1f5c21ae4219daa41b7c6dd7762c: Status 404 returned error can't find the container with id 888a023f1217c8af2994d3eb79d7094ebd6e1f5c21ae4219daa41b7c6dd7762c Feb 19 03:41:17.313270 master-0 kubenswrapper[33867]: I0219 03:41:17.312893 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-lr9n7" event={"ID":"52ede5f4-a9ae-46ab-a72c-6575bb04274e","Type":"ContainerDied","Data":"3938d43fe1330922311ee7dd0656df6eda317edeed832490750b86272f109ed5"} Feb 19 03:41:17.313270 master-0 kubenswrapper[33867]: I0219 03:41:17.312971 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3938d43fe1330922311ee7dd0656df6eda317edeed832490750b86272f109ed5" Feb 19 03:41:17.313270 master-0 kubenswrapper[33867]: I0219 03:41:17.312910 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-lr9n7" Feb 19 03:41:17.340285 master-0 kubenswrapper[33867]: I0219 03:41:17.338524 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-volume-lvm-iscsi-0"] Feb 19 03:41:17.853114 master-0 kubenswrapper[33867]: I0219 03:41:17.813242 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-4nkcc"] Feb 19 03:41:17.853114 master-0 kubenswrapper[33867]: E0219 03:41:17.814010 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ede5f4-a9ae-46ab-a72c-6575bb04274e" containerName="ironic-db-sync" Feb 19 03:41:17.853114 master-0 kubenswrapper[33867]: I0219 03:41:17.814030 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ede5f4-a9ae-46ab-a72c-6575bb04274e" containerName="ironic-db-sync" Feb 19 03:41:17.853114 master-0 kubenswrapper[33867]: E0219 03:41:17.814124 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ede5f4-a9ae-46ab-a72c-6575bb04274e" containerName="init" Feb 19 03:41:17.853114 master-0 kubenswrapper[33867]: I0219 03:41:17.814132 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ede5f4-a9ae-46ab-a72c-6575bb04274e" containerName="init" Feb 19 03:41:17.853114 master-0 kubenswrapper[33867]: I0219 03:41:17.835793 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ede5f4-a9ae-46ab-a72c-6575bb04274e" containerName="ironic-db-sync" Feb 19 03:41:17.853114 master-0 kubenswrapper[33867]: I0219 03:41:17.840552 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-4nkcc" Feb 19 03:41:17.881352 master-0 kubenswrapper[33867]: I0219 03:41:17.871997 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-4nkcc"] Feb 19 03:41:17.881352 master-0 kubenswrapper[33867]: I0219 03:41:17.877708 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:17.920286 master-0 kubenswrapper[33867]: I0219 03:41:17.909204 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-64cdd9cf48-dg7ws"] Feb 19 03:41:17.920286 master-0 kubenswrapper[33867]: E0219 03:41:17.910623 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87010165-a8cc-43e1-b9b6-af44f39f0c46" containerName="cinder-scheduler" Feb 19 03:41:17.920286 master-0 kubenswrapper[33867]: I0219 03:41:17.910649 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="87010165-a8cc-43e1-b9b6-af44f39f0c46" containerName="cinder-scheduler" Feb 19 03:41:17.920286 master-0 kubenswrapper[33867]: E0219 03:41:17.910672 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87010165-a8cc-43e1-b9b6-af44f39f0c46" containerName="probe" Feb 19 03:41:17.920286 master-0 kubenswrapper[33867]: I0219 03:41:17.910682 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="87010165-a8cc-43e1-b9b6-af44f39f0c46" containerName="probe" Feb 19 03:41:17.920286 master-0 kubenswrapper[33867]: I0219 03:41:17.910903 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="87010165-a8cc-43e1-b9b6-af44f39f0c46" containerName="probe" Feb 19 03:41:17.920286 master-0 kubenswrapper[33867]: I0219 03:41:17.910928 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="87010165-a8cc-43e1-b9b6-af44f39f0c46" containerName="cinder-scheduler" Feb 19 03:41:17.920286 master-0 kubenswrapper[33867]: I0219 03:41:17.912198 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-64cdd9cf48-dg7ws"] Feb 19 03:41:17.920286 master-0 kubenswrapper[33867]: I0219 03:41:17.912341 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:17.993698 master-0 kubenswrapper[33867]: I0219 03:41:17.933567 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Feb 19 03:41:17.993698 master-0 kubenswrapper[33867]: I0219 03:41:17.934397 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b00abf9-7737-4850-a303-979795c4b0a3-operator-scripts\") pod \"ironic-inspector-db-create-4nkcc\" (UID: \"4b00abf9-7737-4850-a303-979795c4b0a3\") " pod="openstack/ironic-inspector-db-create-4nkcc" Feb 19 03:41:17.993698 master-0 kubenswrapper[33867]: I0219 03:41:17.934507 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2q7r\" (UniqueName: \"kubernetes.io/projected/6a7f405f-ed33-4311-84a9-6aaf1fd4dadb-kube-api-access-n2q7r\") pod \"ironic-neutron-agent-64cdd9cf48-dg7ws\" (UID: \"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb\") " pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:17.993698 master-0 kubenswrapper[33867]: I0219 03:41:17.934583 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqkqw\" (UniqueName: \"kubernetes.io/projected/4b00abf9-7737-4850-a303-979795c4b0a3-kube-api-access-nqkqw\") pod \"ironic-inspector-db-create-4nkcc\" (UID: \"4b00abf9-7737-4850-a303-979795c4b0a3\") " pod="openstack/ironic-inspector-db-create-4nkcc" Feb 19 03:41:17.993698 master-0 kubenswrapper[33867]: I0219 03:41:17.935111 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7f405f-ed33-4311-84a9-6aaf1fd4dadb-combined-ca-bundle\") pod \"ironic-neutron-agent-64cdd9cf48-dg7ws\" (UID: \"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb\") " pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:17.993698 master-0 kubenswrapper[33867]: I0219 03:41:17.935369 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a7f405f-ed33-4311-84a9-6aaf1fd4dadb-config\") pod \"ironic-neutron-agent-64cdd9cf48-dg7ws\" (UID: \"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb\") " pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:18.049777 master-0 kubenswrapper[33867]: I0219 03:41:18.043295 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwzcn\" (UniqueName: \"kubernetes.io/projected/87010165-a8cc-43e1-b9b6-af44f39f0c46-kube-api-access-nwzcn\") pod \"87010165-a8cc-43e1-b9b6-af44f39f0c46\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " Feb 19 03:41:18.049777 master-0 kubenswrapper[33867]: I0219 03:41:18.047828 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data-custom\") pod \"87010165-a8cc-43e1-b9b6-af44f39f0c46\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " Feb 19 03:41:18.062225 master-0 kubenswrapper[33867]: I0219 03:41:18.053698 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data\") pod \"87010165-a8cc-43e1-b9b6-af44f39f0c46\" (UID: 
\"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " Feb 19 03:41:18.062225 master-0 kubenswrapper[33867]: I0219 03:41:18.053861 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-combined-ca-bundle\") pod \"87010165-a8cc-43e1-b9b6-af44f39f0c46\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " Feb 19 03:41:18.062225 master-0 kubenswrapper[33867]: I0219 03:41:18.058163 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87010165-a8cc-43e1-b9b6-af44f39f0c46-etc-machine-id\") pod \"87010165-a8cc-43e1-b9b6-af44f39f0c46\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " Feb 19 03:41:18.062225 master-0 kubenswrapper[33867]: I0219 03:41:18.058332 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-scripts\") pod \"87010165-a8cc-43e1-b9b6-af44f39f0c46\" (UID: \"87010165-a8cc-43e1-b9b6-af44f39f0c46\") " Feb 19 03:41:18.073589 master-0 kubenswrapper[33867]: I0219 03:41:18.064249 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2q7r\" (UniqueName: \"kubernetes.io/projected/6a7f405f-ed33-4311-84a9-6aaf1fd4dadb-kube-api-access-n2q7r\") pod \"ironic-neutron-agent-64cdd9cf48-dg7ws\" (UID: \"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb\") " pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:18.073589 master-0 kubenswrapper[33867]: I0219 03:41:18.064436 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqkqw\" (UniqueName: \"kubernetes.io/projected/4b00abf9-7737-4850-a303-979795c4b0a3-kube-api-access-nqkqw\") pod \"ironic-inspector-db-create-4nkcc\" (UID: \"4b00abf9-7737-4850-a303-979795c4b0a3\") " pod="openstack/ironic-inspector-db-create-4nkcc" Feb 19 03:41:18.073589 master-0 kubenswrapper[33867]: I0219 03:41:18.064766 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7f405f-ed33-4311-84a9-6aaf1fd4dadb-combined-ca-bundle\") pod \"ironic-neutron-agent-64cdd9cf48-dg7ws\" (UID: \"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb\") " pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:18.073589 master-0 kubenswrapper[33867]: I0219 03:41:18.064935 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a7f405f-ed33-4311-84a9-6aaf1fd4dadb-config\") pod \"ironic-neutron-agent-64cdd9cf48-dg7ws\" (UID: \"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb\") " pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:18.073589 master-0 kubenswrapper[33867]: I0219 03:41:18.065020 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b00abf9-7737-4850-a303-979795c4b0a3-operator-scripts\") pod \"ironic-inspector-db-create-4nkcc\" (UID: \"4b00abf9-7737-4850-a303-979795c4b0a3\") " pod="openstack/ironic-inspector-db-create-4nkcc" Feb 19 03:41:18.073589 master-0 kubenswrapper[33867]: I0219 03:41:18.066118 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b00abf9-7737-4850-a303-979795c4b0a3-operator-scripts\") pod \"ironic-inspector-db-create-4nkcc\" (UID: 
\"4b00abf9-7737-4850-a303-979795c4b0a3\") " pod="openstack/ironic-inspector-db-create-4nkcc" Feb 19 03:41:18.073589 master-0 kubenswrapper[33867]: I0219 03:41:18.068005 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-scripts" (OuterVolumeSpecName: "scripts") pod "87010165-a8cc-43e1-b9b6-af44f39f0c46" (UID: "87010165-a8cc-43e1-b9b6-af44f39f0c46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:18.076760 master-0 kubenswrapper[33867]: I0219 03:41:18.076724 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87010165-a8cc-43e1-b9b6-af44f39f0c46-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "87010165-a8cc-43e1-b9b6-af44f39f0c46" (UID: "87010165-a8cc-43e1-b9b6-af44f39f0c46"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:18.080060 master-0 kubenswrapper[33867]: I0219 03:41:18.079930 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a7f405f-ed33-4311-84a9-6aaf1fd4dadb-config\") pod \"ironic-neutron-agent-64cdd9cf48-dg7ws\" (UID: \"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb\") " pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:18.080696 master-0 kubenswrapper[33867]: I0219 03:41:18.080617 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87010165-a8cc-43e1-b9b6-af44f39f0c46" (UID: "87010165-a8cc-43e1-b9b6-af44f39f0c46"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:18.095242 master-0 kubenswrapper[33867]: I0219 03:41:18.094619 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-62af-account-create-update-7qh7b"] Feb 19 03:41:18.096505 master-0 kubenswrapper[33867]: I0219 03:41:18.096456 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" Feb 19 03:41:18.115158 master-0 kubenswrapper[33867]: I0219 03:41:18.113402 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Feb 19 03:41:18.172806 master-0 kubenswrapper[33867]: I0219 03:41:18.169410 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2q7r\" (UniqueName: \"kubernetes.io/projected/6a7f405f-ed33-4311-84a9-6aaf1fd4dadb-kube-api-access-n2q7r\") pod \"ironic-neutron-agent-64cdd9cf48-dg7ws\" (UID: \"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb\") " pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:18.202897 master-0 kubenswrapper[33867]: I0219 03:41:18.175002 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqkqw\" (UniqueName: \"kubernetes.io/projected/4b00abf9-7737-4850-a303-979795c4b0a3-kube-api-access-nqkqw\") pod \"ironic-inspector-db-create-4nkcc\" (UID: \"4b00abf9-7737-4850-a303-979795c4b0a3\") " pod="openstack/ironic-inspector-db-create-4nkcc" Feb 19 03:41:18.202897 master-0 kubenswrapper[33867]: I0219 03:41:18.200400 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7f405f-ed33-4311-84a9-6aaf1fd4dadb-combined-ca-bundle\") pod \"ironic-neutron-agent-64cdd9cf48-dg7ws\" (UID: \"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb\") " pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:18.202897 master-0 kubenswrapper[33867]: I0219 03:41:18.202485 33867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87010165-a8cc-43e1-b9b6-af44f39f0c46-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:18.202897 master-0 kubenswrapper[33867]: I0219 03:41:18.202560 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:18.202897 master-0 kubenswrapper[33867]: I0219 03:41:18.202579 33867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:18.235284 master-0 kubenswrapper[33867]: I0219 03:41:18.224350 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-62af-account-create-update-7qh7b"] Feb 19 03:41:18.284973 master-0 kubenswrapper[33867]: I0219 03:41:18.280286 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-4nkcc" Feb 19 03:41:18.293713 master-0 kubenswrapper[33867]: I0219 03:41:18.293608 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87010165-a8cc-43e1-b9b6-af44f39f0c46-kube-api-access-nwzcn" (OuterVolumeSpecName: "kube-api-access-nwzcn") pod "87010165-a8cc-43e1-b9b6-af44f39f0c46" (UID: "87010165-a8cc-43e1-b9b6-af44f39f0c46"). InnerVolumeSpecName "kube-api-access-nwzcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:18.307662 master-0 kubenswrapper[33867]: I0219 03:41:18.306838 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8580d959-3bd3-4893-8c87-9376d87cba49-operator-scripts\") pod \"ironic-inspector-62af-account-create-update-7qh7b\" (UID: \"8580d959-3bd3-4893-8c87-9376d87cba49\") " pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" Feb 19 03:41:18.307662 master-0 kubenswrapper[33867]: I0219 03:41:18.307053 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjndn\" (UniqueName: \"kubernetes.io/projected/8580d959-3bd3-4893-8c87-9376d87cba49-kube-api-access-hjndn\") pod \"ironic-inspector-62af-account-create-update-7qh7b\" (UID: \"8580d959-3bd3-4893-8c87-9376d87cba49\") " pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" Feb 19 03:41:18.307662 master-0 kubenswrapper[33867]: I0219 03:41:18.307625 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwzcn\" (UniqueName: \"kubernetes.io/projected/87010165-a8cc-43e1-b9b6-af44f39f0c46-kube-api-access-nwzcn\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:18.311275 master-0 kubenswrapper[33867]: I0219 03:41:18.309875 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:18.421892 master-0 kubenswrapper[33867]: I0219 03:41:18.417292 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-scheduler-0" event={"ID":"87010165-a8cc-43e1-b9b6-af44f39f0c46","Type":"ContainerDied","Data":"0587704518b65e5f839a1681e4886be8e3b63fac7e2ab6b054a7f84768ea8171"} Feb 19 03:41:18.421892 master-0 kubenswrapper[33867]: I0219 03:41:18.417402 33867 scope.go:117] "RemoveContainer" containerID="5a5cf786965b4af8c0d923c5431217a2fd76231f134891dc6526a581c0b307df" Feb 19 03:41:18.421892 master-0 kubenswrapper[33867]: I0219 03:41:18.417667 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:18.445436 master-0 kubenswrapper[33867]: I0219 03:41:18.443183 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8580d959-3bd3-4893-8c87-9376d87cba49-operator-scripts\") pod \"ironic-inspector-62af-account-create-update-7qh7b\" (UID: \"8580d959-3bd3-4893-8c87-9376d87cba49\") " pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" Feb 19 03:41:18.445436 master-0 kubenswrapper[33867]: I0219 03:41:18.443320 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjndn\" (UniqueName: \"kubernetes.io/projected/8580d959-3bd3-4893-8c87-9376d87cba49-kube-api-access-hjndn\") pod \"ironic-inspector-62af-account-create-update-7qh7b\" (UID: \"8580d959-3bd3-4893-8c87-9376d87cba49\") " pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" Feb 19 03:41:18.445436 master-0 kubenswrapper[33867]: I0219 03:41:18.444540 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8580d959-3bd3-4893-8c87-9376d87cba49-operator-scripts\") pod \"ironic-inspector-62af-account-create-update-7qh7b\" (UID: \"8580d959-3bd3-4893-8c87-9376d87cba49\") " pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" Feb 19 03:41:18.461458 master-0 kubenswrapper[33867]: I0219 03:41:18.460114 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" event={"ID":"cd60be62-5e2e-4bee-a46e-a202e42adad9","Type":"ContainerStarted","Data":"888a023f1217c8af2994d3eb79d7094ebd6e1f5c21ae4219daa41b7c6dd7762c"} Feb 19 03:41:18.527115 master-0 kubenswrapper[33867]: I0219 03:41:18.515447 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7989d45967-nbj4z"] Feb 19 03:41:18.527115 master-0 kubenswrapper[33867]: I0219 03:41:18.518116 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.555979 master-0 kubenswrapper[33867]: I0219 03:41:18.553010 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7989d45967-nbj4z"] Feb 19 03:41:18.555979 master-0 kubenswrapper[33867]: I0219 03:41:18.555272 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4hdk\" (UniqueName: \"kubernetes.io/projected/d3018370-400e-497b-b612-0f8ac987acf7-kube-api-access-x4hdk\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.555979 master-0 kubenswrapper[33867]: I0219 03:41:18.555397 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-config\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.555979 master-0 kubenswrapper[33867]: I0219 03:41:18.555620 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-svc\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.555979 master-0 kubenswrapper[33867]: I0219 03:41:18.555772 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-sb\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.555979 master-0 kubenswrapper[33867]: I0219 03:41:18.555822 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-swift-storage-0\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.555979 master-0 kubenswrapper[33867]: I0219 03:41:18.555878 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-nb\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.619585 master-0 kubenswrapper[33867]: I0219 03:41:18.564282 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjndn\" (UniqueName: \"kubernetes.io/projected/8580d959-3bd3-4893-8c87-9376d87cba49-kube-api-access-hjndn\") pod \"ironic-inspector-62af-account-create-update-7qh7b\" (UID: \"8580d959-3bd3-4893-8c87-9376d87cba49\") " pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" Feb 19 03:41:18.619585 master-0 kubenswrapper[33867]: I0219 03:41:18.606523 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-5bcd64b574-gx489"] Feb 19 03:41:18.619585 master-0 kubenswrapper[33867]: I0219 03:41:18.614089 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.619585 master-0 kubenswrapper[33867]: I0219 03:41:18.618774 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 03:41:18.619585 master-0 kubenswrapper[33867]: I0219 03:41:18.618979 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Feb 19 03:41:18.619585 master-0 kubenswrapper[33867]: I0219 03:41:18.619114 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 19 03:41:18.619585 master-0 kubenswrapper[33867]: I0219 03:41:18.619280 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Feb 19 03:41:18.619585 master-0 kubenswrapper[33867]: I0219 03:41:18.619433 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Feb 19 03:41:18.638982 master-0 kubenswrapper[33867]: I0219 03:41:18.638922 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-backup-0"] Feb 19 03:41:18.670963 master-0 kubenswrapper[33867]: I0219 03:41:18.670801 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5bcd64b574-gx489"] Feb 19 03:41:18.675462 master-0 kubenswrapper[33867]: I0219 03:41:18.675083 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-logs\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.675462 master-0 kubenswrapper[33867]: I0219 03:41:18.675143 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.675462 master-0 kubenswrapper[33867]: I0219 03:41:18.675167 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67sd4\" (UniqueName: \"kubernetes.io/projected/48632170-8e01-4f9e-8ade-2662bfb392b2-kube-api-access-67sd4\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.675462 master-0 kubenswrapper[33867]: I0219 03:41:18.675229 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-svc\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.679571 master-0 kubenswrapper[33867]: I0219 03:41:18.676942 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-svc\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.687408 master-0 kubenswrapper[33867]: I0219 03:41:18.687345 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-scripts\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.687984 master-0 kubenswrapper[33867]: I0219 03:41:18.687598 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-merged\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.687984 master-0 kubenswrapper[33867]: I0219 03:41:18.687641 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-sb\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.687984 master-0 kubenswrapper[33867]: I0219 03:41:18.687701 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-swift-storage-0\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.687984 master-0 kubenswrapper[33867]: I0219 03:41:18.687785 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-nb\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.687984 master-0 kubenswrapper[33867]: I0219 03:41:18.687805 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-custom\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.687984 master-0 kubenswrapper[33867]: I0219 03:41:18.687867 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/48632170-8e01-4f9e-8ade-2662bfb392b2-etc-podinfo\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.687984 master-0 kubenswrapper[33867]: I0219 03:41:18.687974 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4hdk\" (UniqueName: \"kubernetes.io/projected/d3018370-400e-497b-b612-0f8ac987acf7-kube-api-access-x4hdk\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.688346 master-0 kubenswrapper[33867]: I0219 03:41:18.688030 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-combined-ca-bundle\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.688346 master-0 kubenswrapper[33867]: I0219 
03:41:18.688188 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-config\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.688921 master-0 kubenswrapper[33867]: I0219 03:41:18.688894 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-swift-storage-0\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.689803 master-0 kubenswrapper[33867]: I0219 03:41:18.689765 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-config\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.689856 master-0 kubenswrapper[33867]: I0219 03:41:18.689803 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-sb\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.691873 master-0 kubenswrapper[33867]: I0219 03:41:18.690382 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-nb\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.734117 master-0 kubenswrapper[33867]: I0219 03:41:18.734067 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4hdk\" (UniqueName: \"kubernetes.io/projected/d3018370-400e-497b-b612-0f8ac987acf7-kube-api-access-x4hdk\") pod \"dnsmasq-dns-7989d45967-nbj4z\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:18.797034 master-0 kubenswrapper[33867]: I0219 03:41:18.796413 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-scripts\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.797034 master-0 kubenswrapper[33867]: I0219 03:41:18.796661 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-merged\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.797034 master-0 kubenswrapper[33867]: I0219 03:41:18.796752 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-custom\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.797034 master-0 kubenswrapper[33867]: I0219 
03:41:18.796852 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/48632170-8e01-4f9e-8ade-2662bfb392b2-etc-podinfo\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.797034 master-0 kubenswrapper[33867]: I0219 03:41:18.796940 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-combined-ca-bundle\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.797512 master-0 kubenswrapper[33867]: I0219 03:41:18.797247 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-logs\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.798461 master-0 kubenswrapper[33867]: I0219 03:41:18.798387 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.799303 master-0 kubenswrapper[33867]: I0219 03:41:18.798669 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67sd4\" (UniqueName: \"kubernetes.io/projected/48632170-8e01-4f9e-8ade-2662bfb392b2-kube-api-access-67sd4\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.802227 master-0 kubenswrapper[33867]: I0219 03:41:18.802138 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-merged\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.803830 master-0 kubenswrapper[33867]: I0219 03:41:18.803790 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-logs\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.836379 master-0 kubenswrapper[33867]: I0219 03:41:18.833423 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-scripts\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.836379 master-0 kubenswrapper[33867]: I0219 03:41:18.833426 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-combined-ca-bundle\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.842041 master-0 kubenswrapper[33867]: I0219 03:41:18.841108 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.848479 master-0 kubenswrapper[33867]: I0219 03:41:18.848363 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/48632170-8e01-4f9e-8ade-2662bfb392b2-etc-podinfo\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.854414 master-0 kubenswrapper[33867]: I0219 03:41:18.851322 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67sd4\" (UniqueName: \"kubernetes.io/projected/48632170-8e01-4f9e-8ade-2662bfb392b2-kube-api-access-67sd4\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.854414 master-0 kubenswrapper[33867]: I0219 03:41:18.852398 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-custom\") pod \"ironic-5bcd64b574-gx489\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:18.900130 master-0 kubenswrapper[33867]: I0219 03:41:18.899452 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87010165-a8cc-43e1-b9b6-af44f39f0c46" (UID: "87010165-a8cc-43e1-b9b6-af44f39f0c46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:18.909396 master-0 kubenswrapper[33867]: I0219 03:41:18.908720 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:18.999620 master-0 kubenswrapper[33867]: I0219 03:41:18.997326 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data" (OuterVolumeSpecName: "config-data") pod "87010165-a8cc-43e1-b9b6-af44f39f0c46" (UID: "87010165-a8cc-43e1-b9b6-af44f39f0c46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:19.052674 master-0 kubenswrapper[33867]: I0219 03:41:19.052596 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87010165-a8cc-43e1-b9b6-af44f39f0c46-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:19.141388 master-0 kubenswrapper[33867]: I0219 03:41:19.137338 33867 scope.go:117] "RemoveContainer" containerID="99a27c5571bd7a78772f28a63d27ff56a44e8e943947da94d146e726c617c2f1" Feb 19 03:41:19.319066 master-0 kubenswrapper[33867]: I0219 03:41:19.309382 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" Feb 19 03:41:19.355395 master-0 kubenswrapper[33867]: I0219 03:41:19.354693 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-4nkcc"] Feb 19 03:41:19.355395 master-0 kubenswrapper[33867]: I0219 03:41:19.354832 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:19.373702 master-0 kubenswrapper[33867]: I0219 03:41:19.373561 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-054a4-scheduler-0"] Feb 19 03:41:19.384879 master-0 kubenswrapper[33867]: I0219 03:41:19.384419 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:19.386399 master-0 kubenswrapper[33867]: I0219 03:41:19.386120 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-054a4-scheduler-0"] Feb 19 03:41:19.423438 master-0 kubenswrapper[33867]: I0219 03:41:19.422142 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-054a4-scheduler-0"] Feb 19 03:41:19.425413 master-0 kubenswrapper[33867]: I0219 03:41:19.425338 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.429475 master-0 kubenswrapper[33867]: I0219 03:41:19.429224 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-scheduler-config-data" Feb 19 03:41:19.433154 master-0 kubenswrapper[33867]: I0219 03:41:19.433045 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-scheduler-0"] Feb 19 03:41:19.471708 master-0 kubenswrapper[33867]: I0219 03:41:19.471646 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-64cdd9cf48-dg7ws"] Feb 19 03:41:19.507177 master-0 kubenswrapper[33867]: I0219 03:41:19.506510 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" event={"ID":"cd60be62-5e2e-4bee-a46e-a202e42adad9","Type":"ContainerStarted","Data":"eabc12df2a18fdef2d4e5997b6b15d34360e6062ca274a2c0ccfa56273b1afc5"} Feb 19 03:41:19.514431 master-0 kubenswrapper[33867]: I0219 03:41:19.514139 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-backup-0" event={"ID":"00b58cd8-030f-4e5f-9808-edd4e1e31d8f","Type":"ContainerStarted","Data":"281c66626c8cfe4e5bbb823ef48684b1c90191d939a29c9aece522bed54e7841"} Feb 19 03:41:19.518560 master-0 kubenswrapper[33867]: I0219 03:41:19.518499 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-4nkcc" event={"ID":"4b00abf9-7737-4850-a303-979795c4b0a3","Type":"ContainerStarted","Data":"01f2572cde3b6e2e487f26fa90f290ddf2f257529451cb461702393f6433761f"} Feb 19 03:41:19.583849 master-0 kubenswrapper[33867]: I0219 03:41:19.583673 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-combined-ca-bundle\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.583849 master-0 kubenswrapper[33867]: I0219 03:41:19.583826 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-config-data-custom\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.583849 master-0 kubenswrapper[33867]: I0219 03:41:19.583856 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-scripts\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.584279 master-0 kubenswrapper[33867]: I0219 03:41:19.584047 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rvth\" (UniqueName: \"kubernetes.io/projected/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-kube-api-access-4rvth\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.584279 master-0 kubenswrapper[33867]: I0219 03:41:19.584118 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-etc-machine-id\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.584365 master-0 kubenswrapper[33867]: I0219 03:41:19.584301 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-config-data\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.712148 master-0 kubenswrapper[33867]: I0219 03:41:19.711560 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-config-data\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.712148 master-0 kubenswrapper[33867]: I0219 03:41:19.711803 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-combined-ca-bundle\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.712148 master-0 kubenswrapper[33867]: I0219 03:41:19.711949 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-config-data-custom\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.712148 master-0 kubenswrapper[33867]: I0219 03:41:19.712003 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-scripts\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.712148 master-0 kubenswrapper[33867]: I0219 03:41:19.712118 33867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4rvth\" (UniqueName: \"kubernetes.io/projected/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-kube-api-access-4rvth\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.712838 master-0 kubenswrapper[33867]: I0219 03:41:19.712240 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-etc-machine-id\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.716434 master-0 kubenswrapper[33867]: I0219 03:41:19.716344 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-etc-machine-id\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.722534 master-0 kubenswrapper[33867]: I0219 03:41:19.722479 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-config-data-custom\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.722710 master-0 kubenswrapper[33867]: I0219 03:41:19.722502 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-config-data\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.729035 master-0 kubenswrapper[33867]: I0219 03:41:19.728918 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-scripts\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.754372 master-0 kubenswrapper[33867]: I0219 03:41:19.744168 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rvth\" (UniqueName: \"kubernetes.io/projected/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-kube-api-access-4rvth\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.776094 master-0 kubenswrapper[33867]: I0219 03:41:19.776031 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70bd69f0-6b7c-44b0-8e7d-27edf886efcf-combined-ca-bundle\") pod \"cinder-054a4-scheduler-0\" (UID: \"70bd69f0-6b7c-44b0-8e7d-27edf886efcf\") " pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:19.919034 master-0 kubenswrapper[33867]: I0219 03:41:19.917572 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:20.075312 master-0 kubenswrapper[33867]: I0219 03:41:20.073490 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-62af-account-create-update-7qh7b"] Feb 19 03:41:20.388014 master-0 kubenswrapper[33867]: I0219 03:41:20.387723 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5bcd64b574-gx489"] Feb 19 03:41:20.493070 master-0 kubenswrapper[33867]: I0219 03:41:20.492812 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7989d45967-nbj4z"] Feb 19 03:41:20.496730 master-0 kubenswrapper[33867]: W0219 03:41:20.496644 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3018370_400e_497b_b612_0f8ac987acf7.slice/crio-b8d688f299c94b6361afdbec7c709819df7d9cd27b7f750a945b24aa310d851d WatchSource:0}: Error finding container b8d688f299c94b6361afdbec7c709819df7d9cd27b7f750a945b24aa310d851d: Status 404 returned error can't find the container with id b8d688f299c94b6361afdbec7c709819df7d9cd27b7f750a945b24aa310d851d Feb 19 03:41:20.568634 master-0 kubenswrapper[33867]: I0219 03:41:20.568165 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" event={"ID":"8580d959-3bd3-4893-8c87-9376d87cba49","Type":"ContainerStarted","Data":"6bb3d7f200872911302e6e13ff9b95b8833baf2031df6b06d608acb6ac672931"} Feb 19 03:41:20.573817 master-0 kubenswrapper[33867]: I0219 03:41:20.573485 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" event={"ID":"cd60be62-5e2e-4bee-a46e-a202e42adad9","Type":"ContainerStarted","Data":"bacf4ef3768db835d892cc8178eb6f1325aaa211596ba78c36edc5da437fc8a8"} Feb 19 03:41:20.581629 master-0 kubenswrapper[33867]: I0219 03:41:20.581556 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-backup-0" event={"ID":"00b58cd8-030f-4e5f-9808-edd4e1e31d8f","Type":"ContainerStarted","Data":"d7a89a70b055afdcea54ec12a49eabefced2b583288e77111c5718a1c2666638"} Feb 19 03:41:20.581629 master-0 kubenswrapper[33867]: I0219 03:41:20.581619 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-backup-0" event={"ID":"00b58cd8-030f-4e5f-9808-edd4e1e31d8f","Type":"ContainerStarted","Data":"35b128b44409a7cf94c9a6a29eeab9bcbef0867ce7b53c81df99fd0613809fef"} Feb 19 03:41:20.584303 master-0 kubenswrapper[33867]: I0219 03:41:20.583748 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" event={"ID":"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb","Type":"ContainerStarted","Data":"cd95aa7a36fa5b57bb275549a090d26f7da93b1b3d7d0fadbf3285a411a98908"} Feb 19 03:41:20.606403 master-0 kubenswrapper[33867]: I0219 03:41:20.604473 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5bcd64b574-gx489" event={"ID":"48632170-8e01-4f9e-8ade-2662bfb392b2","Type":"ContainerStarted","Data":"d36acdbb7211143d9be6418379219657803d20ecd1a2b9b5833861d1d418c8d8"} Feb 19 03:41:20.613013 master-0 kubenswrapper[33867]: I0219 03:41:20.612902 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" podStartSLOduration=4.612865564 podStartE2EDuration="4.612865564s" podCreationTimestamp="2026-02-19 03:41:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:20.605429353 +0000 UTC m=+1085.902099964" watchObservedRunningTime="2026-02-19 03:41:20.612865564 +0000 UTC m=+1085.909536175" Feb 19 03:41:20.616462 master-0 kubenswrapper[33867]: I0219 03:41:20.616361 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" event={"ID":"d3018370-400e-497b-b612-0f8ac987acf7","Type":"ContainerStarted","Data":"b8d688f299c94b6361afdbec7c709819df7d9cd27b7f750a945b24aa310d851d"} Feb 19 03:41:20.659979 master-0 kubenswrapper[33867]: I0219 03:41:20.659892 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-scheduler-0"] Feb 19 03:41:20.660625 master-0 kubenswrapper[33867]: I0219 03:41:20.660508 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-054a4-backup-0" podStartSLOduration=4.660481742 podStartE2EDuration="4.660481742s" podCreationTimestamp="2026-02-19 03:41:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:20.64203311 +0000 UTC m=+1085.938703741" watchObservedRunningTime="2026-02-19 03:41:20.660481742 +0000 UTC m=+1085.957152353" Feb 19 03:41:20.986415 master-0 kubenswrapper[33867]: I0219 03:41:20.979945 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87010165-a8cc-43e1-b9b6-af44f39f0c46" path="/var/lib/kubelet/pods/87010165-a8cc-43e1-b9b6-af44f39f0c46/volumes" Feb 19 03:41:21.563318 master-0 kubenswrapper[33867]: I0219 03:41:21.563246 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-6ddb5778b6-l9w7m"] Feb 19 03:41:21.577344 master-0 kubenswrapper[33867]: I0219 03:41:21.577220 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.580495 master-0 kubenswrapper[33867]: I0219 03:41:21.580100 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6ddb5778b6-l9w7m"] Feb 19 03:41:21.582310 master-0 kubenswrapper[33867]: I0219 03:41:21.581912 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Feb 19 03:41:21.582310 master-0 kubenswrapper[33867]: I0219 03:41:21.582178 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Feb 19 03:41:21.677903 master-0 kubenswrapper[33867]: I0219 03:41:21.677766 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:21.689406 master-0 kubenswrapper[33867]: I0219 03:41:21.689324 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-internal-tls-certs\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.689660 master-0 kubenswrapper[33867]: I0219 03:41:21.689441 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-config-data-custom\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.689729 master-0 kubenswrapper[33867]: I0219 03:41:21.689663 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e3524599-68ae-4932-8b2f-7a5e277ad153-etc-podinfo\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.689779 master-0 kubenswrapper[33867]: I0219 03:41:21.689730 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5xnc\" (UniqueName: \"kubernetes.io/projected/e3524599-68ae-4932-8b2f-7a5e277ad153-kube-api-access-l5xnc\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.689902 master-0 kubenswrapper[33867]: I0219 03:41:21.689875 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-scripts\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.689959 master-0 kubenswrapper[33867]: I0219 03:41:21.689914 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-config-data\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.689959 master-0 kubenswrapper[33867]: I0219 03:41:21.689950 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/e3524599-68ae-4932-8b2f-7a5e277ad153-config-data-merged\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.690094 master-0 kubenswrapper[33867]: I0219 03:41:21.690070 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-public-tls-certs\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.694847 master-0 kubenswrapper[33867]: I0219 03:41:21.694777 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-combined-ca-bundle\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.694940 master-0 kubenswrapper[33867]: I0219 03:41:21.694921 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3524599-68ae-4932-8b2f-7a5e277ad153-logs\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.720880 master-0 kubenswrapper[33867]: I0219 03:41:21.720800 33867 generic.go:334] "Generic (PLEG): container finished" podID="4b00abf9-7737-4850-a303-979795c4b0a3" containerID="67c9f952d452920777d202739f9534332a26b234c7b532a036c0f705ea898107" exitCode=0 Feb 19 03:41:21.721097 master-0 kubenswrapper[33867]: I0219 03:41:21.720910 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-4nkcc" event={"ID":"4b00abf9-7737-4850-a303-979795c4b0a3","Type":"ContainerDied","Data":"67c9f952d452920777d202739f9534332a26b234c7b532a036c0f705ea898107"} Feb 19 03:41:21.729605 master-0 kubenswrapper[33867]: I0219 03:41:21.729544 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-scheduler-0" event={"ID":"70bd69f0-6b7c-44b0-8e7d-27edf886efcf","Type":"ContainerStarted","Data":"54328afc5c63839711760a5b32830dc5f140a52aa54d772e9d846808c41de600"} Feb 19 03:41:21.753984 master-0 kubenswrapper[33867]: I0219 03:41:21.752581 33867 generic.go:334] "Generic (PLEG): container finished" podID="d3018370-400e-497b-b612-0f8ac987acf7" containerID="2f6e8cebffc3d8728822bedbebdd8e8be1a9e01d4a4ecc036ab8735295c61532" exitCode=0 Feb 19 03:41:21.753984 master-0 kubenswrapper[33867]: I0219 03:41:21.752756 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" event={"ID":"d3018370-400e-497b-b612-0f8ac987acf7","Type":"ContainerDied","Data":"2f6e8cebffc3d8728822bedbebdd8e8be1a9e01d4a4ecc036ab8735295c61532"} Feb 19 03:41:21.761422 master-0 kubenswrapper[33867]: I0219 03:41:21.757298 33867 generic.go:334] "Generic (PLEG): container finished" podID="8580d959-3bd3-4893-8c87-9376d87cba49" containerID="cd18f4f021a44060dd7dc69108acdb0267f11edc3f9c4b05e01d997d55d3da13" exitCode=0 Feb 19 03:41:21.761422 master-0 kubenswrapper[33867]: I0219 03:41:21.757651 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" 
event={"ID":"8580d959-3bd3-4893-8c87-9376d87cba49","Type":"ContainerDied","Data":"cd18f4f021a44060dd7dc69108acdb0267f11edc3f9c4b05e01d997d55d3da13"} Feb 19 03:41:21.798477 master-0 kubenswrapper[33867]: I0219 03:41:21.798019 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-scripts\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.798477 master-0 kubenswrapper[33867]: I0219 03:41:21.798107 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-config-data\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.798477 master-0 kubenswrapper[33867]: I0219 03:41:21.798145 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e3524599-68ae-4932-8b2f-7a5e277ad153-config-data-merged\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.798477 master-0 kubenswrapper[33867]: I0219 03:41:21.798223 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-public-tls-certs\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.798477 master-0 kubenswrapper[33867]: I0219 03:41:21.798341 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-combined-ca-bundle\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.798477 master-0 kubenswrapper[33867]: I0219 03:41:21.798384 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3524599-68ae-4932-8b2f-7a5e277ad153-logs\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.798477 master-0 kubenswrapper[33867]: I0219 03:41:21.798452 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-internal-tls-certs\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.798477 master-0 kubenswrapper[33867]: I0219 03:41:21.798488 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-config-data-custom\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.798925 master-0 kubenswrapper[33867]: I0219 03:41:21.798592 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e3524599-68ae-4932-8b2f-7a5e277ad153-etc-podinfo\") pod 
\"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.798925 master-0 kubenswrapper[33867]: I0219 03:41:21.798633 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xnc\" (UniqueName: \"kubernetes.io/projected/e3524599-68ae-4932-8b2f-7a5e277ad153-kube-api-access-l5xnc\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.803283 master-0 kubenswrapper[33867]: I0219 03:41:21.799711 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3524599-68ae-4932-8b2f-7a5e277ad153-logs\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.803759 master-0 kubenswrapper[33867]: I0219 03:41:21.803650 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e3524599-68ae-4932-8b2f-7a5e277ad153-config-data-merged\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.820161 master-0 kubenswrapper[33867]: I0219 03:41:21.812222 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-public-tls-certs\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.820161 master-0 kubenswrapper[33867]: I0219 03:41:21.815157 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-config-data-custom\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.820161 master-0 kubenswrapper[33867]: I0219 03:41:21.816481 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-combined-ca-bundle\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.836780 master-0 kubenswrapper[33867]: I0219 03:41:21.836013 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-internal-tls-certs\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.850112 master-0 kubenswrapper[33867]: I0219 03:41:21.849755 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5xnc\" (UniqueName: \"kubernetes.io/projected/e3524599-68ae-4932-8b2f-7a5e277ad153-kube-api-access-l5xnc\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.850112 master-0 kubenswrapper[33867]: I0219 03:41:21.849809 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e3524599-68ae-4932-8b2f-7a5e277ad153-etc-podinfo\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: 
\"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.850486 master-0 kubenswrapper[33867]: I0219 03:41:21.850031 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-config-data\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.866204 master-0 kubenswrapper[33867]: I0219 03:41:21.856990 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3524599-68ae-4932-8b2f-7a5e277ad153-scripts\") pod \"ironic-6ddb5778b6-l9w7m\" (UID: \"e3524599-68ae-4932-8b2f-7a5e277ad153\") " pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.905436 master-0 kubenswrapper[33867]: I0219 03:41:21.901451 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:21.916369 master-0 kubenswrapper[33867]: I0219 03:41:21.916301 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Feb 19 03:41:21.925709 master-0 kubenswrapper[33867]: I0219 03:41:21.925654 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Feb 19 03:41:21.930830 master-0 kubenswrapper[33867]: I0219 03:41:21.930768 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Feb 19 03:41:21.931072 master-0 kubenswrapper[33867]: I0219 03:41:21.931042 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Feb 19 03:41:21.989652 master-0 kubenswrapper[33867]: I0219 03:41:21.989580 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 19 03:41:22.004487 master-0 kubenswrapper[33867]: I0219 03:41:22.004310 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.004487 master-0 kubenswrapper[33867]: I0219 03:41:22.004476 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.004754 master-0 kubenswrapper[33867]: I0219 03:41:22.004509 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-config-data\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.004754 master-0 kubenswrapper[33867]: I0219 03:41:22.004580 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9c830f8b-3d33-4879-91b9-bd374a1e695b-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.004754 
master-0 kubenswrapper[33867]: I0219 03:41:22.004659 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shz7m\" (UniqueName: \"kubernetes.io/projected/9c830f8b-3d33-4879-91b9-bd374a1e695b-kube-api-access-shz7m\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.004754 master-0 kubenswrapper[33867]: I0219 03:41:22.004681 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9c830f8b-3d33-4879-91b9-bd374a1e695b-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.004754 master-0 kubenswrapper[33867]: I0219 03:41:22.004748 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-scripts\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.004916 master-0 kubenswrapper[33867]: I0219 03:41:22.004802 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-27d9da01-597c-4972-adfa-98e947c35738\" (UniqueName: \"kubernetes.io/csi/topolvm.io^89184680-4651-4b0d-b2a9-287714691930\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.106800 master-0 kubenswrapper[33867]: I0219 03:41:22.106712 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.106800 master-0 kubenswrapper[33867]: I0219 03:41:22.106790 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-config-data\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.107271 master-0 kubenswrapper[33867]: I0219 03:41:22.106875 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9c830f8b-3d33-4879-91b9-bd374a1e695b-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.107271 master-0 kubenswrapper[33867]: I0219 03:41:22.106928 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shz7m\" (UniqueName: \"kubernetes.io/projected/9c830f8b-3d33-4879-91b9-bd374a1e695b-kube-api-access-shz7m\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.107271 master-0 kubenswrapper[33867]: I0219 03:41:22.106947 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9c830f8b-3d33-4879-91b9-bd374a1e695b-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.107271 
master-0 kubenswrapper[33867]: I0219 03:41:22.107018 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-scripts\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.107271 master-0 kubenswrapper[33867]: I0219 03:41:22.107064 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-27d9da01-597c-4972-adfa-98e947c35738\" (UniqueName: \"kubernetes.io/csi/topolvm.io^89184680-4651-4b0d-b2a9-287714691930\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.107271 master-0 kubenswrapper[33867]: I0219 03:41:22.107149 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.111415 master-0 kubenswrapper[33867]: I0219 03:41:22.108591 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9c830f8b-3d33-4879-91b9-bd374a1e695b-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.112910 master-0 kubenswrapper[33867]: I0219 03:41:22.112827 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 03:41:22.112910 master-0 kubenswrapper[33867]: I0219 03:41:22.112880 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-27d9da01-597c-4972-adfa-98e947c35738\" (UniqueName: \"kubernetes.io/csi/topolvm.io^89184680-4651-4b0d-b2a9-287714691930\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6045f27ccf357ffa97940e2f9c39b3dc6374e4c2bf277b0a2503f2c2f45a66f5/globalmount\"" pod="openstack/ironic-conductor-0" Feb 19 03:41:22.122118 master-0 kubenswrapper[33867]: I0219 03:41:22.121695 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.132540 master-0 kubenswrapper[33867]: I0219 03:41:22.123125 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9c830f8b-3d33-4879-91b9-bd374a1e695b-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.132540 master-0 kubenswrapper[33867]: I0219 03:41:22.129602 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-config-data\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.136660 master-0 kubenswrapper[33867]: I0219 03:41:22.136579 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-shz7m\" (UniqueName: \"kubernetes.io/projected/9c830f8b-3d33-4879-91b9-bd374a1e695b-kube-api-access-shz7m\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.183289 master-0 kubenswrapper[33867]: I0219 03:41:22.174484 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-scripts\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.215323 master-0 kubenswrapper[33867]: I0219 03:41:22.210348 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c830f8b-3d33-4879-91b9-bd374a1e695b-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:22.245277 master-0 kubenswrapper[33867]: I0219 03:41:22.239858 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:22.601742 master-0 kubenswrapper[33867]: I0219 03:41:22.601549 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6ddb5778b6-l9w7m"] Feb 19 03:41:22.775328 master-0 kubenswrapper[33867]: I0219 03:41:22.775214 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-scheduler-0" event={"ID":"70bd69f0-6b7c-44b0-8e7d-27edf886efcf","Type":"ContainerStarted","Data":"cfbaa299f15ca5d0ec9bac7f7d82eaf122f7902ae160c6de293a7756ab55b531"} Feb 19 03:41:23.132513 master-0 kubenswrapper[33867]: I0219 03:41:23.130891 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-854445f596-6p84s" Feb 19 03:41:23.197378 master-0 kubenswrapper[33867]: W0219 03:41:23.197075 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3524599_68ae_4932_8b2f_7a5e277ad153.slice/crio-fc1a8efbe22e4b00fd0b3c434828eea93b66bc9a9b053518edd762c150268ebe WatchSource:0}: Error finding container fc1a8efbe22e4b00fd0b3c434828eea93b66bc9a9b053518edd762c150268ebe: Status 404 returned error can't find the container with id fc1a8efbe22e4b00fd0b3c434828eea93b66bc9a9b053518edd762c150268ebe Feb 19 03:41:23.353302 master-0 kubenswrapper[33867]: I0219 03:41:23.350614 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-854445f596-6p84s" Feb 19 03:41:23.594501 master-0 kubenswrapper[33867]: I0219 03:41:23.594418 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-27d9da01-597c-4972-adfa-98e947c35738\" (UniqueName: \"kubernetes.io/csi/topolvm.io^89184680-4651-4b0d-b2a9-287714691930\") pod \"ironic-conductor-0\" (UID: \"9c830f8b-3d33-4879-91b9-bd374a1e695b\") " pod="openstack/ironic-conductor-0" Feb 19 03:41:23.894682 master-0 kubenswrapper[33867]: I0219 03:41:23.892687 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Feb 19 03:41:23.894682 master-0 kubenswrapper[33867]: I0219 03:41:23.894239 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:41:23.897440 master-0 kubenswrapper[33867]: I0219 03:41:23.897383 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6ddb5778b6-l9w7m" event={"ID":"e3524599-68ae-4932-8b2f-7a5e277ad153","Type":"ContainerStarted","Data":"fc1a8efbe22e4b00fd0b3c434828eea93b66bc9a9b053518edd762c150268ebe"} Feb 19 03:41:23.903728 master-0 kubenswrapper[33867]: I0219 03:41:23.903674 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-659db66d4-26vz9"] Feb 19 03:41:23.932367 master-0 kubenswrapper[33867]: I0219 03:41:23.928766 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:23.963288 master-0 kubenswrapper[33867]: I0219 03:41:23.963022 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-659db66d4-26vz9"] Feb 19 03:41:23.973297 master-0 kubenswrapper[33867]: I0219 03:41:23.968474 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" podStartSLOduration=5.968442899 podStartE2EDuration="5.968442899s" podCreationTimestamp="2026-02-19 03:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:23.925297237 +0000 UTC m=+1089.221967848" watchObservedRunningTime="2026-02-19 03:41:23.968442899 +0000 UTC m=+1089.265113510" Feb 19 03:41:23.990013 master-0 kubenswrapper[33867]: I0219 03:41:23.989689 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-config-data\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:23.990013 master-0 kubenswrapper[33867]: I0219 03:41:23.989781 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef247635-c161-4402-b9f0-6b9e4e9bc42b-logs\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:23.990013 master-0 kubenswrapper[33867]: I0219 03:41:23.989803 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-scripts\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:23.990013 master-0 kubenswrapper[33867]: I0219 03:41:23.989824 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-public-tls-certs\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:23.994295 master-0 kubenswrapper[33867]: I0219 03:41:23.990742 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-combined-ca-bundle\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:23.994295 master-0 kubenswrapper[33867]: I0219 03:41:23.991126 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8kkw\" (UniqueName: \"kubernetes.io/projected/ef247635-c161-4402-b9f0-6b9e4e9bc42b-kube-api-access-n8kkw\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:23.994295 master-0 kubenswrapper[33867]: I0219 03:41:23.991274 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-internal-tls-certs\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.093596 master-0 kubenswrapper[33867]: I0219 03:41:24.092828 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-config-data\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.093596 master-0 kubenswrapper[33867]: I0219 03:41:24.092907 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef247635-c161-4402-b9f0-6b9e4e9bc42b-logs\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.093596 master-0 kubenswrapper[33867]: I0219 03:41:24.092933 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-scripts\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.093596 master-0 kubenswrapper[33867]: I0219 03:41:24.093448 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef247635-c161-4402-b9f0-6b9e4e9bc42b-logs\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.093596 master-0 kubenswrapper[33867]: I0219 03:41:24.093522 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-public-tls-certs\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.093596 master-0 kubenswrapper[33867]: I0219 03:41:24.093568 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-combined-ca-bundle\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.094157 master-0 kubenswrapper[33867]: I0219 03:41:24.093661 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n8kkw\" (UniqueName: \"kubernetes.io/projected/ef247635-c161-4402-b9f0-6b9e4e9bc42b-kube-api-access-n8kkw\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.094157 master-0 kubenswrapper[33867]: I0219 03:41:24.093716 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-internal-tls-certs\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.108657 master-0 kubenswrapper[33867]: I0219 03:41:24.108567 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-scripts\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.109196 master-0 kubenswrapper[33867]: I0219 03:41:24.109139 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-combined-ca-bundle\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.112778 master-0 kubenswrapper[33867]: I0219 03:41:24.110212 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-config-data\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.114552 master-0 kubenswrapper[33867]: I0219 03:41:24.114491 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-internal-tls-certs\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.120145 master-0 kubenswrapper[33867]: I0219 03:41:24.120073 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef247635-c161-4402-b9f0-6b9e4e9bc42b-public-tls-certs\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.120395 master-0 kubenswrapper[33867]: I0219 03:41:24.120104 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8kkw\" (UniqueName: \"kubernetes.io/projected/ef247635-c161-4402-b9f0-6b9e4e9bc42b-kube-api-access-n8kkw\") pod \"placement-659db66d4-26vz9\" (UID: \"ef247635-c161-4402-b9f0-6b9e4e9bc42b\") " pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.299378 master-0 kubenswrapper[33867]: I0219 03:41:24.295243 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:24.823571 master-0 kubenswrapper[33867]: I0219 03:41:24.821649 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-4nkcc" Feb 19 03:41:24.843673 master-0 kubenswrapper[33867]: I0219 03:41:24.843581 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" Feb 19 03:41:24.948294 master-0 kubenswrapper[33867]: I0219 03:41:24.948192 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b00abf9-7737-4850-a303-979795c4b0a3-operator-scripts\") pod \"4b00abf9-7737-4850-a303-979795c4b0a3\" (UID: \"4b00abf9-7737-4850-a303-979795c4b0a3\") " Feb 19 03:41:24.948294 master-0 kubenswrapper[33867]: I0219 03:41:24.948298 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8580d959-3bd3-4893-8c87-9376d87cba49-operator-scripts\") pod \"8580d959-3bd3-4893-8c87-9376d87cba49\" (UID: \"8580d959-3bd3-4893-8c87-9376d87cba49\") " Feb 19 03:41:24.952703 master-0 kubenswrapper[33867]: I0219 03:41:24.948566 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqkqw\" (UniqueName: \"kubernetes.io/projected/4b00abf9-7737-4850-a303-979795c4b0a3-kube-api-access-nqkqw\") pod \"4b00abf9-7737-4850-a303-979795c4b0a3\" (UID: \"4b00abf9-7737-4850-a303-979795c4b0a3\") " Feb 19 03:41:24.952703 master-0 kubenswrapper[33867]: I0219 03:41:24.948764 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjndn\" (UniqueName: \"kubernetes.io/projected/8580d959-3bd3-4893-8c87-9376d87cba49-kube-api-access-hjndn\") pod \"8580d959-3bd3-4893-8c87-9376d87cba49\" (UID: \"8580d959-3bd3-4893-8c87-9376d87cba49\") " Feb 19 03:41:24.952703 master-0 kubenswrapper[33867]: I0219 03:41:24.948820 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8580d959-3bd3-4893-8c87-9376d87cba49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8580d959-3bd3-4893-8c87-9376d87cba49" (UID: "8580d959-3bd3-4893-8c87-9376d87cba49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:24.952703 master-0 kubenswrapper[33867]: I0219 03:41:24.949313 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b00abf9-7737-4850-a303-979795c4b0a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b00abf9-7737-4850-a303-979795c4b0a3" (UID: "4b00abf9-7737-4850-a303-979795c4b0a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:24.952703 master-0 kubenswrapper[33867]: I0219 03:41:24.950561 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b00abf9-7737-4850-a303-979795c4b0a3-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:24.952703 master-0 kubenswrapper[33867]: I0219 03:41:24.950590 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8580d959-3bd3-4893-8c87-9376d87cba49-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:24.955023 master-0 kubenswrapper[33867]: I0219 03:41:24.954735 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8580d959-3bd3-4893-8c87-9376d87cba49-kube-api-access-hjndn" (OuterVolumeSpecName: "kube-api-access-hjndn") pod "8580d959-3bd3-4893-8c87-9376d87cba49" (UID: "8580d959-3bd3-4893-8c87-9376d87cba49"). InnerVolumeSpecName "kube-api-access-hjndn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:24.958696 master-0 kubenswrapper[33867]: I0219 03:41:24.958634 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b00abf9-7737-4850-a303-979795c4b0a3-kube-api-access-nqkqw" (OuterVolumeSpecName: "kube-api-access-nqkqw") pod "4b00abf9-7737-4850-a303-979795c4b0a3" (UID: "4b00abf9-7737-4850-a303-979795c4b0a3"). InnerVolumeSpecName "kube-api-access-nqkqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:25.034752 master-0 kubenswrapper[33867]: I0219 03:41:25.034652 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" event={"ID":"8580d959-3bd3-4893-8c87-9376d87cba49","Type":"ContainerDied","Data":"6bb3d7f200872911302e6e13ff9b95b8833baf2031df6b06d608acb6ac672931"} Feb 19 03:41:25.034752 master-0 kubenswrapper[33867]: I0219 03:41:25.034740 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bb3d7f200872911302e6e13ff9b95b8833baf2031df6b06d608acb6ac672931" Feb 19 03:41:25.035045 master-0 kubenswrapper[33867]: I0219 03:41:25.034798 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-62af-account-create-update-7qh7b" Feb 19 03:41:25.041514 master-0 kubenswrapper[33867]: I0219 03:41:25.041448 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-4nkcc" event={"ID":"4b00abf9-7737-4850-a303-979795c4b0a3","Type":"ContainerDied","Data":"01f2572cde3b6e2e487f26fa90f290ddf2f257529451cb461702393f6433761f"} Feb 19 03:41:25.041763 master-0 kubenswrapper[33867]: I0219 03:41:25.041549 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01f2572cde3b6e2e487f26fa90f290ddf2f257529451cb461702393f6433761f" Feb 19 03:41:25.041763 master-0 kubenswrapper[33867]: I0219 03:41:25.041652 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-4nkcc" Feb 19 03:41:25.046536 master-0 kubenswrapper[33867]: I0219 03:41:25.046222 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" event={"ID":"d3018370-400e-497b-b612-0f8ac987acf7","Type":"ContainerStarted","Data":"d01cd5765bd85ccd47fb141ec184364829833444bed578b10a95e6370705d5cd"} Feb 19 03:41:25.053740 master-0 kubenswrapper[33867]: I0219 03:41:25.053686 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjndn\" (UniqueName: \"kubernetes.io/projected/8580d959-3bd3-4893-8c87-9376d87cba49-kube-api-access-hjndn\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:25.053740 master-0 kubenswrapper[33867]: I0219 03:41:25.053725 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqkqw\" (UniqueName: \"kubernetes.io/projected/4b00abf9-7737-4850-a303-979795c4b0a3-kube-api-access-nqkqw\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:25.437524 master-0 kubenswrapper[33867]: I0219 03:41:25.437361 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 19 03:41:25.470022 master-0 kubenswrapper[33867]: W0219 03:41:25.469529 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c830f8b_3d33_4879_91b9_bd374a1e695b.slice/crio-af5735b1b13b48c37043cc06b51dee7babf959d5aaadff8782fab63de89c13f6 WatchSource:0}: Error finding container af5735b1b13b48c37043cc06b51dee7babf959d5aaadff8782fab63de89c13f6: Status 404 returned error can't find the container with id af5735b1b13b48c37043cc06b51dee7babf959d5aaadff8782fab63de89c13f6 Feb 19 03:41:25.485037 master-0 kubenswrapper[33867]: I0219 03:41:25.484947 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-659db66d4-26vz9"] Feb 19 03:41:26.072376 master-0 kubenswrapper[33867]: I0219 03:41:26.072306 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" event={"ID":"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb","Type":"ContainerStarted","Data":"925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4"} Feb 19 03:41:26.073160 master-0 kubenswrapper[33867]: I0219 03:41:26.073141 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:26.083476 master-0 kubenswrapper[33867]: I0219 03:41:26.083386 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-scheduler-0" event={"ID":"70bd69f0-6b7c-44b0-8e7d-27edf886efcf","Type":"ContainerStarted","Data":"83b4e0d17f2d05ce23b39a530f57ca4b5a1f5bd895b45868ba54acdb812b5e95"} Feb 19 03:41:26.087905 master-0 kubenswrapper[33867]: I0219 03:41:26.087818 33867 generic.go:334] "Generic (PLEG): container finished" podID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerID="31301b894042c9b632ac03692aef577af55d2be8fc6af19977e6b550e95eedeb" exitCode=0 Feb 19 03:41:26.088132 master-0 kubenswrapper[33867]: I0219 03:41:26.088017 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5bcd64b574-gx489" event={"ID":"48632170-8e01-4f9e-8ade-2662bfb392b2","Type":"ContainerDied","Data":"31301b894042c9b632ac03692aef577af55d2be8fc6af19977e6b550e95eedeb"} Feb 19 03:41:26.123110 master-0 kubenswrapper[33867]: I0219 03:41:26.122066 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" podStartSLOduration=3.909548088 podStartE2EDuration="9.122038757s" podCreationTimestamp="2026-02-19 03:41:17 +0000 UTC" firstStartedPulling="2026-02-19 03:41:19.51482232 +0000 UTC m=+1084.811492931" lastFinishedPulling="2026-02-19 03:41:24.727312989 +0000 UTC m=+1090.023983600" observedRunningTime="2026-02-19 03:41:26.10275064 +0000 UTC m=+1091.399421271" watchObservedRunningTime="2026-02-19 03:41:26.122038757 +0000 UTC m=+1091.418709388" Feb 19 03:41:26.130726 master-0 kubenswrapper[33867]: I0219 03:41:26.130652 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6ddb5778b6-l9w7m" event={"ID":"e3524599-68ae-4932-8b2f-7a5e277ad153","Type":"ContainerDied","Data":"9b34b1d2b7db0738e45c1bd54dee89b930eba5e1e82f51b72bf3ce887f35890b"} Feb 19 03:41:26.130811 master-0 kubenswrapper[33867]: I0219 03:41:26.130690 33867 generic.go:334] "Generic (PLEG): container finished" podID="e3524599-68ae-4932-8b2f-7a5e277ad153" containerID="9b34b1d2b7db0738e45c1bd54dee89b930eba5e1e82f51b72bf3ce887f35890b" exitCode=0 Feb 19 03:41:26.173396 master-0 kubenswrapper[33867]: I0219 03:41:26.173313 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"9c830f8b-3d33-4879-91b9-bd374a1e695b","Type":"ContainerStarted","Data":"511e776352ffc1a57175547bfddd83f32b62b7b20d09437a9c680518c07ff545"} Feb 19 03:41:26.173616 master-0 kubenswrapper[33867]: I0219 03:41:26.173592 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"9c830f8b-3d33-4879-91b9-bd374a1e695b","Type":"ContainerStarted","Data":"af5735b1b13b48c37043cc06b51dee7babf959d5aaadff8782fab63de89c13f6"} Feb 19 03:41:26.273754 master-0 kubenswrapper[33867]: I0219 03:41:26.273704 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659db66d4-26vz9" event={"ID":"ef247635-c161-4402-b9f0-6b9e4e9bc42b","Type":"ContainerStarted","Data":"350c721a0b331a338f394cb9bde87f4c239ea3b4cac1266d4453cf56d92be9a1"} Feb 19 03:41:26.274021 master-0 kubenswrapper[33867]: I0219 03:41:26.274009 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659db66d4-26vz9" event={"ID":"ef247635-c161-4402-b9f0-6b9e4e9bc42b","Type":"ContainerStarted","Data":"c8264b4b85160dda0781df1bee8e7a74c606410110fa8619096feb6efc039ef5"} Feb 19 03:41:26.274659 master-0 kubenswrapper[33867]: I0219 03:41:26.274143 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-054a4-scheduler-0" podStartSLOduration=7.274114703 podStartE2EDuration="7.274114703s" podCreationTimestamp="2026-02-19 03:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:26.249148676 +0000 UTC m=+1091.545819287" watchObservedRunningTime="2026-02-19 03:41:26.274114703 +0000 UTC m=+1091.570785314" Feb 19 03:41:26.975020 master-0 kubenswrapper[33867]: I0219 03:41:26.974899 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-054a4-volume-lvm-iscsi-0" Feb 19 03:41:27.290546 master-0 kubenswrapper[33867]: I0219 03:41:27.290367 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6ddb5778b6-l9w7m" event={"ID":"e3524599-68ae-4932-8b2f-7a5e277ad153","Type":"ContainerStarted","Data":"099f5371fbb9867902150f1165d3dd762ac1c7e3bb39f10b801b53db3bbe1900"} Feb 19 03:41:27.290546 master-0 kubenswrapper[33867]: I0219 
03:41:27.290442 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6ddb5778b6-l9w7m" event={"ID":"e3524599-68ae-4932-8b2f-7a5e277ad153","Type":"ContainerStarted","Data":"aa2707e601538f77bf7d01bc1b7d19e4e59795d0ee60d176258762c925c4d20f"} Feb 19 03:41:27.291683 master-0 kubenswrapper[33867]: I0219 03:41:27.291620 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:27.296504 master-0 kubenswrapper[33867]: I0219 03:41:27.295667 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659db66d4-26vz9" event={"ID":"ef247635-c161-4402-b9f0-6b9e4e9bc42b","Type":"ContainerStarted","Data":"853d12102c2affb54254faf61a08f3a021d9e115aee6247e08cfa1e237e193ee"} Feb 19 03:41:27.296504 master-0 kubenswrapper[33867]: I0219 03:41:27.296412 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:27.296504 master-0 kubenswrapper[33867]: I0219 03:41:27.296440 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:27.303241 master-0 kubenswrapper[33867]: I0219 03:41:27.303193 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5bcd64b574-gx489" event={"ID":"48632170-8e01-4f9e-8ade-2662bfb392b2","Type":"ContainerStarted","Data":"d0c8db38ddd29dd5802a653fac94ceff5dbcb6e5f1ada15bba31d197c9d453b9"} Feb 19 03:41:27.303343 master-0 kubenswrapper[33867]: I0219 03:41:27.303269 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5bcd64b574-gx489" event={"ID":"48632170-8e01-4f9e-8ade-2662bfb392b2","Type":"ContainerStarted","Data":"a147e3b9cd363931f92adee6f18cb36c5a2776e443f67c6f9bcf0199cef58205"} Feb 19 03:41:27.303774 master-0 kubenswrapper[33867]: I0219 03:41:27.303717 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:27.358612 master-0 kubenswrapper[33867]: I0219 03:41:27.358439 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-6ddb5778b6-l9w7m" podStartSLOduration=4.754037855 podStartE2EDuration="6.358411058s" podCreationTimestamp="2026-02-19 03:41:21 +0000 UTC" firstStartedPulling="2026-02-19 03:41:23.208691845 +0000 UTC m=+1088.505362456" lastFinishedPulling="2026-02-19 03:41:24.813065048 +0000 UTC m=+1090.109735659" observedRunningTime="2026-02-19 03:41:27.320720391 +0000 UTC m=+1092.617391002" watchObservedRunningTime="2026-02-19 03:41:27.358411058 +0000 UTC m=+1092.655081669" Feb 19 03:41:27.411280 master-0 kubenswrapper[33867]: I0219 03:41:27.403487 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-5bcd64b574-gx489" podStartSLOduration=5.068536005 podStartE2EDuration="9.403465234s" podCreationTimestamp="2026-02-19 03:41:18 +0000 UTC" firstStartedPulling="2026-02-19 03:41:20.395526979 +0000 UTC m=+1085.692197590" lastFinishedPulling="2026-02-19 03:41:24.730456208 +0000 UTC m=+1090.027126819" observedRunningTime="2026-02-19 03:41:27.348977641 +0000 UTC m=+1092.645648252" watchObservedRunningTime="2026-02-19 03:41:27.403465234 +0000 UTC m=+1092.700135845" Feb 19 03:41:27.422280 master-0 kubenswrapper[33867]: I0219 03:41:27.417889 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-659db66d4-26vz9" podStartSLOduration=4.417869582 podStartE2EDuration="4.417869582s" podCreationTimestamp="2026-02-19 
03:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:27.388708946 +0000 UTC m=+1092.685379557" watchObservedRunningTime="2026-02-19 03:41:27.417869582 +0000 UTC m=+1092.714540193" Feb 19 03:41:27.540404 master-0 kubenswrapper[33867]: I0219 03:41:27.540280 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-054a4-backup-0" Feb 19 03:41:28.311132 master-0 kubenswrapper[33867]: E0219 03:41:28.311038 33867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4 is running failed: container process not found" containerID="925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4" cmd=["/bin/true"] Feb 19 03:41:28.311535 master-0 kubenswrapper[33867]: E0219 03:41:28.310996 33867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4 is running failed: container process not found" containerID="925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4" cmd=["/bin/true"] Feb 19 03:41:28.311657 master-0 kubenswrapper[33867]: E0219 03:41:28.311587 33867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4 is running failed: container process not found" containerID="925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4" cmd=["/bin/true"] Feb 19 03:41:28.311742 master-0 kubenswrapper[33867]: E0219 03:41:28.311716 33867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4 is running failed: container process not found" containerID="925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4" cmd=["/bin/true"] Feb 19 03:41:28.311990 master-0 kubenswrapper[33867]: E0219 03:41:28.311954 33867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4 is running failed: container process not found" containerID="925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4" cmd=["/bin/true"] Feb 19 03:41:28.312049 master-0 kubenswrapper[33867]: E0219 03:41:28.311984 33867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4 is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" podUID="6a7f405f-ed33-4311-84a9-6aaf1fd4dadb" containerName="ironic-neutron-agent" Feb 19 03:41:28.312049 master-0 kubenswrapper[33867]: E0219 03:41:28.312034 33867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4 is running failed: container process not found" 
containerID="925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4" cmd=["/bin/true"] Feb 19 03:41:28.312146 master-0 kubenswrapper[33867]: E0219 03:41:28.312048 33867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4 is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" podUID="6a7f405f-ed33-4311-84a9-6aaf1fd4dadb" containerName="ironic-neutron-agent" Feb 19 03:41:28.326570 master-0 kubenswrapper[33867]: I0219 03:41:28.326335 33867 generic.go:334] "Generic (PLEG): container finished" podID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerID="d0c8db38ddd29dd5802a653fac94ceff5dbcb6e5f1ada15bba31d197c9d453b9" exitCode=1 Feb 19 03:41:28.326817 master-0 kubenswrapper[33867]: I0219 03:41:28.326688 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5bcd64b574-gx489" event={"ID":"48632170-8e01-4f9e-8ade-2662bfb392b2","Type":"ContainerDied","Data":"d0c8db38ddd29dd5802a653fac94ceff5dbcb6e5f1ada15bba31d197c9d453b9"} Feb 19 03:41:28.327658 master-0 kubenswrapper[33867]: I0219 03:41:28.327625 33867 scope.go:117] "RemoveContainer" containerID="d0c8db38ddd29dd5802a653fac94ceff5dbcb6e5f1ada15bba31d197c9d453b9" Feb 19 03:41:29.344653 master-0 kubenswrapper[33867]: I0219 03:41:29.343933 33867 generic.go:334] "Generic (PLEG): container finished" podID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerID="de040ccb428b37344e4f70fa4df00b79a3f0ef079d8dfaa0ae0c2a7320ef69b7" exitCode=1 Feb 19 03:41:29.344653 master-0 kubenswrapper[33867]: I0219 03:41:29.344029 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5bcd64b574-gx489" event={"ID":"48632170-8e01-4f9e-8ade-2662bfb392b2","Type":"ContainerDied","Data":"de040ccb428b37344e4f70fa4df00b79a3f0ef079d8dfaa0ae0c2a7320ef69b7"} Feb 19 03:41:29.344653 master-0 kubenswrapper[33867]: I0219 03:41:29.344078 33867 scope.go:117] "RemoveContainer" containerID="d0c8db38ddd29dd5802a653fac94ceff5dbcb6e5f1ada15bba31d197c9d453b9" Feb 19 03:41:29.345114 master-0 kubenswrapper[33867]: I0219 03:41:29.345070 33867 scope.go:117] "RemoveContainer" containerID="de040ccb428b37344e4f70fa4df00b79a3f0ef079d8dfaa0ae0c2a7320ef69b7" Feb 19 03:41:29.345468 master-0 kubenswrapper[33867]: E0219 03:41:29.345435 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-5bcd64b574-gx489_openstack(48632170-8e01-4f9e-8ade-2662bfb392b2)\"" pod="openstack/ironic-5bcd64b574-gx489" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" Feb 19 03:41:29.353174 master-0 kubenswrapper[33867]: I0219 03:41:29.353101 33867 generic.go:334] "Generic (PLEG): container finished" podID="9c830f8b-3d33-4879-91b9-bd374a1e695b" containerID="511e776352ffc1a57175547bfddd83f32b62b7b20d09437a9c680518c07ff545" exitCode=0 Feb 19 03:41:29.353890 master-0 kubenswrapper[33867]: I0219 03:41:29.353286 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"9c830f8b-3d33-4879-91b9-bd374a1e695b","Type":"ContainerDied","Data":"511e776352ffc1a57175547bfddd83f32b62b7b20d09437a9c680518c07ff545"} Feb 19 03:41:29.356368 master-0 kubenswrapper[33867]: I0219 03:41:29.356320 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 
03:41:29.356981 master-0 kubenswrapper[33867]: I0219 03:41:29.356944 33867 generic.go:334] "Generic (PLEG): container finished" podID="6a7f405f-ed33-4311-84a9-6aaf1fd4dadb" containerID="925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4" exitCode=1 Feb 19 03:41:29.357216 master-0 kubenswrapper[33867]: I0219 03:41:29.357163 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" event={"ID":"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb","Type":"ContainerDied","Data":"925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4"} Feb 19 03:41:29.358296 master-0 kubenswrapper[33867]: I0219 03:41:29.358232 33867 scope.go:117] "RemoveContainer" containerID="925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4" Feb 19 03:41:29.387362 master-0 kubenswrapper[33867]: I0219 03:41:29.386776 33867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:29.387816 master-0 kubenswrapper[33867]: I0219 03:41:29.387761 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:29.556459 master-0 kubenswrapper[33867]: I0219 03:41:29.550356 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8f98b7745-89hd2"] Feb 19 03:41:29.556459 master-0 kubenswrapper[33867]: I0219 03:41:29.550615 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" podUID="3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" containerName="dnsmasq-dns" containerID="cri-o://bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5" gracePeriod=10 Feb 19 03:41:29.920789 master-0 kubenswrapper[33867]: I0219 03:41:29.919060 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:30.242016 master-0 kubenswrapper[33867]: I0219 03:41:30.240809 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-054a4-scheduler-0" Feb 19 03:41:30.317290 master-0 kubenswrapper[33867]: I0219 03:41:30.317213 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:30.333352 master-0 kubenswrapper[33867]: I0219 03:41:30.333278 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-nb\") pod \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " Feb 19 03:41:30.333600 master-0 kubenswrapper[33867]: I0219 03:41:30.333436 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6j85\" (UniqueName: \"kubernetes.io/projected/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-kube-api-access-t6j85\") pod \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " Feb 19 03:41:30.333600 master-0 kubenswrapper[33867]: I0219 03:41:30.333473 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-sb\") pod \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " Feb 19 03:41:30.333600 master-0 kubenswrapper[33867]: I0219 03:41:30.333495 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-config\") pod \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " Feb 19 03:41:30.333721 master-0 kubenswrapper[33867]: I0219 03:41:30.333704 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-swift-storage-0\") pod \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " Feb 19 03:41:30.333800 master-0 kubenswrapper[33867]: I0219 03:41:30.333778 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-svc\") pod \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\" (UID: \"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84\") " Feb 19 03:41:30.386647 master-0 kubenswrapper[33867]: I0219 03:41:30.386567 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-kube-api-access-t6j85" (OuterVolumeSpecName: "kube-api-access-t6j85") pod "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" (UID: "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84"). InnerVolumeSpecName "kube-api-access-t6j85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:30.409341 master-0 kubenswrapper[33867]: I0219 03:41:30.409188 33867 scope.go:117] "RemoveContainer" containerID="de040ccb428b37344e4f70fa4df00b79a3f0ef079d8dfaa0ae0c2a7320ef69b7" Feb 19 03:41:30.410862 master-0 kubenswrapper[33867]: E0219 03:41:30.409824 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-5bcd64b574-gx489_openstack(48632170-8e01-4f9e-8ade-2662bfb392b2)\"" pod="openstack/ironic-5bcd64b574-gx489" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" Feb 19 03:41:30.421445 master-0 kubenswrapper[33867]: I0219 03:41:30.417442 33867 generic.go:334] "Generic (PLEG): container finished" podID="3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" containerID="bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5" exitCode=0 Feb 19 03:41:30.421445 master-0 kubenswrapper[33867]: I0219 03:41:30.417673 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" event={"ID":"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84","Type":"ContainerDied","Data":"bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5"} Feb 19 03:41:30.421445 master-0 kubenswrapper[33867]: I0219 03:41:30.417715 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" event={"ID":"3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84","Type":"ContainerDied","Data":"010e0e66bee44ab3f4353950152ea88e9a7b83d09d2a055e8983335b4dfc6d79"} Feb 19 03:41:30.421445 master-0 kubenswrapper[33867]: I0219 03:41:30.417743 33867 scope.go:117] "RemoveContainer" containerID="bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5" Feb 19 03:41:30.421445 master-0 kubenswrapper[33867]: I0219 03:41:30.417920 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" Feb 19 03:41:30.442249 master-0 kubenswrapper[33867]: I0219 03:41:30.431413 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" event={"ID":"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb","Type":"ContainerStarted","Data":"30e201eb6e611edd56a5696c6d0d894ac5ea380b8b60ea977450fa7b7c0e36b5"} Feb 19 03:41:30.442249 master-0 kubenswrapper[33867]: I0219 03:41:30.432716 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:30.442249 master-0 kubenswrapper[33867]: I0219 03:41:30.439980 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6j85\" (UniqueName: \"kubernetes.io/projected/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-kube-api-access-t6j85\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:30.497127 master-0 kubenswrapper[33867]: I0219 03:41:30.496991 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" (UID: "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:30.497127 master-0 kubenswrapper[33867]: I0219 03:41:30.497010 33867 scope.go:117] "RemoveContainer" containerID="0839c567c7e8c03768a3a28f9b1a9866b92bb3a5396ee03d976389bd71819b2a" Feb 19 03:41:30.497435 master-0 kubenswrapper[33867]: I0219 03:41:30.497178 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-config" (OuterVolumeSpecName: "config") pod "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" (UID: "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:30.497819 master-0 kubenswrapper[33867]: I0219 03:41:30.497774 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" (UID: "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:30.497916 master-0 kubenswrapper[33867]: I0219 03:41:30.497846 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" (UID: "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:30.498119 master-0 kubenswrapper[33867]: I0219 03:41:30.498072 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" (UID: "3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:30.545477 master-0 kubenswrapper[33867]: I0219 03:41:30.542722 33867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:30.545477 master-0 kubenswrapper[33867]: I0219 03:41:30.542786 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:30.545477 master-0 kubenswrapper[33867]: I0219 03:41:30.542800 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:30.545477 master-0 kubenswrapper[33867]: I0219 03:41:30.542814 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:30.545477 master-0 kubenswrapper[33867]: I0219 03:41:30.542829 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:30.579361 master-0 kubenswrapper[33867]: I0219 03:41:30.579327 33867 scope.go:117] "RemoveContainer" containerID="bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5" Feb 19 03:41:30.581490 master-0 kubenswrapper[33867]: E0219 03:41:30.581452 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5\": container with ID starting with bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5 not found: ID does not exist" containerID="bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5" Feb 19 03:41:30.581566 master-0 kubenswrapper[33867]: I0219 03:41:30.581494 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5"} err="failed to get container status \"bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5\": rpc error: code = NotFound desc = could not find container \"bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5\": container with ID starting with bfbbf53d8c608a2f69d78c2a0f695263a35a87bca1c78c0b22bcf2e9fe3b6ed5 not found: ID does not exist" Feb 19 03:41:30.581566 master-0 kubenswrapper[33867]: I0219 03:41:30.581518 33867 scope.go:117] "RemoveContainer" containerID="0839c567c7e8c03768a3a28f9b1a9866b92bb3a5396ee03d976389bd71819b2a" Feb 19 03:41:30.581742 master-0 kubenswrapper[33867]: E0219 03:41:30.581714 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0839c567c7e8c03768a3a28f9b1a9866b92bb3a5396ee03d976389bd71819b2a\": container with ID starting with 0839c567c7e8c03768a3a28f9b1a9866b92bb3a5396ee03d976389bd71819b2a not found: ID does not exist" containerID="0839c567c7e8c03768a3a28f9b1a9866b92bb3a5396ee03d976389bd71819b2a" Feb 19 03:41:30.581829 master-0 kubenswrapper[33867]: I0219 03:41:30.581738 33867 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"0839c567c7e8c03768a3a28f9b1a9866b92bb3a5396ee03d976389bd71819b2a"} err="failed to get container status \"0839c567c7e8c03768a3a28f9b1a9866b92bb3a5396ee03d976389bd71819b2a\": rpc error: code = NotFound desc = could not find container \"0839c567c7e8c03768a3a28f9b1a9866b92bb3a5396ee03d976389bd71819b2a\": container with ID starting with 0839c567c7e8c03768a3a28f9b1a9866b92bb3a5396ee03d976389bd71819b2a not found: ID does not exist" Feb 19 03:41:30.774118 master-0 kubenswrapper[33867]: I0219 03:41:30.774054 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8f98b7745-89hd2"] Feb 19 03:41:30.789637 master-0 kubenswrapper[33867]: I0219 03:41:30.789547 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8f98b7745-89hd2"] Feb 19 03:41:30.923771 master-0 kubenswrapper[33867]: I0219 03:41:30.923613 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-858d748b68-dmpbz" Feb 19 03:41:31.075032 master-0 kubenswrapper[33867]: I0219 03:41:31.073857 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" path="/var/lib/kubelet/pods/3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84/volumes" Feb 19 03:41:31.458732 master-0 kubenswrapper[33867]: I0219 03:41:31.458675 33867 scope.go:117] "RemoveContainer" containerID="de040ccb428b37344e4f70fa4df00b79a3f0ef079d8dfaa0ae0c2a7320ef69b7" Feb 19 03:41:31.475055 master-0 kubenswrapper[33867]: E0219 03:41:31.459093 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-5bcd64b574-gx489_openstack(48632170-8e01-4f9e-8ade-2662bfb392b2)\"" pod="openstack/ironic-5bcd64b574-gx489" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" Feb 19 03:41:33.036707 master-0 kubenswrapper[33867]: I0219 03:41:33.036548 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-nrrkp"] Feb 19 03:41:33.037506 master-0 kubenswrapper[33867]: E0219 03:41:33.037470 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8580d959-3bd3-4893-8c87-9376d87cba49" containerName="mariadb-account-create-update" Feb 19 03:41:33.037506 master-0 kubenswrapper[33867]: I0219 03:41:33.037503 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8580d959-3bd3-4893-8c87-9376d87cba49" containerName="mariadb-account-create-update" Feb 19 03:41:33.037636 master-0 kubenswrapper[33867]: E0219 03:41:33.037527 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" containerName="init" Feb 19 03:41:33.037636 master-0 kubenswrapper[33867]: I0219 03:41:33.037535 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" containerName="init" Feb 19 03:41:33.037636 master-0 kubenswrapper[33867]: E0219 03:41:33.037583 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" containerName="dnsmasq-dns" Feb 19 03:41:33.037636 master-0 kubenswrapper[33867]: I0219 03:41:33.037595 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" containerName="dnsmasq-dns" Feb 19 03:41:33.037636 master-0 kubenswrapper[33867]: E0219 03:41:33.037620 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b00abf9-7737-4850-a303-979795c4b0a3" 
containerName="mariadb-database-create" Feb 19 03:41:33.037636 master-0 kubenswrapper[33867]: I0219 03:41:33.037627 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b00abf9-7737-4850-a303-979795c4b0a3" containerName="mariadb-database-create" Feb 19 03:41:33.038016 master-0 kubenswrapper[33867]: I0219 03:41:33.037958 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" containerName="dnsmasq-dns" Feb 19 03:41:33.038080 master-0 kubenswrapper[33867]: I0219 03:41:33.038025 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8580d959-3bd3-4893-8c87-9376d87cba49" containerName="mariadb-account-create-update" Feb 19 03:41:33.038080 master-0 kubenswrapper[33867]: I0219 03:41:33.038046 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b00abf9-7737-4850-a303-979795c4b0a3" containerName="mariadb-database-create" Feb 19 03:41:33.041077 master-0 kubenswrapper[33867]: I0219 03:41:33.041033 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.044507 master-0 kubenswrapper[33867]: I0219 03:41:33.044442 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 19 03:41:33.044875 master-0 kubenswrapper[33867]: I0219 03:41:33.044842 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 19 03:41:33.053179 master-0 kubenswrapper[33867]: I0219 03:41:33.053001 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-nrrkp"] Feb 19 03:41:33.124876 master-0 kubenswrapper[33867]: I0219 03:41:33.124790 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-combined-ca-bundle\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.125237 master-0 kubenswrapper[33867]: I0219 03:41:33.124868 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.125237 master-0 kubenswrapper[33867]: I0219 03:41:33.125054 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.125237 master-0 kubenswrapper[33867]: I0219 03:41:33.125131 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-scripts\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.125237 master-0 kubenswrapper[33867]: I0219 03:41:33.125164 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccdg\" (UniqueName: \"kubernetes.io/projected/c772151f-fa4c-44ae-8d31-3e53872c20e7-kube-api-access-nccdg\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.125457 master-0 kubenswrapper[33867]: I0219 03:41:33.125282 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c772151f-fa4c-44ae-8d31-3e53872c20e7-etc-podinfo\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.125457 master-0 kubenswrapper[33867]: I0219 03:41:33.125330 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-config\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.229705 master-0 kubenswrapper[33867]: I0219 03:41:33.227339 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nccdg\" (UniqueName: \"kubernetes.io/projected/c772151f-fa4c-44ae-8d31-3e53872c20e7-kube-api-access-nccdg\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.229705 master-0 kubenswrapper[33867]: I0219 03:41:33.227480 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c772151f-fa4c-44ae-8d31-3e53872c20e7-etc-podinfo\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.229705 master-0 kubenswrapper[33867]: I0219 03:41:33.227820 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-config\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.232472 master-0 kubenswrapper[33867]: I0219 03:41:33.230318 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-combined-ca-bundle\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.232472 master-0 kubenswrapper[33867]: I0219 03:41:33.230368 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.232472 master-0 kubenswrapper[33867]: I0219 03:41:33.230564 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-nrrkp\" 
(UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.232472 master-0 kubenswrapper[33867]: I0219 03:41:33.231632 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-scripts\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.232732 master-0 kubenswrapper[33867]: I0219 03:41:33.232669 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.235432 master-0 kubenswrapper[33867]: I0219 03:41:33.235293 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-config\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.241323 master-0 kubenswrapper[33867]: I0219 03:41:33.240071 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-combined-ca-bundle\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.241323 master-0 kubenswrapper[33867]: I0219 03:41:33.240649 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.241323 master-0 kubenswrapper[33867]: I0219 03:41:33.240959 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c772151f-fa4c-44ae-8d31-3e53872c20e7-etc-podinfo\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.241893 master-0 kubenswrapper[33867]: I0219 03:41:33.241773 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-scripts\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.252645 master-0 kubenswrapper[33867]: I0219 03:41:33.252571 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nccdg\" (UniqueName: \"kubernetes.io/projected/c772151f-fa4c-44ae-8d31-3e53872c20e7-kube-api-access-nccdg\") pod \"ironic-inspector-db-sync-nrrkp\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.356529 master-0 kubenswrapper[33867]: I0219 03:41:33.356234 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:33.396354 master-0 
kubenswrapper[33867]: I0219 03:41:33.382948 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:33.676103 master-0 kubenswrapper[33867]: I0219 03:41:33.675812 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-6ddb5778b6-l9w7m" Feb 19 03:41:33.806852 master-0 kubenswrapper[33867]: I0219 03:41:33.804465 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-5bcd64b574-gx489"] Feb 19 03:41:33.806852 master-0 kubenswrapper[33867]: I0219 03:41:33.805014 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-5bcd64b574-gx489" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="ironic-api-log" containerID="cri-o://a147e3b9cd363931f92adee6f18cb36c5a2776e443f67c6f9bcf0199cef58205" gracePeriod=60 Feb 19 03:41:34.393069 master-0 kubenswrapper[33867]: I0219 03:41:34.391603 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 03:41:34.398127 master-0 kubenswrapper[33867]: I0219 03:41:34.397999 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 03:41:34.414091 master-0 kubenswrapper[33867]: I0219 03:41:34.401077 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 03:41:34.414091 master-0 kubenswrapper[33867]: I0219 03:41:34.401622 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 03:41:34.414091 master-0 kubenswrapper[33867]: I0219 03:41:34.403920 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 03:41:34.530248 master-0 kubenswrapper[33867]: I0219 03:41:34.529885 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:34.530572 master-0 kubenswrapper[33867]: I0219 03:41:34.530282 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj75g\" (UniqueName: \"kubernetes.io/projected/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-kube-api-access-rj75g\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:34.531659 master-0 kubenswrapper[33867]: I0219 03:41:34.531615 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config-secret\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:34.532170 master-0 kubenswrapper[33867]: I0219 03:41:34.532106 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:34.535691 master-0 kubenswrapper[33867]: I0219 03:41:34.535609 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/openstackclient"] Feb 19 03:41:34.537390 master-0 kubenswrapper[33867]: E0219 03:41:34.536809 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-rj75g openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1" Feb 19 03:41:34.551533 master-0 kubenswrapper[33867]: I0219 03:41:34.551105 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 19 03:41:34.593307 master-0 kubenswrapper[33867]: I0219 03:41:34.593217 33867 generic.go:334] "Generic (PLEG): container finished" podID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerID="a147e3b9cd363931f92adee6f18cb36c5a2776e443f67c6f9bcf0199cef58205" exitCode=143 Feb 19 03:41:34.593307 master-0 kubenswrapper[33867]: I0219 03:41:34.593297 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5bcd64b574-gx489" event={"ID":"48632170-8e01-4f9e-8ade-2662bfb392b2","Type":"ContainerDied","Data":"a147e3b9cd363931f92adee6f18cb36c5a2776e443f67c6f9bcf0199cef58205"} Feb 19 03:41:34.650525 master-0 kubenswrapper[33867]: I0219 03:41:34.649627 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:34.651892 master-0 kubenswrapper[33867]: I0219 03:41:34.650753 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj75g\" (UniqueName: \"kubernetes.io/projected/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-kube-api-access-rj75g\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:34.651892 master-0 kubenswrapper[33867]: I0219 03:41:34.650980 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config-secret\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:34.651892 master-0 kubenswrapper[33867]: I0219 03:41:34.651307 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:34.656169 master-0 kubenswrapper[33867]: I0219 03:41:34.652705 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:34.658812 master-0 kubenswrapper[33867]: E0219 03:41:34.656679 33867 projected.go:194] Error preparing data for projected volume kube-api-access-rj75g for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1) does not match the UID in record. 
The object might have been deleted and then recreated Feb 19 03:41:34.658812 master-0 kubenswrapper[33867]: E0219 03:41:34.656795 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-kube-api-access-rj75g podName:577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1 nodeName:}" failed. No retries permitted until 2026-02-19 03:41:35.156764768 +0000 UTC m=+1100.453435379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rj75g" (UniqueName: "kubernetes.io/projected/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-kube-api-access-rj75g") pod "openstackclient" (UID: "577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1) does not match the UID in record. The object might have been deleted and then recreated Feb 19 03:41:34.658812 master-0 kubenswrapper[33867]: I0219 03:41:34.657987 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:34.662127 master-0 kubenswrapper[33867]: I0219 03:41:34.661755 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 03:41:34.663687 master-0 kubenswrapper[33867]: I0219 03:41:34.663477 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 03:41:34.685232 master-0 kubenswrapper[33867]: I0219 03:41:34.685137 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config-secret\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:34.697405 master-0 kubenswrapper[33867]: I0219 03:41:34.697317 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 03:41:34.754572 master-0 kubenswrapper[33867]: I0219 03:41:34.754499 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-nrrkp"] Feb 19 03:41:34.820250 master-0 kubenswrapper[33867]: I0219 03:41:34.820181 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:34.862619 master-0 kubenswrapper[33867]: I0219 03:41:34.862331 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0297a953-f1ca-434c-a52b-bd94277921f3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:34.862619 master-0 kubenswrapper[33867]: I0219 03:41:34.862478 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0297a953-f1ca-434c-a52b-bd94277921f3-openstack-config-secret\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:34.864493 master-0 kubenswrapper[33867]: I0219 03:41:34.864408 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0297a953-f1ca-434c-a52b-bd94277921f3-openstack-config\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:34.864661 master-0 kubenswrapper[33867]: I0219 03:41:34.864629 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sh7n\" (UniqueName: \"kubernetes.io/projected/0297a953-f1ca-434c-a52b-bd94277921f3-kube-api-access-4sh7n\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:34.967245 master-0 kubenswrapper[33867]: I0219 03:41:34.967125 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data\") pod \"48632170-8e01-4f9e-8ade-2662bfb392b2\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " Feb 19 03:41:34.967584 master-0 kubenswrapper[33867]: I0219 03:41:34.967555 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-logs\") pod \"48632170-8e01-4f9e-8ade-2662bfb392b2\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " Feb 19 03:41:34.967654 master-0 kubenswrapper[33867]: I0219 03:41:34.967596 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-scripts\") pod \"48632170-8e01-4f9e-8ade-2662bfb392b2\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " Feb 19 03:41:34.967654 master-0 kubenswrapper[33867]: I0219 03:41:34.967634 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67sd4\" (UniqueName: \"kubernetes.io/projected/48632170-8e01-4f9e-8ade-2662bfb392b2-kube-api-access-67sd4\") pod \"48632170-8e01-4f9e-8ade-2662bfb392b2\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " Feb 19 03:41:34.968925 master-0 kubenswrapper[33867]: I0219 03:41:34.967993 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-merged\") pod \"48632170-8e01-4f9e-8ade-2662bfb392b2\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " Feb 19 03:41:34.968925 master-0 
kubenswrapper[33867]: I0219 03:41:34.968135 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-combined-ca-bundle\") pod \"48632170-8e01-4f9e-8ade-2662bfb392b2\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " Feb 19 03:41:34.968925 master-0 kubenswrapper[33867]: I0219 03:41:34.968214 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-custom\") pod \"48632170-8e01-4f9e-8ade-2662bfb392b2\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " Feb 19 03:41:34.968925 master-0 kubenswrapper[33867]: I0219 03:41:34.968314 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/48632170-8e01-4f9e-8ade-2662bfb392b2-etc-podinfo\") pod \"48632170-8e01-4f9e-8ade-2662bfb392b2\" (UID: \"48632170-8e01-4f9e-8ade-2662bfb392b2\") " Feb 19 03:41:34.968925 master-0 kubenswrapper[33867]: I0219 03:41:34.968875 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sh7n\" (UniqueName: \"kubernetes.io/projected/0297a953-f1ca-434c-a52b-bd94277921f3-kube-api-access-4sh7n\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:34.969209 master-0 kubenswrapper[33867]: I0219 03:41:34.969084 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0297a953-f1ca-434c-a52b-bd94277921f3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:34.969209 master-0 kubenswrapper[33867]: I0219 03:41:34.969117 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0297a953-f1ca-434c-a52b-bd94277921f3-openstack-config-secret\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:34.980036 master-0 kubenswrapper[33867]: I0219 03:41:34.973628 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0297a953-f1ca-434c-a52b-bd94277921f3-openstack-config\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:34.990344 master-0 kubenswrapper[33867]: I0219 03:41:34.981836 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-logs" (OuterVolumeSpecName: "logs") pod "48632170-8e01-4f9e-8ade-2662bfb392b2" (UID: "48632170-8e01-4f9e-8ade-2662bfb392b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:41:34.990344 master-0 kubenswrapper[33867]: I0219 03:41:34.988971 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "48632170-8e01-4f9e-8ade-2662bfb392b2" (UID: "48632170-8e01-4f9e-8ade-2662bfb392b2"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:41:34.996296 master-0 kubenswrapper[33867]: I0219 03:41:34.994709 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0297a953-f1ca-434c-a52b-bd94277921f3-openstack-config-secret\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:34.996296 master-0 kubenswrapper[33867]: I0219 03:41:34.995587 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0297a953-f1ca-434c-a52b-bd94277921f3-openstack-config\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:34.996296 master-0 kubenswrapper[33867]: I0219 03:41:34.995877 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48632170-8e01-4f9e-8ade-2662bfb392b2-kube-api-access-67sd4" (OuterVolumeSpecName: "kube-api-access-67sd4") pod "48632170-8e01-4f9e-8ade-2662bfb392b2" (UID: "48632170-8e01-4f9e-8ade-2662bfb392b2"). InnerVolumeSpecName "kube-api-access-67sd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:34.999605 master-0 kubenswrapper[33867]: I0219 03:41:34.999426 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-scripts" (OuterVolumeSpecName: "scripts") pod "48632170-8e01-4f9e-8ade-2662bfb392b2" (UID: "48632170-8e01-4f9e-8ade-2662bfb392b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:35.003627 master-0 kubenswrapper[33867]: I0219 03:41:35.003529 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0297a953-f1ca-434c-a52b-bd94277921f3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:35.026755 master-0 kubenswrapper[33867]: I0219 03:41:35.026673 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/48632170-8e01-4f9e-8ade-2662bfb392b2-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "48632170-8e01-4f9e-8ade-2662bfb392b2" (UID: "48632170-8e01-4f9e-8ade-2662bfb392b2"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 03:41:35.041450 master-0 kubenswrapper[33867]: I0219 03:41:35.041358 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "48632170-8e01-4f9e-8ade-2662bfb392b2" (UID: "48632170-8e01-4f9e-8ade-2662bfb392b2"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:35.057058 master-0 kubenswrapper[33867]: I0219 03:41:35.056927 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sh7n\" (UniqueName: \"kubernetes.io/projected/0297a953-f1ca-434c-a52b-bd94277921f3-kube-api-access-4sh7n\") pod \"openstackclient\" (UID: \"0297a953-f1ca-434c-a52b-bd94277921f3\") " pod="openstack/openstackclient" Feb 19 03:41:35.076908 master-0 kubenswrapper[33867]: I0219 03:41:35.076519 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.076908 master-0 kubenswrapper[33867]: I0219 03:41:35.076572 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.076908 master-0 kubenswrapper[33867]: I0219 03:41:35.076588 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67sd4\" (UniqueName: \"kubernetes.io/projected/48632170-8e01-4f9e-8ade-2662bfb392b2-kube-api-access-67sd4\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.076908 master-0 kubenswrapper[33867]: I0219 03:41:35.076600 33867 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.076908 master-0 kubenswrapper[33867]: I0219 03:41:35.076613 33867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.076908 master-0 kubenswrapper[33867]: I0219 03:41:35.076625 33867 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/48632170-8e01-4f9e-8ade-2662bfb392b2-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.159974 master-0 kubenswrapper[33867]: I0219 03:41:35.159871 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 03:41:35.164576 master-0 kubenswrapper[33867]: I0219 03:41:35.164451 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data" (OuterVolumeSpecName: "config-data") pod "48632170-8e01-4f9e-8ade-2662bfb392b2" (UID: "48632170-8e01-4f9e-8ade-2662bfb392b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:35.181489 master-0 kubenswrapper[33867]: I0219 03:41:35.181407 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj75g\" (UniqueName: \"kubernetes.io/projected/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-kube-api-access-rj75g\") pod \"openstackclient\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " pod="openstack/openstackclient" Feb 19 03:41:35.182514 master-0 kubenswrapper[33867]: I0219 03:41:35.182488 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.190765 master-0 kubenswrapper[33867]: E0219 03:41:35.190711 33867 projected.go:194] Error preparing data for projected volume kube-api-access-rj75g for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1) does not match the UID in record. The object might have been deleted and then recreated Feb 19 03:41:35.191008 master-0 kubenswrapper[33867]: E0219 03:41:35.190802 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-kube-api-access-rj75g podName:577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1 nodeName:}" failed. No retries permitted until 2026-02-19 03:41:36.190781 +0000 UTC m=+1101.487451611 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rj75g" (UniqueName: "kubernetes.io/projected/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-kube-api-access-rj75g") pod "openstackclient" (UID: "577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1) does not match the UID in record. The object might have been deleted and then recreated Feb 19 03:41:35.198185 master-0 kubenswrapper[33867]: I0219 03:41:35.197487 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48632170-8e01-4f9e-8ade-2662bfb392b2" (UID: "48632170-8e01-4f9e-8ade-2662bfb392b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:35.232550 master-0 kubenswrapper[33867]: I0219 03:41:35.232450 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8f98b7745-89hd2" podUID="3d0a6cb0-eaf6-4c74-bc37-bdb604b4df84" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.230:5353: i/o timeout" Feb 19 03:41:35.286430 master-0 kubenswrapper[33867]: I0219 03:41:35.284980 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48632170-8e01-4f9e-8ade-2662bfb392b2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.474930 master-0 kubenswrapper[33867]: I0219 03:41:35.474848 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:35.627907 master-0 kubenswrapper[33867]: I0219 03:41:35.627853 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5bcd64b574-gx489" event={"ID":"48632170-8e01-4f9e-8ade-2662bfb392b2","Type":"ContainerDied","Data":"d36acdbb7211143d9be6418379219657803d20ecd1a2b9b5833861d1d418c8d8"} Feb 19 03:41:35.627999 master-0 kubenswrapper[33867]: I0219 03:41:35.627939 33867 scope.go:117] "RemoveContainer" containerID="de040ccb428b37344e4f70fa4df00b79a3f0ef079d8dfaa0ae0c2a7320ef69b7" Feb 19 03:41:35.628070 master-0 kubenswrapper[33867]: I0219 03:41:35.627895 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5bcd64b574-gx489" Feb 19 03:41:35.634245 master-0 kubenswrapper[33867]: I0219 03:41:35.634046 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-nrrkp" event={"ID":"c772151f-fa4c-44ae-8d31-3e53872c20e7","Type":"ContainerStarted","Data":"4bb76404666bfc6a3c94550fe9b7590042fb09a5ad0fc3bef49002b902f3bc14"} Feb 19 03:41:35.637882 master-0 kubenswrapper[33867]: I0219 03:41:35.637800 33867 generic.go:334] "Generic (PLEG): container finished" podID="6a7f405f-ed33-4311-84a9-6aaf1fd4dadb" containerID="30e201eb6e611edd56a5696c6d0d894ac5ea380b8b60ea977450fa7b7c0e36b5" exitCode=1 Feb 19 03:41:35.638007 master-0 kubenswrapper[33867]: I0219 03:41:35.637888 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 03:41:35.638725 master-0 kubenswrapper[33867]: I0219 03:41:35.638701 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" event={"ID":"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb","Type":"ContainerDied","Data":"30e201eb6e611edd56a5696c6d0d894ac5ea380b8b60ea977450fa7b7c0e36b5"} Feb 19 03:41:35.639633 master-0 kubenswrapper[33867]: I0219 03:41:35.639610 33867 scope.go:117] "RemoveContainer" containerID="30e201eb6e611edd56a5696c6d0d894ac5ea380b8b60ea977450fa7b7c0e36b5" Feb 19 03:41:35.639993 master-0 kubenswrapper[33867]: E0219 03:41:35.639953 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-64cdd9cf48-dg7ws_openstack(6a7f405f-ed33-4311-84a9-6aaf1fd4dadb)\"" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" podUID="6a7f405f-ed33-4311-84a9-6aaf1fd4dadb" Feb 19 03:41:35.690026 master-0 kubenswrapper[33867]: I0219 03:41:35.689979 33867 scope.go:117] "RemoveContainer" containerID="a147e3b9cd363931f92adee6f18cb36c5a2776e443f67c6f9bcf0199cef58205" Feb 19 03:41:35.732892 master-0 kubenswrapper[33867]: I0219 03:41:35.732738 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 03:41:35.747487 master-0 kubenswrapper[33867]: I0219 03:41:35.747409 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1" podUID="0297a953-f1ca-434c-a52b-bd94277921f3" Feb 19 03:41:35.774609 master-0 kubenswrapper[33867]: I0219 03:41:35.774544 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 03:41:35.794734 master-0 kubenswrapper[33867]: I0219 03:41:35.794530 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-5bcd64b574-gx489"] Feb 19 03:41:35.797014 master-0 kubenswrapper[33867]: I0219 03:41:35.796850 33867 scope.go:117] "RemoveContainer" containerID="31301b894042c9b632ac03692aef577af55d2be8fc6af19977e6b550e95eedeb" Feb 19 03:41:35.797602 master-0 kubenswrapper[33867]: I0219 03:41:35.797544 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config\") pod \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " Feb 19 03:41:35.797679 master-0 kubenswrapper[33867]: I0219 03:41:35.797633 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-combined-ca-bundle\") pod \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " Feb 19 03:41:35.797823 master-0 kubenswrapper[33867]: I0219 03:41:35.797794 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config-secret\") pod \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\" (UID: \"577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1\") " Feb 19 03:41:35.798564 master-0 kubenswrapper[33867]: I0219 03:41:35.798535 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj75g\" 
(UniqueName: \"kubernetes.io/projected/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-kube-api-access-rj75g\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.798888 master-0 kubenswrapper[33867]: I0219 03:41:35.798748 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1" (UID: "577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:35.801517 master-0 kubenswrapper[33867]: I0219 03:41:35.801246 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1" (UID: "577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:35.801517 master-0 kubenswrapper[33867]: I0219 03:41:35.801464 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1" (UID: "577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:35.807672 master-0 kubenswrapper[33867]: W0219 03:41:35.807488 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0297a953_f1ca_434c_a52b_bd94277921f3.slice/crio-eb9706d1fa7b15d62604c938ebd8fb76101ba9d96943a705bc7ec3319c124ffb WatchSource:0}: Error finding container eb9706d1fa7b15d62604c938ebd8fb76101ba9d96943a705bc7ec3319c124ffb: Status 404 returned error can't find the container with id eb9706d1fa7b15d62604c938ebd8fb76101ba9d96943a705bc7ec3319c124ffb Feb 19 03:41:35.809367 master-0 kubenswrapper[33867]: I0219 03:41:35.809310 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-5bcd64b574-gx489"] Feb 19 03:41:35.901051 master-0 kubenswrapper[33867]: I0219 03:41:35.900939 33867 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.901051 master-0 kubenswrapper[33867]: I0219 03:41:35.901020 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.901051 master-0 kubenswrapper[33867]: I0219 03:41:35.901039 33867 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1-openstack-config-secret\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:35.960308 master-0 kubenswrapper[33867]: I0219 03:41:35.960211 33867 scope.go:117] "RemoveContainer" containerID="925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4" Feb 19 03:41:36.658402 master-0 kubenswrapper[33867]: I0219 03:41:36.658329 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"0297a953-f1ca-434c-a52b-bd94277921f3","Type":"ContainerStarted","Data":"eb9706d1fa7b15d62604c938ebd8fb76101ba9d96943a705bc7ec3319c124ffb"} Feb 19 03:41:36.666341 master-0 kubenswrapper[33867]: I0219 03:41:36.665294 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 03:41:36.688058 master-0 kubenswrapper[33867]: I0219 03:41:36.687364 33867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1" podUID="0297a953-f1ca-434c-a52b-bd94277921f3" Feb 19 03:41:36.969431 master-0 kubenswrapper[33867]: I0219 03:41:36.969230 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" path="/var/lib/kubelet/pods/48632170-8e01-4f9e-8ade-2662bfb392b2/volumes" Feb 19 03:41:36.970534 master-0 kubenswrapper[33867]: I0219 03:41:36.970508 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1" path="/var/lib/kubelet/pods/577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1/volumes" Feb 19 03:41:38.310914 master-0 kubenswrapper[33867]: I0219 03:41:38.310739 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:38.310914 master-0 kubenswrapper[33867]: I0219 03:41:38.310806 33867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:38.312038 master-0 kubenswrapper[33867]: I0219 03:41:38.311997 33867 scope.go:117] "RemoveContainer" containerID="30e201eb6e611edd56a5696c6d0d894ac5ea380b8b60ea977450fa7b7c0e36b5" Feb 19 03:41:38.312505 master-0 kubenswrapper[33867]: E0219 03:41:38.312447 33867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-64cdd9cf48-dg7ws_openstack(6a7f405f-ed33-4311-84a9-6aaf1fd4dadb)\"" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" podUID="6a7f405f-ed33-4311-84a9-6aaf1fd4dadb" Feb 19 03:41:38.723747 master-0 kubenswrapper[33867]: I0219 03:41:38.723687 33867 generic.go:334] "Generic (PLEG): container finished" podID="76f7e0ac-da68-49e2-b643-53f9c614e19d" containerID="02d8c4a7ba4a68827423bebed8062278759249b93f8d9e239c301d82506a22cf" exitCode=137 Feb 19 03:41:38.723747 master-0 kubenswrapper[33867]: I0219 03:41:38.723743 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-api-0" event={"ID":"76f7e0ac-da68-49e2-b643-53f9c614e19d","Type":"ContainerDied","Data":"02d8c4a7ba4a68827423bebed8062278759249b93f8d9e239c301d82506a22cf"} Feb 19 03:41:38.790281 master-0 kubenswrapper[33867]: I0219 03:41:38.790172 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-054a4-api-0" podUID="76f7e0ac-da68-49e2-b643-53f9c614e19d" containerName="cinder-api" probeResult="failure" output="Get \"http://10.128.0.229:8776/healthcheck\": dial tcp 10.128.0.229:8776: connect: connection refused" Feb 19 03:41:39.298239 master-0 kubenswrapper[33867]: I0219 03:41:39.291925 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-054a4-api-0" Feb 19 03:41:39.398841 master-0 kubenswrapper[33867]: I0219 03:41:39.398770 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6b57897cc4-nd9ff"] Feb 19 03:41:39.399566 master-0 kubenswrapper[33867]: E0219 03:41:39.399476 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="ironic-api-log" Feb 19 03:41:39.399566 master-0 kubenswrapper[33867]: I0219 03:41:39.399499 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="ironic-api-log" Feb 19 03:41:39.399566 master-0 kubenswrapper[33867]: E0219 03:41:39.399531 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="ironic-api" Feb 19 03:41:39.399566 master-0 kubenswrapper[33867]: I0219 03:41:39.399540 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="ironic-api" Feb 19 03:41:39.399566 master-0 kubenswrapper[33867]: E0219 03:41:39.399552 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="init" Feb 19 03:41:39.399566 master-0 kubenswrapper[33867]: I0219 03:41:39.399561 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="init" Feb 19 03:41:39.399819 master-0 kubenswrapper[33867]: E0219 03:41:39.399619 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f7e0ac-da68-49e2-b643-53f9c614e19d" containerName="cinder-054a4-api-log" Feb 19 03:41:39.399819 master-0 kubenswrapper[33867]: I0219 03:41:39.399630 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f7e0ac-da68-49e2-b643-53f9c614e19d" containerName="cinder-054a4-api-log" Feb 19 03:41:39.399819 master-0 kubenswrapper[33867]: E0219 03:41:39.399651 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f7e0ac-da68-49e2-b643-53f9c614e19d" containerName="cinder-api" Feb 19 03:41:39.399819 master-0 kubenswrapper[33867]: I0219 03:41:39.399660 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f7e0ac-da68-49e2-b643-53f9c614e19d" containerName="cinder-api" Feb 19 03:41:39.399972 master-0 kubenswrapper[33867]: I0219 03:41:39.399931 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="ironic-api" Feb 19 03:41:39.399972 master-0 kubenswrapper[33867]: I0219 03:41:39.399986 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="76f7e0ac-da68-49e2-b643-53f9c614e19d" containerName="cinder-054a4-api-log" Feb 19 03:41:39.400076 master-0 kubenswrapper[33867]: I0219 03:41:39.400034 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="ironic-api-log" Feb 19 03:41:39.400076 master-0 kubenswrapper[33867]: I0219 03:41:39.400063 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="76f7e0ac-da68-49e2-b643-53f9c614e19d" containerName="cinder-api" Feb 19 03:41:39.400404 master-0 kubenswrapper[33867]: E0219 03:41:39.400378 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="ironic-api" Feb 19 03:41:39.400404 master-0 kubenswrapper[33867]: I0219 03:41:39.400400 33867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="ironic-api" Feb 19 03:41:39.400756 master-0 kubenswrapper[33867]: I0219 03:41:39.400715 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="48632170-8e01-4f9e-8ade-2662bfb392b2" containerName="ironic-api" Feb 19 03:41:39.408612 master-0 kubenswrapper[33867]: I0219 03:41:39.408531 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76f7e0ac-da68-49e2-b643-53f9c614e19d-etc-machine-id\") pod \"76f7e0ac-da68-49e2-b643-53f9c614e19d\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " Feb 19 03:41:39.408952 master-0 kubenswrapper[33867]: I0219 03:41:39.408698 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data-custom\") pod \"76f7e0ac-da68-49e2-b643-53f9c614e19d\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " Feb 19 03:41:39.408952 master-0 kubenswrapper[33867]: I0219 03:41:39.408771 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76f7e0ac-da68-49e2-b643-53f9c614e19d-logs\") pod \"76f7e0ac-da68-49e2-b643-53f9c614e19d\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " Feb 19 03:41:39.408952 master-0 kubenswrapper[33867]: I0219 03:41:39.408855 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fqb5\" (UniqueName: \"kubernetes.io/projected/76f7e0ac-da68-49e2-b643-53f9c614e19d-kube-api-access-2fqb5\") pod \"76f7e0ac-da68-49e2-b643-53f9c614e19d\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " Feb 19 03:41:39.408952 master-0 kubenswrapper[33867]: I0219 03:41:39.408885 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-scripts\") pod \"76f7e0ac-da68-49e2-b643-53f9c614e19d\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " Feb 19 03:41:39.408952 master-0 kubenswrapper[33867]: I0219 03:41:39.408924 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data\") pod \"76f7e0ac-da68-49e2-b643-53f9c614e19d\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " Feb 19 03:41:39.409338 master-0 kubenswrapper[33867]: I0219 03:41:39.409038 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-combined-ca-bundle\") pod \"76f7e0ac-da68-49e2-b643-53f9c614e19d\" (UID: \"76f7e0ac-da68-49e2-b643-53f9c614e19d\") " Feb 19 03:41:39.411178 master-0 kubenswrapper[33867]: I0219 03:41:39.411134 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76f7e0ac-da68-49e2-b643-53f9c614e19d-logs" (OuterVolumeSpecName: "logs") pod "76f7e0ac-da68-49e2-b643-53f9c614e19d" (UID: "76f7e0ac-da68-49e2-b643-53f9c614e19d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:41:39.411268 master-0 kubenswrapper[33867]: I0219 03:41:39.411205 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76f7e0ac-da68-49e2-b643-53f9c614e19d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "76f7e0ac-da68-49e2-b643-53f9c614e19d" (UID: "76f7e0ac-da68-49e2-b643-53f9c614e19d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 03:41:39.436637 master-0 kubenswrapper[33867]: I0219 03:41:39.436574 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.440852 master-0 kubenswrapper[33867]: I0219 03:41:39.440791 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 03:41:39.441191 master-0 kubenswrapper[33867]: I0219 03:41:39.441166 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 03:41:39.441474 master-0 kubenswrapper[33867]: I0219 03:41:39.441448 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 03:41:39.447354 master-0 kubenswrapper[33867]: I0219 03:41:39.447284 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76f7e0ac-da68-49e2-b643-53f9c614e19d-kube-api-access-2fqb5" (OuterVolumeSpecName: "kube-api-access-2fqb5") pod "76f7e0ac-da68-49e2-b643-53f9c614e19d" (UID: "76f7e0ac-da68-49e2-b643-53f9c614e19d"). InnerVolumeSpecName "kube-api-access-2fqb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:39.447653 master-0 kubenswrapper[33867]: I0219 03:41:39.447616 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b57897cc4-nd9ff"] Feb 19 03:41:39.477331 master-0 kubenswrapper[33867]: I0219 03:41:39.473042 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "76f7e0ac-da68-49e2-b643-53f9c614e19d" (UID: "76f7e0ac-da68-49e2-b643-53f9c614e19d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:39.501998 master-0 kubenswrapper[33867]: I0219 03:41:39.501929 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76f7e0ac-da68-49e2-b643-53f9c614e19d" (UID: "76f7e0ac-da68-49e2-b643-53f9c614e19d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:39.502251 master-0 kubenswrapper[33867]: I0219 03:41:39.502066 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-scripts" (OuterVolumeSpecName: "scripts") pod "76f7e0ac-da68-49e2-b643-53f9c614e19d" (UID: "76f7e0ac-da68-49e2-b643-53f9c614e19d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:39.509712 master-0 kubenswrapper[33867]: I0219 03:41:39.505693 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-fa7ca-default-internal-api-0"] Feb 19 03:41:39.509712 master-0 kubenswrapper[33867]: I0219 03:41:39.506013 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-fa7ca-default-internal-api-0" podUID="8c70b7f1-846a-4be2-bdd1-9214e7e75866" containerName="glance-log" containerID="cri-o://2563a7263a151820b358208de27903388222556115ee5cc370d1acb4f022dc27" gracePeriod=30 Feb 19 03:41:39.509712 master-0 kubenswrapper[33867]: I0219 03:41:39.506733 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-fa7ca-default-internal-api-0" podUID="8c70b7f1-846a-4be2-bdd1-9214e7e75866" containerName="glance-httpd" containerID="cri-o://98edb312abf6d88201dd07ab17b30f07f7783fb53186b8f810ba90ae532fdae1" gracePeriod=30 Feb 19 03:41:39.512032 master-0 kubenswrapper[33867]: I0219 03:41:39.511906 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810cab61-d654-4926-a83f-51af67acafd0-run-httpd\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.512100 master-0 kubenswrapper[33867]: I0219 03:41:39.512051 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ddj2\" (UniqueName: \"kubernetes.io/projected/810cab61-d654-4926-a83f-51af67acafd0-kube-api-access-9ddj2\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.512100 master-0 kubenswrapper[33867]: I0219 03:41:39.512081 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-internal-tls-certs\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.512204 master-0 kubenswrapper[33867]: I0219 03:41:39.512134 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-config-data\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.512204 master-0 kubenswrapper[33867]: I0219 03:41:39.512180 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810cab61-d654-4926-a83f-51af67acafd0-log-httpd\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.512299 master-0 kubenswrapper[33867]: I0219 03:41:39.512205 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-combined-ca-bundle\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.512680 
master-0 kubenswrapper[33867]: I0219 03:41:39.512614 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/810cab61-d654-4926-a83f-51af67acafd0-etc-swift\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.512736 master-0 kubenswrapper[33867]: I0219 03:41:39.512684 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-public-tls-certs\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.512878 master-0 kubenswrapper[33867]: I0219 03:41:39.512855 33867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76f7e0ac-da68-49e2-b643-53f9c614e19d-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:39.512919 master-0 kubenswrapper[33867]: I0219 03:41:39.512881 33867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:39.512956 master-0 kubenswrapper[33867]: I0219 03:41:39.512920 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76f7e0ac-da68-49e2-b643-53f9c614e19d-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:39.512956 master-0 kubenswrapper[33867]: I0219 03:41:39.512939 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fqb5\" (UniqueName: \"kubernetes.io/projected/76f7e0ac-da68-49e2-b643-53f9c614e19d-kube-api-access-2fqb5\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:39.512956 master-0 kubenswrapper[33867]: I0219 03:41:39.512953 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:39.513640 master-0 kubenswrapper[33867]: I0219 03:41:39.512966 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:39.542118 master-0 kubenswrapper[33867]: I0219 03:41:39.541987 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data" (OuterVolumeSpecName: "config-data") pod "76f7e0ac-da68-49e2-b643-53f9c614e19d" (UID: "76f7e0ac-da68-49e2-b643-53f9c614e19d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:39.608631 master-0 kubenswrapper[33867]: I0219 03:41:39.608555 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-747c56bd5-sdd55" Feb 19 03:41:39.614719 master-0 kubenswrapper[33867]: I0219 03:41:39.614680 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/810cab61-d654-4926-a83f-51af67acafd0-etc-swift\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.614863 master-0 kubenswrapper[33867]: I0219 03:41:39.614728 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-public-tls-certs\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.614926 master-0 kubenswrapper[33867]: I0219 03:41:39.614904 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810cab61-d654-4926-a83f-51af67acafd0-run-httpd\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.614987 master-0 kubenswrapper[33867]: I0219 03:41:39.614970 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ddj2\" (UniqueName: \"kubernetes.io/projected/810cab61-d654-4926-a83f-51af67acafd0-kube-api-access-9ddj2\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.615039 master-0 kubenswrapper[33867]: I0219 03:41:39.614990 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-internal-tls-certs\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.615039 master-0 kubenswrapper[33867]: I0219 03:41:39.615026 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-config-data\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.615132 master-0 kubenswrapper[33867]: I0219 03:41:39.615057 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810cab61-d654-4926-a83f-51af67acafd0-log-httpd\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.615132 master-0 kubenswrapper[33867]: I0219 03:41:39.615077 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-combined-ca-bundle\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.615352 master-0 kubenswrapper[33867]: I0219 03:41:39.615141 33867 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f7e0ac-da68-49e2-b643-53f9c614e19d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:39.619428 master-0 kubenswrapper[33867]: I0219 03:41:39.619360 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810cab61-d654-4926-a83f-51af67acafd0-run-httpd\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.620064 master-0 kubenswrapper[33867]: I0219 03:41:39.620033 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/810cab61-d654-4926-a83f-51af67acafd0-etc-swift\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.620955 master-0 kubenswrapper[33867]: I0219 03:41:39.620907 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810cab61-d654-4926-a83f-51af67acafd0-log-httpd\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.621966 master-0 kubenswrapper[33867]: I0219 03:41:39.621934 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-combined-ca-bundle\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.624677 master-0 kubenswrapper[33867]: I0219 03:41:39.624349 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-config-data\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.629896 master-0 kubenswrapper[33867]: I0219 03:41:39.628166 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-public-tls-certs\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.635882 master-0 kubenswrapper[33867]: I0219 03:41:39.635823 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/810cab61-d654-4926-a83f-51af67acafd0-internal-tls-certs\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.647357 master-0 kubenswrapper[33867]: I0219 03:41:39.647281 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ddj2\" (UniqueName: \"kubernetes.io/projected/810cab61-d654-4926-a83f-51af67acafd0-kube-api-access-9ddj2\") pod \"swift-proxy-6b57897cc4-nd9ff\" (UID: \"810cab61-d654-4926-a83f-51af67acafd0\") " pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.748661 master-0 kubenswrapper[33867]: I0219 03:41:39.730761 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8bf57b44-qh2fj"] Feb 19 03:41:39.748661 master-0 
kubenswrapper[33867]: I0219 03:41:39.731025 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8bf57b44-qh2fj" podUID="b23c38ff-0149-4b73-a4dd-f6aae99512d0" containerName="neutron-api" containerID="cri-o://b1a122d0f945bf5254ddc70fbcf28ed8ce928b8999ecb30e5f20bd8a2a10bc62" gracePeriod=30 Feb 19 03:41:39.748661 master-0 kubenswrapper[33867]: I0219 03:41:39.731691 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8bf57b44-qh2fj" podUID="b23c38ff-0149-4b73-a4dd-f6aae99512d0" containerName="neutron-httpd" containerID="cri-o://a14fd526c0f0bc6abd26f9706021df407bb2614e997ea965690fdeaef153bf7d" gracePeriod=30 Feb 19 03:41:39.778101 master-0 kubenswrapper[33867]: I0219 03:41:39.777843 33867 generic.go:334] "Generic (PLEG): container finished" podID="8c70b7f1-846a-4be2-bdd1-9214e7e75866" containerID="2563a7263a151820b358208de27903388222556115ee5cc370d1acb4f022dc27" exitCode=143 Feb 19 03:41:39.778101 master-0 kubenswrapper[33867]: I0219 03:41:39.777935 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-internal-api-0" event={"ID":"8c70b7f1-846a-4be2-bdd1-9214e7e75866","Type":"ContainerDied","Data":"2563a7263a151820b358208de27903388222556115ee5cc370d1acb4f022dc27"} Feb 19 03:41:39.782860 master-0 kubenswrapper[33867]: I0219 03:41:39.782815 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-api-0" event={"ID":"76f7e0ac-da68-49e2-b643-53f9c614e19d","Type":"ContainerDied","Data":"6f9001c4038c200f6ff3d559aed05cc0d03b3c2bebd20d5d3f5acd793842c7e2"} Feb 19 03:41:39.782963 master-0 kubenswrapper[33867]: I0219 03:41:39.782885 33867 scope.go:117] "RemoveContainer" containerID="02d8c4a7ba4a68827423bebed8062278759249b93f8d9e239c301d82506a22cf" Feb 19 03:41:39.782963 master-0 kubenswrapper[33867]: I0219 03:41:39.782882 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-api-0" Feb 19 03:41:39.932726 master-0 kubenswrapper[33867]: I0219 03:41:39.927945 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:39.997890 master-0 kubenswrapper[33867]: I0219 03:41:39.997727 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-054a4-api-0"] Feb 19 03:41:40.039887 master-0 kubenswrapper[33867]: I0219 03:41:40.039813 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-054a4-api-0"] Feb 19 03:41:40.074325 master-0 kubenswrapper[33867]: I0219 03:41:40.074213 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-054a4-api-0"] Feb 19 03:41:40.077577 master-0 kubenswrapper[33867]: I0219 03:41:40.077524 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.080373 master-0 kubenswrapper[33867]: I0219 03:41:40.080311 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 03:41:40.080967 master-0 kubenswrapper[33867]: I0219 03:41:40.080938 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 03:41:40.081181 master-0 kubenswrapper[33867]: I0219 03:41:40.081150 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-054a4-api-config-data" Feb 19 03:41:40.105853 master-0 kubenswrapper[33867]: I0219 03:41:40.105665 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-api-0"] Feb 19 03:41:40.138683 master-0 kubenswrapper[33867]: I0219 03:41:40.131912 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-public-tls-certs\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.138683 master-0 kubenswrapper[33867]: I0219 03:41:40.132055 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-scripts\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.138683 master-0 kubenswrapper[33867]: I0219 03:41:40.132144 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-config-data\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.138683 master-0 kubenswrapper[33867]: I0219 03:41:40.132244 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-config-data-custom\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.138683 master-0 kubenswrapper[33867]: I0219 03:41:40.132373 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn7fp\" (UniqueName: \"kubernetes.io/projected/da327fb4-7852-4866-bb8f-8b2930854e24-kube-api-access-bn7fp\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.138683 master-0 kubenswrapper[33867]: I0219 03:41:40.132407 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-internal-tls-certs\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.138683 master-0 kubenswrapper[33867]: I0219 03:41:40.132546 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-combined-ca-bundle\") pod \"cinder-054a4-api-0\" (UID: 
\"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.138683 master-0 kubenswrapper[33867]: I0219 03:41:40.132718 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da327fb4-7852-4866-bb8f-8b2930854e24-logs\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.138683 master-0 kubenswrapper[33867]: I0219 03:41:40.132868 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da327fb4-7852-4866-bb8f-8b2930854e24-etc-machine-id\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.236615 master-0 kubenswrapper[33867]: I0219 03:41:40.236478 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-config-data-custom\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.236888 master-0 kubenswrapper[33867]: I0219 03:41:40.236670 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn7fp\" (UniqueName: \"kubernetes.io/projected/da327fb4-7852-4866-bb8f-8b2930854e24-kube-api-access-bn7fp\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.236888 master-0 kubenswrapper[33867]: I0219 03:41:40.236708 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-internal-tls-certs\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.236888 master-0 kubenswrapper[33867]: I0219 03:41:40.236868 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-combined-ca-bundle\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.236998 master-0 kubenswrapper[33867]: I0219 03:41:40.236971 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da327fb4-7852-4866-bb8f-8b2930854e24-logs\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.237086 master-0 kubenswrapper[33867]: I0219 03:41:40.237053 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da327fb4-7852-4866-bb8f-8b2930854e24-etc-machine-id\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.237142 master-0 kubenswrapper[33867]: I0219 03:41:40.237114 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-public-tls-certs\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" 
Feb 19 03:41:40.237202 master-0 kubenswrapper[33867]: I0219 03:41:40.237146 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-scripts\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.237389 master-0 kubenswrapper[33867]: I0219 03:41:40.237304 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da327fb4-7852-4866-bb8f-8b2930854e24-etc-machine-id\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.237547 master-0 kubenswrapper[33867]: I0219 03:41:40.237502 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-config-data\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.240975 master-0 kubenswrapper[33867]: I0219 03:41:40.240936 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da327fb4-7852-4866-bb8f-8b2930854e24-logs\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.243715 master-0 kubenswrapper[33867]: I0219 03:41:40.242609 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-config-data\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.243715 master-0 kubenswrapper[33867]: I0219 03:41:40.243533 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-scripts\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.247278 master-0 kubenswrapper[33867]: I0219 03:41:40.246833 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-config-data-custom\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.247278 master-0 kubenswrapper[33867]: I0219 03:41:40.246894 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-public-tls-certs\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.257353 master-0 kubenswrapper[33867]: I0219 03:41:40.257216 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-internal-tls-certs\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.260286 master-0 kubenswrapper[33867]: I0219 03:41:40.259577 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/da327fb4-7852-4866-bb8f-8b2930854e24-combined-ca-bundle\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.263319 master-0 kubenswrapper[33867]: I0219 03:41:40.262933 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn7fp\" (UniqueName: \"kubernetes.io/projected/da327fb4-7852-4866-bb8f-8b2930854e24-kube-api-access-bn7fp\") pod \"cinder-054a4-api-0\" (UID: \"da327fb4-7852-4866-bb8f-8b2930854e24\") " pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.436281 master-0 kubenswrapper[33867]: I0219 03:41:40.435868 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054a4-api-0" Feb 19 03:41:40.798288 master-0 kubenswrapper[33867]: I0219 03:41:40.797349 33867 generic.go:334] "Generic (PLEG): container finished" podID="b23c38ff-0149-4b73-a4dd-f6aae99512d0" containerID="a14fd526c0f0bc6abd26f9706021df407bb2614e997ea965690fdeaef153bf7d" exitCode=0 Feb 19 03:41:40.798288 master-0 kubenswrapper[33867]: I0219 03:41:40.797421 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8bf57b44-qh2fj" event={"ID":"b23c38ff-0149-4b73-a4dd-f6aae99512d0","Type":"ContainerDied","Data":"a14fd526c0f0bc6abd26f9706021df407bb2614e997ea965690fdeaef153bf7d"} Feb 19 03:41:40.989152 master-0 kubenswrapper[33867]: I0219 03:41:40.988097 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76f7e0ac-da68-49e2-b643-53f9c614e19d" path="/var/lib/kubelet/pods/76f7e0ac-da68-49e2-b643-53f9c614e19d/volumes" Feb 19 03:41:41.847521 master-0 kubenswrapper[33867]: I0219 03:41:41.847397 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:41:41.848102 master-0 kubenswrapper[33867]: I0219 03:41:41.847977 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-fa7ca-default-external-api-0" podUID="b19a1327-29e6-4354-bf31-ce295f5d758f" containerName="glance-log" containerID="cri-o://1eabca20093373df83e4b5f361eef75ee9bfee8f6d5428d367dd26bfe64d8506" gracePeriod=30 Feb 19 03:41:41.848195 master-0 kubenswrapper[33867]: I0219 03:41:41.848082 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-fa7ca-default-external-api-0" podUID="b19a1327-29e6-4354-bf31-ce295f5d758f" containerName="glance-httpd" containerID="cri-o://82132e2ca958fb96f24d639e7aeb7d9ac14df2a09348184346c53afe197cadad" gracePeriod=30 Feb 19 03:41:42.839305 master-0 kubenswrapper[33867]: I0219 03:41:42.839226 33867 generic.go:334] "Generic (PLEG): container finished" podID="b19a1327-29e6-4354-bf31-ce295f5d758f" containerID="1eabca20093373df83e4b5f361eef75ee9bfee8f6d5428d367dd26bfe64d8506" exitCode=143 Feb 19 03:41:42.839524 master-0 kubenswrapper[33867]: I0219 03:41:42.839351 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-external-api-0" event={"ID":"b19a1327-29e6-4354-bf31-ce295f5d758f","Type":"ContainerDied","Data":"1eabca20093373df83e4b5f361eef75ee9bfee8f6d5428d367dd26bfe64d8506"} Feb 19 03:41:42.843511 master-0 kubenswrapper[33867]: I0219 03:41:42.843325 33867 generic.go:334] "Generic (PLEG): container finished" podID="8c70b7f1-846a-4be2-bdd1-9214e7e75866" containerID="98edb312abf6d88201dd07ab17b30f07f7783fb53186b8f810ba90ae532fdae1" exitCode=0 Feb 19 03:41:42.843511 master-0 kubenswrapper[33867]: I0219 03:41:42.843412 33867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-fa7ca-default-internal-api-0" event={"ID":"8c70b7f1-846a-4be2-bdd1-9214e7e75866","Type":"ContainerDied","Data":"98edb312abf6d88201dd07ab17b30f07f7783fb53186b8f810ba90ae532fdae1"} Feb 19 03:41:43.858520 master-0 kubenswrapper[33867]: I0219 03:41:43.858442 33867 generic.go:334] "Generic (PLEG): container finished" podID="b23c38ff-0149-4b73-a4dd-f6aae99512d0" containerID="b1a122d0f945bf5254ddc70fbcf28ed8ce928b8999ecb30e5f20bd8a2a10bc62" exitCode=0 Feb 19 03:41:43.859329 master-0 kubenswrapper[33867]: I0219 03:41:43.858512 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8bf57b44-qh2fj" event={"ID":"b23c38ff-0149-4b73-a4dd-f6aae99512d0","Type":"ContainerDied","Data":"b1a122d0f945bf5254ddc70fbcf28ed8ce928b8999ecb30e5f20bd8a2a10bc62"} Feb 19 03:41:44.766680 master-0 kubenswrapper[33867]: I0219 03:41:44.763700 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-74msg"] Feb 19 03:41:44.768392 master-0 kubenswrapper[33867]: I0219 03:41:44.767765 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-74msg" Feb 19 03:41:44.803976 master-0 kubenswrapper[33867]: I0219 03:41:44.803906 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-74msg"] Feb 19 03:41:44.852159 master-0 kubenswrapper[33867]: I0219 03:41:44.852065 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vv24r"] Feb 19 03:41:44.855078 master-0 kubenswrapper[33867]: I0219 03:41:44.855034 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vv24r" Feb 19 03:41:44.872294 master-0 kubenswrapper[33867]: I0219 03:41:44.871712 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vv24r"] Feb 19 03:41:44.918595 master-0 kubenswrapper[33867]: I0219 03:41:44.918342 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/765534b3-48eb-4db3-9413-fbe831f2bf9f-operator-scripts\") pod \"nova-api-db-create-74msg\" (UID: \"765534b3-48eb-4db3-9413-fbe831f2bf9f\") " pod="openstack/nova-api-db-create-74msg" Feb 19 03:41:44.918987 master-0 kubenswrapper[33867]: I0219 03:41:44.918848 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djcv7\" (UniqueName: \"kubernetes.io/projected/765534b3-48eb-4db3-9413-fbe831f2bf9f-kube-api-access-djcv7\") pod \"nova-api-db-create-74msg\" (UID: \"765534b3-48eb-4db3-9413-fbe831f2bf9f\") " pod="openstack/nova-api-db-create-74msg" Feb 19 03:41:44.954111 master-0 kubenswrapper[33867]: I0219 03:41:44.950839 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-k2929"] Feb 19 03:41:44.954111 master-0 kubenswrapper[33867]: I0219 03:41:44.952748 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-k2929" Feb 19 03:41:45.022558 master-0 kubenswrapper[33867]: I0219 03:41:45.022326 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-k2929"] Feb 19 03:41:45.024922 master-0 kubenswrapper[33867]: I0219 03:41:45.024847 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/765534b3-48eb-4db3-9413-fbe831f2bf9f-operator-scripts\") pod \"nova-api-db-create-74msg\" (UID: \"765534b3-48eb-4db3-9413-fbe831f2bf9f\") " pod="openstack/nova-api-db-create-74msg" Feb 19 03:41:45.025020 master-0 kubenswrapper[33867]: I0219 03:41:45.024984 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e338259-396c-42e3-9a9d-235ec62fb521-operator-scripts\") pod \"nova-cell0-db-create-vv24r\" (UID: \"4e338259-396c-42e3-9a9d-235ec62fb521\") " pod="openstack/nova-cell0-db-create-vv24r" Feb 19 03:41:45.025139 master-0 kubenswrapper[33867]: I0219 03:41:45.025109 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l74vj\" (UniqueName: \"kubernetes.io/projected/4e338259-396c-42e3-9a9d-235ec62fb521-kube-api-access-l74vj\") pod \"nova-cell0-db-create-vv24r\" (UID: \"4e338259-396c-42e3-9a9d-235ec62fb521\") " pod="openstack/nova-cell0-db-create-vv24r" Feb 19 03:41:45.025188 master-0 kubenswrapper[33867]: I0219 03:41:45.025158 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djcv7\" (UniqueName: \"kubernetes.io/projected/765534b3-48eb-4db3-9413-fbe831f2bf9f-kube-api-access-djcv7\") pod \"nova-api-db-create-74msg\" (UID: \"765534b3-48eb-4db3-9413-fbe831f2bf9f\") " pod="openstack/nova-api-db-create-74msg" Feb 19 03:41:45.029422 master-0 kubenswrapper[33867]: I0219 03:41:45.027208 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/765534b3-48eb-4db3-9413-fbe831f2bf9f-operator-scripts\") pod \"nova-api-db-create-74msg\" (UID: \"765534b3-48eb-4db3-9413-fbe831f2bf9f\") " pod="openstack/nova-api-db-create-74msg" Feb 19 03:41:45.044204 master-0 kubenswrapper[33867]: I0219 03:41:45.044128 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1db7-account-create-update-kprcb"] Feb 19 03:41:45.048442 master-0 kubenswrapper[33867]: I0219 03:41:45.048398 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1db7-account-create-update-kprcb" Feb 19 03:41:45.051092 master-0 kubenswrapper[33867]: I0219 03:41:45.051055 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 03:41:45.056282 master-0 kubenswrapper[33867]: I0219 03:41:45.056232 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djcv7\" (UniqueName: \"kubernetes.io/projected/765534b3-48eb-4db3-9413-fbe831f2bf9f-kube-api-access-djcv7\") pod \"nova-api-db-create-74msg\" (UID: \"765534b3-48eb-4db3-9413-fbe831f2bf9f\") " pod="openstack/nova-api-db-create-74msg" Feb 19 03:41:45.081290 master-0 kubenswrapper[33867]: I0219 03:41:45.080638 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1db7-account-create-update-kprcb"] Feb 19 03:41:45.118929 master-0 kubenswrapper[33867]: I0219 03:41:45.118813 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-74msg" Feb 19 03:41:45.131068 master-0 kubenswrapper[33867]: I0219 03:41:45.131006 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e338259-396c-42e3-9a9d-235ec62fb521-operator-scripts\") pod \"nova-cell0-db-create-vv24r\" (UID: \"4e338259-396c-42e3-9a9d-235ec62fb521\") " pod="openstack/nova-cell0-db-create-vv24r" Feb 19 03:41:45.131277 master-0 kubenswrapper[33867]: I0219 03:41:45.131209 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2afeaeae-53cb-4753-8240-ed7c0a892395-operator-scripts\") pod \"nova-cell1-db-create-k2929\" (UID: \"2afeaeae-53cb-4753-8240-ed7c0a892395\") " pod="openstack/nova-cell1-db-create-k2929" Feb 19 03:41:45.131337 master-0 kubenswrapper[33867]: I0219 03:41:45.131277 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l74vj\" (UniqueName: \"kubernetes.io/projected/4e338259-396c-42e3-9a9d-235ec62fb521-kube-api-access-l74vj\") pod \"nova-cell0-db-create-vv24r\" (UID: \"4e338259-396c-42e3-9a9d-235ec62fb521\") " pod="openstack/nova-cell0-db-create-vv24r" Feb 19 03:41:45.132176 master-0 kubenswrapper[33867]: I0219 03:41:45.131436 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bww\" (UniqueName: \"kubernetes.io/projected/2afeaeae-53cb-4753-8240-ed7c0a892395-kube-api-access-k5bww\") pod \"nova-cell1-db-create-k2929\" (UID: \"2afeaeae-53cb-4753-8240-ed7c0a892395\") " pod="openstack/nova-cell1-db-create-k2929" Feb 19 03:41:45.132817 master-0 kubenswrapper[33867]: I0219 03:41:45.132782 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e338259-396c-42e3-9a9d-235ec62fb521-operator-scripts\") pod \"nova-cell0-db-create-vv24r\" (UID: \"4e338259-396c-42e3-9a9d-235ec62fb521\") " pod="openstack/nova-cell0-db-create-vv24r" Feb 19 03:41:45.151540 master-0 kubenswrapper[33867]: I0219 03:41:45.151399 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-360e-account-create-update-mwmgf"] Feb 19 03:41:45.153755 master-0 kubenswrapper[33867]: I0219 03:41:45.153668 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-360e-account-create-update-mwmgf" Feb 19 03:41:45.157996 master-0 kubenswrapper[33867]: I0219 03:41:45.157931 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 03:41:45.159832 master-0 kubenswrapper[33867]: I0219 03:41:45.159779 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l74vj\" (UniqueName: \"kubernetes.io/projected/4e338259-396c-42e3-9a9d-235ec62fb521-kube-api-access-l74vj\") pod \"nova-cell0-db-create-vv24r\" (UID: \"4e338259-396c-42e3-9a9d-235ec62fb521\") " pod="openstack/nova-cell0-db-create-vv24r" Feb 19 03:41:45.168066 master-0 kubenswrapper[33867]: I0219 03:41:45.164829 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-360e-account-create-update-mwmgf"] Feb 19 03:41:45.193614 master-0 kubenswrapper[33867]: I0219 03:41:45.193402 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vv24r" Feb 19 03:41:45.234237 master-0 kubenswrapper[33867]: I0219 03:41:45.234148 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5bww\" (UniqueName: \"kubernetes.io/projected/2afeaeae-53cb-4753-8240-ed7c0a892395-kube-api-access-k5bww\") pod \"nova-cell1-db-create-k2929\" (UID: \"2afeaeae-53cb-4753-8240-ed7c0a892395\") " pod="openstack/nova-cell1-db-create-k2929" Feb 19 03:41:45.235075 master-0 kubenswrapper[33867]: I0219 03:41:45.234989 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzgms\" (UniqueName: \"kubernetes.io/projected/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-kube-api-access-pzgms\") pod \"nova-api-1db7-account-create-update-kprcb\" (UID: \"6ed5cbcb-0a9e-4561-b21e-0c84b806e725\") " pod="openstack/nova-api-1db7-account-create-update-kprcb" Feb 19 03:41:45.235321 master-0 kubenswrapper[33867]: I0219 03:41:45.235299 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-operator-scripts\") pod \"nova-api-1db7-account-create-update-kprcb\" (UID: \"6ed5cbcb-0a9e-4561-b21e-0c84b806e725\") " pod="openstack/nova-api-1db7-account-create-update-kprcb" Feb 19 03:41:45.235706 master-0 kubenswrapper[33867]: I0219 03:41:45.235689 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2afeaeae-53cb-4753-8240-ed7c0a892395-operator-scripts\") pod \"nova-cell1-db-create-k2929\" (UID: \"2afeaeae-53cb-4753-8240-ed7c0a892395\") " pod="openstack/nova-cell1-db-create-k2929" Feb 19 03:41:45.237125 master-0 kubenswrapper[33867]: I0219 03:41:45.237107 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2afeaeae-53cb-4753-8240-ed7c0a892395-operator-scripts\") pod \"nova-cell1-db-create-k2929\" (UID: \"2afeaeae-53cb-4753-8240-ed7c0a892395\") " pod="openstack/nova-cell1-db-create-k2929" Feb 19 03:41:45.255327 master-0 kubenswrapper[33867]: I0219 03:41:45.255230 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5bww\" (UniqueName: \"kubernetes.io/projected/2afeaeae-53cb-4753-8240-ed7c0a892395-kube-api-access-k5bww\") pod \"nova-cell1-db-create-k2929\" (UID: \"2afeaeae-53cb-4753-8240-ed7c0a892395\") " 
pod="openstack/nova-cell1-db-create-k2929" Feb 19 03:41:45.283226 master-0 kubenswrapper[33867]: I0219 03:41:45.283079 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-k2929" Feb 19 03:41:45.339722 master-0 kubenswrapper[33867]: I0219 03:41:45.339324 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8fffe4-e342-4016-a543-c65edd216c52-operator-scripts\") pod \"nova-cell0-360e-account-create-update-mwmgf\" (UID: \"de8fffe4-e342-4016-a543-c65edd216c52\") " pod="openstack/nova-cell0-360e-account-create-update-mwmgf" Feb 19 03:41:45.339722 master-0 kubenswrapper[33867]: I0219 03:41:45.339443 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzgms\" (UniqueName: \"kubernetes.io/projected/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-kube-api-access-pzgms\") pod \"nova-api-1db7-account-create-update-kprcb\" (UID: \"6ed5cbcb-0a9e-4561-b21e-0c84b806e725\") " pod="openstack/nova-api-1db7-account-create-update-kprcb" Feb 19 03:41:45.339722 master-0 kubenswrapper[33867]: I0219 03:41:45.339492 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2ndf\" (UniqueName: \"kubernetes.io/projected/de8fffe4-e342-4016-a543-c65edd216c52-kube-api-access-x2ndf\") pod \"nova-cell0-360e-account-create-update-mwmgf\" (UID: \"de8fffe4-e342-4016-a543-c65edd216c52\") " pod="openstack/nova-cell0-360e-account-create-update-mwmgf" Feb 19 03:41:45.339722 master-0 kubenswrapper[33867]: I0219 03:41:45.339555 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-operator-scripts\") pod \"nova-api-1db7-account-create-update-kprcb\" (UID: \"6ed5cbcb-0a9e-4561-b21e-0c84b806e725\") " pod="openstack/nova-api-1db7-account-create-update-kprcb" Feb 19 03:41:45.343584 master-0 kubenswrapper[33867]: I0219 03:41:45.342954 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-operator-scripts\") pod \"nova-api-1db7-account-create-update-kprcb\" (UID: \"6ed5cbcb-0a9e-4561-b21e-0c84b806e725\") " pod="openstack/nova-api-1db7-account-create-update-kprcb" Feb 19 03:41:45.362395 master-0 kubenswrapper[33867]: I0219 03:41:45.362294 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ab43-account-create-update-jwqxb"] Feb 19 03:41:45.387953 master-0 kubenswrapper[33867]: I0219 03:41:45.387834 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ab43-account-create-update-jwqxb"] Feb 19 03:41:45.394076 master-0 kubenswrapper[33867]: I0219 03:41:45.394043 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzgms\" (UniqueName: \"kubernetes.io/projected/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-kube-api-access-pzgms\") pod \"nova-api-1db7-account-create-update-kprcb\" (UID: \"6ed5cbcb-0a9e-4561-b21e-0c84b806e725\") " pod="openstack/nova-api-1db7-account-create-update-kprcb" Feb 19 03:41:45.396183 master-0 kubenswrapper[33867]: I0219 03:41:45.396068 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" Feb 19 03:41:45.398472 master-0 kubenswrapper[33867]: I0219 03:41:45.398435 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 03:41:45.441661 master-0 kubenswrapper[33867]: I0219 03:41:45.441601 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2ndf\" (UniqueName: \"kubernetes.io/projected/de8fffe4-e342-4016-a543-c65edd216c52-kube-api-access-x2ndf\") pod \"nova-cell0-360e-account-create-update-mwmgf\" (UID: \"de8fffe4-e342-4016-a543-c65edd216c52\") " pod="openstack/nova-cell0-360e-account-create-update-mwmgf" Feb 19 03:41:45.441905 master-0 kubenswrapper[33867]: I0219 03:41:45.441882 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8fffe4-e342-4016-a543-c65edd216c52-operator-scripts\") pod \"nova-cell0-360e-account-create-update-mwmgf\" (UID: \"de8fffe4-e342-4016-a543-c65edd216c52\") " pod="openstack/nova-cell0-360e-account-create-update-mwmgf" Feb 19 03:41:45.443055 master-0 kubenswrapper[33867]: I0219 03:41:45.443026 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8fffe4-e342-4016-a543-c65edd216c52-operator-scripts\") pod \"nova-cell0-360e-account-create-update-mwmgf\" (UID: \"de8fffe4-e342-4016-a543-c65edd216c52\") " pod="openstack/nova-cell0-360e-account-create-update-mwmgf" Feb 19 03:41:45.464525 master-0 kubenswrapper[33867]: I0219 03:41:45.464478 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2ndf\" (UniqueName: \"kubernetes.io/projected/de8fffe4-e342-4016-a543-c65edd216c52-kube-api-access-x2ndf\") pod \"nova-cell0-360e-account-create-update-mwmgf\" (UID: \"de8fffe4-e342-4016-a543-c65edd216c52\") " pod="openstack/nova-cell0-360e-account-create-update-mwmgf" Feb 19 03:41:45.549759 master-0 kubenswrapper[33867]: I0219 03:41:45.549457 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvpq\" (UniqueName: \"kubernetes.io/projected/92a62e19-1f19-49fd-b843-eafb8bc78662-kube-api-access-nbvpq\") pod \"nova-cell1-ab43-account-create-update-jwqxb\" (UID: \"92a62e19-1f19-49fd-b843-eafb8bc78662\") " pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" Feb 19 03:41:45.549759 master-0 kubenswrapper[33867]: I0219 03:41:45.549607 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92a62e19-1f19-49fd-b843-eafb8bc78662-operator-scripts\") pod \"nova-cell1-ab43-account-create-update-jwqxb\" (UID: \"92a62e19-1f19-49fd-b843-eafb8bc78662\") " pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" Feb 19 03:41:45.642649 master-0 kubenswrapper[33867]: I0219 03:41:45.642543 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1db7-account-create-update-kprcb" Feb 19 03:41:45.652075 master-0 kubenswrapper[33867]: I0219 03:41:45.652025 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92a62e19-1f19-49fd-b843-eafb8bc78662-operator-scripts\") pod \"nova-cell1-ab43-account-create-update-jwqxb\" (UID: \"92a62e19-1f19-49fd-b843-eafb8bc78662\") " pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" Feb 19 03:41:45.652339 master-0 kubenswrapper[33867]: I0219 03:41:45.652302 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbvpq\" (UniqueName: \"kubernetes.io/projected/92a62e19-1f19-49fd-b843-eafb8bc78662-kube-api-access-nbvpq\") pod \"nova-cell1-ab43-account-create-update-jwqxb\" (UID: \"92a62e19-1f19-49fd-b843-eafb8bc78662\") " pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" Feb 19 03:41:45.652941 master-0 kubenswrapper[33867]: I0219 03:41:45.652891 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92a62e19-1f19-49fd-b843-eafb8bc78662-operator-scripts\") pod \"nova-cell1-ab43-account-create-update-jwqxb\" (UID: \"92a62e19-1f19-49fd-b843-eafb8bc78662\") " pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" Feb 19 03:41:45.666766 master-0 kubenswrapper[33867]: I0219 03:41:45.660319 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-360e-account-create-update-mwmgf" Feb 19 03:41:45.669969 master-0 kubenswrapper[33867]: I0219 03:41:45.669917 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbvpq\" (UniqueName: \"kubernetes.io/projected/92a62e19-1f19-49fd-b843-eafb8bc78662-kube-api-access-nbvpq\") pod \"nova-cell1-ab43-account-create-update-jwqxb\" (UID: \"92a62e19-1f19-49fd-b843-eafb8bc78662\") " pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" Feb 19 03:41:45.744515 master-0 kubenswrapper[33867]: I0219 03:41:45.744414 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" Feb 19 03:41:45.914171 master-0 kubenswrapper[33867]: I0219 03:41:45.914058 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-external-api-0" event={"ID":"b19a1327-29e6-4354-bf31-ce295f5d758f","Type":"ContainerDied","Data":"82132e2ca958fb96f24d639e7aeb7d9ac14df2a09348184346c53afe197cadad"} Feb 19 03:41:45.914171 master-0 kubenswrapper[33867]: I0219 03:41:45.914087 33867 generic.go:334] "Generic (PLEG): container finished" podID="b19a1327-29e6-4354-bf31-ce295f5d758f" containerID="82132e2ca958fb96f24d639e7aeb7d9ac14df2a09348184346c53afe197cadad" exitCode=0 Feb 19 03:41:46.674368 master-0 kubenswrapper[33867]: I0219 03:41:46.674317 33867 scope.go:117] "RemoveContainer" containerID="478c31c276f9022a5870fc83f58d6f9fdcecb2fa0129b84e9b9d9edd9a1e3c2e" Feb 19 03:41:47.574146 master-0 kubenswrapper[33867]: I0219 03:41:47.570640 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:47.683858 master-0 kubenswrapper[33867]: I0219 03:41:47.683752 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-internal-tls-certs\") pod \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " Feb 19 03:41:47.684168 master-0 kubenswrapper[33867]: I0219 03:41:47.684111 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn252\" (UniqueName: \"kubernetes.io/projected/8c70b7f1-846a-4be2-bdd1-9214e7e75866-kube-api-access-pn252\") pod \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " Feb 19 03:41:47.684274 master-0 kubenswrapper[33867]: I0219 03:41:47.684233 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-logs\") pod \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " Feb 19 03:41:47.684815 master-0 kubenswrapper[33867]: I0219 03:41:47.684758 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-config-data\") pod \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " Feb 19 03:41:47.684924 master-0 kubenswrapper[33867]: I0219 03:41:47.684896 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-httpd-run\") pod \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " Feb 19 03:41:47.685462 master-0 kubenswrapper[33867]: I0219 03:41:47.685225 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52\") pod \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " Feb 19 03:41:47.685462 master-0 kubenswrapper[33867]: I0219 03:41:47.685343 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-combined-ca-bundle\") pod \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " Feb 19 03:41:47.685600 master-0 kubenswrapper[33867]: I0219 03:41:47.685539 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-scripts\") pod \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\" (UID: \"8c70b7f1-846a-4be2-bdd1-9214e7e75866\") " Feb 19 03:41:47.690021 master-0 kubenswrapper[33867]: I0219 03:41:47.689440 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8c70b7f1-846a-4be2-bdd1-9214e7e75866" (UID: "8c70b7f1-846a-4be2-bdd1-9214e7e75866"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:41:47.690021 master-0 kubenswrapper[33867]: I0219 03:41:47.689907 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-logs" (OuterVolumeSpecName: "logs") pod "8c70b7f1-846a-4be2-bdd1-9214e7e75866" (UID: "8c70b7f1-846a-4be2-bdd1-9214e7e75866"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:41:47.705940 master-0 kubenswrapper[33867]: I0219 03:41:47.705794 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-scripts" (OuterVolumeSpecName: "scripts") pod "8c70b7f1-846a-4be2-bdd1-9214e7e75866" (UID: "8c70b7f1-846a-4be2-bdd1-9214e7e75866"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:47.721554 master-0 kubenswrapper[33867]: I0219 03:41:47.721434 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52" (OuterVolumeSpecName: "glance") pod "8c70b7f1-846a-4be2-bdd1-9214e7e75866" (UID: "8c70b7f1-846a-4be2-bdd1-9214e7e75866"). InnerVolumeSpecName "pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 03:41:47.725138 master-0 kubenswrapper[33867]: I0219 03:41:47.724900 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c70b7f1-846a-4be2-bdd1-9214e7e75866-kube-api-access-pn252" (OuterVolumeSpecName: "kube-api-access-pn252") pod "8c70b7f1-846a-4be2-bdd1-9214e7e75866" (UID: "8c70b7f1-846a-4be2-bdd1-9214e7e75866"). InnerVolumeSpecName "kube-api-access-pn252". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:47.790234 master-0 kubenswrapper[33867]: I0219 03:41:47.789339 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:47.790234 master-0 kubenswrapper[33867]: I0219 03:41:47.789396 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn252\" (UniqueName: \"kubernetes.io/projected/8c70b7f1-846a-4be2-bdd1-9214e7e75866-kube-api-access-pn252\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:47.790234 master-0 kubenswrapper[33867]: I0219 03:41:47.789408 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:47.790234 master-0 kubenswrapper[33867]: I0219 03:41:47.789417 33867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c70b7f1-846a-4be2-bdd1-9214e7e75866-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:47.790234 master-0 kubenswrapper[33867]: I0219 03:41:47.789443 33867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52\") on node \"master-0\" " Feb 19 03:41:47.798918 master-0 kubenswrapper[33867]: I0219 03:41:47.798527 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c70b7f1-846a-4be2-bdd1-9214e7e75866" (UID: "8c70b7f1-846a-4be2-bdd1-9214e7e75866"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:47.833518 master-0 kubenswrapper[33867]: I0219 03:41:47.829226 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-config-data" (OuterVolumeSpecName: "config-data") pod "8c70b7f1-846a-4be2-bdd1-9214e7e75866" (UID: "8c70b7f1-846a-4be2-bdd1-9214e7e75866"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:47.859062 master-0 kubenswrapper[33867]: I0219 03:41:47.855113 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8c70b7f1-846a-4be2-bdd1-9214e7e75866" (UID: "8c70b7f1-846a-4be2-bdd1-9214e7e75866"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:47.891593 master-0 kubenswrapper[33867]: I0219 03:41:47.891485 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:47.891593 master-0 kubenswrapper[33867]: I0219 03:41:47.891559 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:47.891593 master-0 kubenswrapper[33867]: I0219 03:41:47.891576 33867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c70b7f1-846a-4be2-bdd1-9214e7e75866-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:47.901854 master-0 kubenswrapper[33867]: I0219 03:41:47.901804 33867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 03:41:47.902093 master-0 kubenswrapper[33867]: I0219 03:41:47.902059 33867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b" (UniqueName: "kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52") on node "master-0" Feb 19 03:41:47.975245 master-0 kubenswrapper[33867]: I0219 03:41:47.975095 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:47.993183 master-0 kubenswrapper[33867]: I0219 03:41:47.993010 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0297a953-f1ca-434c-a52b-bd94277921f3","Type":"ContainerStarted","Data":"df5eacf63f5c5cc747a52a4ed546ef0d75cce3b26ae27f4f0f59ad508d738a84"} Feb 19 03:41:47.995171 master-0 kubenswrapper[33867]: I0219 03:41:47.994945 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-config-data\") pod \"b19a1327-29e6-4354-bf31-ce295f5d758f\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " Feb 19 03:41:47.995278 master-0 kubenswrapper[33867]: I0219 03:41:47.995230 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-scripts\") pod \"b19a1327-29e6-4354-bf31-ce295f5d758f\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " Feb 19 03:41:47.995336 master-0 kubenswrapper[33867]: I0219 03:41:47.995310 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-logs\") pod \"b19a1327-29e6-4354-bf31-ce295f5d758f\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " Feb 19 03:41:47.995528 master-0 kubenswrapper[33867]: I0219 03:41:47.995482 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"b19a1327-29e6-4354-bf31-ce295f5d758f\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " Feb 19 03:41:47.995704 master-0 kubenswrapper[33867]: I0219 03:41:47.995665 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-combined-ca-bundle\") pod \"b19a1327-29e6-4354-bf31-ce295f5d758f\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " Feb 19 03:41:47.995918 master-0 kubenswrapper[33867]: I0219 03:41:47.995798 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-httpd-run\") pod \"b19a1327-29e6-4354-bf31-ce295f5d758f\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " Feb 19 03:41:47.995918 master-0 kubenswrapper[33867]: I0219 03:41:47.995860 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc798\" (UniqueName: \"kubernetes.io/projected/b19a1327-29e6-4354-bf31-ce295f5d758f-kube-api-access-qc798\") pod \"b19a1327-29e6-4354-bf31-ce295f5d758f\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " Feb 19 03:41:47.995918 master-0 kubenswrapper[33867]: I0219 03:41:47.995889 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-public-tls-certs\") pod \"b19a1327-29e6-4354-bf31-ce295f5d758f\" (UID: \"b19a1327-29e6-4354-bf31-ce295f5d758f\") " Feb 19 03:41:47.996216 master-0 kubenswrapper[33867]: I0219 03:41:47.996111 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-logs" (OuterVolumeSpecName: "logs") pod "b19a1327-29e6-4354-bf31-ce295f5d758f" (UID: "b19a1327-29e6-4354-bf31-ce295f5d758f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:41:47.996979 master-0 kubenswrapper[33867]: I0219 03:41:47.996733 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b19a1327-29e6-4354-bf31-ce295f5d758f" (UID: "b19a1327-29e6-4354-bf31-ce295f5d758f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:41:47.997604 master-0 kubenswrapper[33867]: I0219 03:41:47.997371 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:47.997604 master-0 kubenswrapper[33867]: I0219 03:41:47.997403 33867 reconciler_common.go:293] "Volume detached for volume \"pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:47.997604 master-0 kubenswrapper[33867]: I0219 03:41:47.997418 33867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b19a1327-29e6-4354-bf31-ce295f5d758f-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:48.020625 master-0 kubenswrapper[33867]: I0219 03:41:48.020506 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140" (OuterVolumeSpecName: "glance") pod "b19a1327-29e6-4354-bf31-ce295f5d758f" (UID: "b19a1327-29e6-4354-bf31-ce295f5d758f"). InnerVolumeSpecName "pvc-9b4cd943-1f61-4b27-8790-991add37bfec". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 03:41:48.020869 master-0 kubenswrapper[33867]: I0219 03:41:48.020679 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-scripts" (OuterVolumeSpecName: "scripts") pod "b19a1327-29e6-4354-bf31-ce295f5d758f" (UID: "b19a1327-29e6-4354-bf31-ce295f5d758f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:48.026233 master-0 kubenswrapper[33867]: I0219 03:41:48.025204 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19a1327-29e6-4354-bf31-ce295f5d758f-kube-api-access-qc798" (OuterVolumeSpecName: "kube-api-access-qc798") pod "b19a1327-29e6-4354-bf31-ce295f5d758f" (UID: "b19a1327-29e6-4354-bf31-ce295f5d758f"). InnerVolumeSpecName "kube-api-access-qc798". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:48.042658 master-0 kubenswrapper[33867]: I0219 03:41:48.032282 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.042658 master-0 kubenswrapper[33867]: I0219 03:41:48.033335 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-external-api-0" event={"ID":"b19a1327-29e6-4354-bf31-ce295f5d758f","Type":"ContainerDied","Data":"43682dd16568077d11bf64a1f627717bf368aad3364940e20a5fa43ac8a3d580"} Feb 19 03:41:48.042658 master-0 kubenswrapper[33867]: I0219 03:41:48.033666 33867 scope.go:117] "RemoveContainer" containerID="82132e2ca958fb96f24d639e7aeb7d9ac14df2a09348184346c53afe197cadad" Feb 19 03:41:48.061527 master-0 kubenswrapper[33867]: I0219 03:41:48.061402 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.796179504 podStartE2EDuration="14.061377429s" podCreationTimestamp="2026-02-19 03:41:34 +0000 UTC" firstStartedPulling="2026-02-19 03:41:35.812425785 +0000 UTC m=+1101.109096396" lastFinishedPulling="2026-02-19 03:41:47.07762371 +0000 UTC m=+1112.374294321" observedRunningTime="2026-02-19 03:41:48.034406805 +0000 UTC m=+1113.331077426" watchObservedRunningTime="2026-02-19 03:41:48.061377429 +0000 UTC m=+1113.358048050" Feb 19 03:41:48.065987 master-0 kubenswrapper[33867]: I0219 03:41:48.063085 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-internal-api-0" event={"ID":"8c70b7f1-846a-4be2-bdd1-9214e7e75866","Type":"ContainerDied","Data":"0e19237dfb9bba5851a65e1becb3c2f9f1ef4af461b0e7c5ef95dbe0c3219e36"} Feb 19 03:41:48.065987 master-0 kubenswrapper[33867]: I0219 03:41:48.063369 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.072150 master-0 kubenswrapper[33867]: I0219 03:41:48.072045 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-nrrkp" event={"ID":"c772151f-fa4c-44ae-8d31-3e53872c20e7","Type":"ContainerStarted","Data":"abaee50973a80a362a798731ce0802ec29104a410488ef8a45f9ffbf5fbb5e0d"} Feb 19 03:41:48.087557 master-0 kubenswrapper[33867]: I0219 03:41:48.087449 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b19a1327-29e6-4354-bf31-ce295f5d758f" (UID: "b19a1327-29e6-4354-bf31-ce295f5d758f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:48.110048 master-0 kubenswrapper[33867]: I0219 03:41:48.109971 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:48.110307 master-0 kubenswrapper[33867]: I0219 03:41:48.110094 33867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") on node \"master-0\" " Feb 19 03:41:48.110307 master-0 kubenswrapper[33867]: I0219 03:41:48.110117 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:48.110307 master-0 kubenswrapper[33867]: I0219 03:41:48.110136 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc798\" (UniqueName: \"kubernetes.io/projected/b19a1327-29e6-4354-bf31-ce295f5d758f-kube-api-access-qc798\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:48.117112 master-0 kubenswrapper[33867]: I0219 03:41:48.117031 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-nrrkp" podStartSLOduration=3.98106333 podStartE2EDuration="16.117004554s" podCreationTimestamp="2026-02-19 03:41:32 +0000 UTC" firstStartedPulling="2026-02-19 03:41:34.779502473 +0000 UTC m=+1100.076173084" lastFinishedPulling="2026-02-19 03:41:46.915443687 +0000 UTC m=+1112.212114308" observedRunningTime="2026-02-19 03:41:48.097188693 +0000 UTC m=+1113.393859304" watchObservedRunningTime="2026-02-19 03:41:48.117004554 +0000 UTC m=+1113.413675165" Feb 19 03:41:48.154710 master-0 kubenswrapper[33867]: I0219 03:41:48.154657 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-config-data" (OuterVolumeSpecName: "config-data") pod "b19a1327-29e6-4354-bf31-ce295f5d758f" (UID: "b19a1327-29e6-4354-bf31-ce295f5d758f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:48.194963 master-0 kubenswrapper[33867]: I0219 03:41:48.194277 33867 scope.go:117] "RemoveContainer" containerID="1eabca20093373df83e4b5f361eef75ee9bfee8f6d5428d367dd26bfe64d8506" Feb 19 03:41:48.195423 master-0 kubenswrapper[33867]: I0219 03:41:48.195337 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b19a1327-29e6-4354-bf31-ce295f5d758f" (UID: "b19a1327-29e6-4354-bf31-ce295f5d758f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:48.221687 master-0 kubenswrapper[33867]: I0219 03:41:48.214215 33867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:48.231277 master-0 kubenswrapper[33867]: I0219 03:41:48.230550 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19a1327-29e6-4354-bf31-ce295f5d758f-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:48.231277 master-0 kubenswrapper[33867]: I0219 03:41:48.215661 33867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 03:41:48.231277 master-0 kubenswrapper[33867]: I0219 03:41:48.230848 33867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9b4cd943-1f61-4b27-8790-991add37bfec" (UniqueName: "kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140") on node "master-0" Feb 19 03:41:48.231277 master-0 kubenswrapper[33867]: I0219 03:41:48.213458 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-fa7ca-default-internal-api-0"] Feb 19 03:41:48.271452 master-0 kubenswrapper[33867]: I0219 03:41:48.271357 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-fa7ca-default-internal-api-0"] Feb 19 03:41:48.280797 master-0 kubenswrapper[33867]: I0219 03:41:48.280686 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-fa7ca-default-internal-api-0"] Feb 19 03:41:48.281696 master-0 kubenswrapper[33867]: E0219 03:41:48.281645 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c70b7f1-846a-4be2-bdd1-9214e7e75866" containerName="glance-log" Feb 19 03:41:48.281696 master-0 kubenswrapper[33867]: I0219 03:41:48.281684 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c70b7f1-846a-4be2-bdd1-9214e7e75866" containerName="glance-log" Feb 19 03:41:48.281849 master-0 kubenswrapper[33867]: E0219 03:41:48.281718 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19a1327-29e6-4354-bf31-ce295f5d758f" containerName="glance-log" Feb 19 03:41:48.281849 master-0 kubenswrapper[33867]: I0219 03:41:48.281729 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19a1327-29e6-4354-bf31-ce295f5d758f" containerName="glance-log" Feb 19 03:41:48.282002 master-0 kubenswrapper[33867]: E0219 03:41:48.281755 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c70b7f1-846a-4be2-bdd1-9214e7e75866" containerName="glance-httpd" Feb 19 03:41:48.282002 master-0 kubenswrapper[33867]: I0219 03:41:48.281997 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c70b7f1-846a-4be2-bdd1-9214e7e75866" 
containerName="glance-httpd" Feb 19 03:41:48.282115 master-0 kubenswrapper[33867]: E0219 03:41:48.282039 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19a1327-29e6-4354-bf31-ce295f5d758f" containerName="glance-httpd" Feb 19 03:41:48.282115 master-0 kubenswrapper[33867]: I0219 03:41:48.282051 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19a1327-29e6-4354-bf31-ce295f5d758f" containerName="glance-httpd" Feb 19 03:41:48.282516 master-0 kubenswrapper[33867]: I0219 03:41:48.282469 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19a1327-29e6-4354-bf31-ce295f5d758f" containerName="glance-httpd" Feb 19 03:41:48.282604 master-0 kubenswrapper[33867]: I0219 03:41:48.282521 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c70b7f1-846a-4be2-bdd1-9214e7e75866" containerName="glance-httpd" Feb 19 03:41:48.282604 master-0 kubenswrapper[33867]: I0219 03:41:48.282557 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c70b7f1-846a-4be2-bdd1-9214e7e75866" containerName="glance-log" Feb 19 03:41:48.282604 master-0 kubenswrapper[33867]: I0219 03:41:48.282586 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19a1327-29e6-4354-bf31-ce295f5d758f" containerName="glance-log" Feb 19 03:41:48.285504 master-0 kubenswrapper[33867]: I0219 03:41:48.285440 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.287915 master-0 kubenswrapper[33867]: I0219 03:41:48.287848 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-fa7ca-default-internal-config-data" Feb 19 03:41:48.289465 master-0 kubenswrapper[33867]: I0219 03:41:48.288825 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 03:41:48.297940 master-0 kubenswrapper[33867]: I0219 03:41:48.297562 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fa7ca-default-internal-api-0"] Feb 19 03:41:48.321705 master-0 kubenswrapper[33867]: I0219 03:41:48.314532 33867 scope.go:117] "RemoveContainer" containerID="98edb312abf6d88201dd07ab17b30f07f7783fb53186b8f810ba90ae532fdae1" Feb 19 03:41:48.342310 master-0 kubenswrapper[33867]: I0219 03:41:48.339294 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.342310 master-0 kubenswrapper[33867]: I0219 03:41:48.339583 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-combined-ca-bundle\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.342310 master-0 kubenswrapper[33867]: I0219 03:41:48.339690 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f80387f-955e-4858-ad6b-fcfe3585e929-logs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " 
pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.342310 master-0 kubenswrapper[33867]: I0219 03:41:48.339740 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-internal-tls-certs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.342310 master-0 kubenswrapper[33867]: I0219 03:41:48.339817 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-config-data\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.342310 master-0 kubenswrapper[33867]: I0219 03:41:48.340098 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbkn8\" (UniqueName: \"kubernetes.io/projected/5f80387f-955e-4858-ad6b-fcfe3585e929-kube-api-access-cbkn8\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.342310 master-0 kubenswrapper[33867]: I0219 03:41:48.340129 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-scripts\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.342310 master-0 kubenswrapper[33867]: I0219 03:41:48.340321 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f80387f-955e-4858-ad6b-fcfe3585e929-httpd-run\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.342310 master-0 kubenswrapper[33867]: I0219 03:41:48.340627 33867 reconciler_common.go:293] "Volume detached for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:48.370007 master-0 kubenswrapper[33867]: I0219 03:41:48.369924 33867 scope.go:117] "RemoveContainer" containerID="2563a7263a151820b358208de27903388222556115ee5cc370d1acb4f022dc27" Feb 19 03:41:48.425385 master-0 kubenswrapper[33867]: I0219 03:41:48.425186 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:41:48.445291 master-0 kubenswrapper[33867]: I0219 03:41:48.445034 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.445291 master-0 kubenswrapper[33867]: I0219 03:41:48.445141 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-combined-ca-bundle\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.445841 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f80387f-955e-4858-ad6b-fcfe3585e929-logs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.445973 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-internal-tls-certs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.446139 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-config-data\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.446351 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbkn8\" (UniqueName: \"kubernetes.io/projected/5f80387f-955e-4858-ad6b-fcfe3585e929-kube-api-access-cbkn8\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.446457 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-scripts\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.446746 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f80387f-955e-4858-ad6b-fcfe3585e929-logs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.447036 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f80387f-955e-4858-ad6b-fcfe3585e929-httpd-run\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.450613 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-scripts\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.451191 
33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.451220 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f18d4c35e8710889152413040b4d09f48db19ab30f1052671a3cdb6b7bd3618f/globalmount\"" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.452003 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-combined-ca-bundle\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.452376 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f80387f-955e-4858-ad6b-fcfe3585e929-httpd-run\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.455109 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-internal-tls-certs\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.455318 master-0 kubenswrapper[33867]: I0219 03:41:48.455200 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f80387f-955e-4858-ad6b-fcfe3585e929-config-data\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.490568 master-0 kubenswrapper[33867]: I0219 03:41:48.489524 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbkn8\" (UniqueName: \"kubernetes.io/projected/5f80387f-955e-4858-ad6b-fcfe3585e929-kube-api-access-cbkn8\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:48.543362 master-0 kubenswrapper[33867]: I0219 03:41:48.541419 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:41:48.566465 master-0 kubenswrapper[33867]: I0219 03:41:48.566048 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:41:48.569205 master-0 kubenswrapper[33867]: I0219 03:41:48.569149 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.571894 master-0 kubenswrapper[33867]: I0219 03:41:48.571705 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 03:41:48.571894 master-0 kubenswrapper[33867]: I0219 03:41:48.571725 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-fa7ca-default-external-config-data" Feb 19 03:41:48.591534 master-0 kubenswrapper[33867]: I0219 03:41:48.591357 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:41:48.659091 master-0 kubenswrapper[33867]: I0219 03:41:48.658510 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115b48b9-768e-4e24-ba50-2d47e507b21b-logs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.659091 master-0 kubenswrapper[33867]: I0219 03:41:48.658652 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-combined-ca-bundle\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.659091 master-0 kubenswrapper[33867]: I0219 03:41:48.658774 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fd7x\" (UniqueName: \"kubernetes.io/projected/115b48b9-768e-4e24-ba50-2d47e507b21b-kube-api-access-7fd7x\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.659091 master-0 kubenswrapper[33867]: I0219 03:41:48.658806 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-config-data\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.659091 master-0 kubenswrapper[33867]: I0219 03:41:48.658853 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-public-tls-certs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.659091 master-0 kubenswrapper[33867]: I0219 03:41:48.658886 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/115b48b9-768e-4e24-ba50-2d47e507b21b-httpd-run\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.659091 master-0 kubenswrapper[33867]: I0219 03:41:48.658912 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-scripts\") pod 
\"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.659091 master-0 kubenswrapper[33867]: I0219 03:41:48.659020 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.727558 master-0 kubenswrapper[33867]: I0219 03:41:48.727483 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:48.764410 master-0 kubenswrapper[33867]: I0219 03:41:48.763118 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fd7x\" (UniqueName: \"kubernetes.io/projected/115b48b9-768e-4e24-ba50-2d47e507b21b-kube-api-access-7fd7x\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.764410 master-0 kubenswrapper[33867]: I0219 03:41:48.763188 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-config-data\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.764410 master-0 kubenswrapper[33867]: I0219 03:41:48.763228 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-public-tls-certs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.764410 master-0 kubenswrapper[33867]: I0219 03:41:48.763280 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/115b48b9-768e-4e24-ba50-2d47e507b21b-httpd-run\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.764410 master-0 kubenswrapper[33867]: I0219 03:41:48.763303 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-scripts\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.764410 master-0 kubenswrapper[33867]: I0219 03:41:48.763374 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.764410 master-0 kubenswrapper[33867]: I0219 03:41:48.763484 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115b48b9-768e-4e24-ba50-2d47e507b21b-logs\") pod 
\"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.764410 master-0 kubenswrapper[33867]: I0219 03:41:48.763526 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-combined-ca-bundle\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.783503 master-0 kubenswrapper[33867]: I0219 03:41:48.770357 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115b48b9-768e-4e24-ba50-2d47e507b21b-logs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.783503 master-0 kubenswrapper[33867]: I0219 03:41:48.770360 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/115b48b9-768e-4e24-ba50-2d47e507b21b-httpd-run\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.783503 master-0 kubenswrapper[33867]: I0219 03:41:48.773845 33867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 03:41:48.783503 master-0 kubenswrapper[33867]: I0219 03:41:48.773902 33867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e63bed68a8422647d47a275f434bf5fb098e771165527c16915b4f4dc977b2c9/globalmount\"" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.783503 master-0 kubenswrapper[33867]: I0219 03:41:48.779074 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-config-data\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.783503 master-0 kubenswrapper[33867]: I0219 03:41:48.780493 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-scripts\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.797583 master-0 kubenswrapper[33867]: I0219 03:41:48.796404 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-combined-ca-bundle\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.806307 master-0 kubenswrapper[33867]: I0219 03:41:48.805359 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vv24r"] Feb 19 03:41:48.807922 
master-0 kubenswrapper[33867]: I0219 03:41:48.807830 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fd7x\" (UniqueName: \"kubernetes.io/projected/115b48b9-768e-4e24-ba50-2d47e507b21b-kube-api-access-7fd7x\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.809680 master-0 kubenswrapper[33867]: I0219 03:41:48.809622 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115b48b9-768e-4e24-ba50-2d47e507b21b-public-tls-certs\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:48.867585 master-0 kubenswrapper[33867]: I0219 03:41:48.867513 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-config\") pod \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " Feb 19 03:41:48.867850 master-0 kubenswrapper[33867]: I0219 03:41:48.867650 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-ovndb-tls-certs\") pod \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " Feb 19 03:41:48.867850 master-0 kubenswrapper[33867]: I0219 03:41:48.867746 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-combined-ca-bundle\") pod \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " Feb 19 03:41:48.867850 master-0 kubenswrapper[33867]: I0219 03:41:48.867816 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmhc9\" (UniqueName: \"kubernetes.io/projected/b23c38ff-0149-4b73-a4dd-f6aae99512d0-kube-api-access-fmhc9\") pod \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " Feb 19 03:41:48.868191 master-0 kubenswrapper[33867]: I0219 03:41:48.867924 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-httpd-config\") pod \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\" (UID: \"b23c38ff-0149-4b73-a4dd-f6aae99512d0\") " Feb 19 03:41:48.881170 master-0 kubenswrapper[33867]: I0219 03:41:48.879559 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1db7-account-create-update-kprcb"] Feb 19 03:41:48.884340 master-0 kubenswrapper[33867]: I0219 03:41:48.884206 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23c38ff-0149-4b73-a4dd-f6aae99512d0-kube-api-access-fmhc9" (OuterVolumeSpecName: "kube-api-access-fmhc9") pod "b23c38ff-0149-4b73-a4dd-f6aae99512d0" (UID: "b23c38ff-0149-4b73-a4dd-f6aae99512d0"). InnerVolumeSpecName "kube-api-access-fmhc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:48.890409 master-0 kubenswrapper[33867]: I0219 03:41:48.887423 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b23c38ff-0149-4b73-a4dd-f6aae99512d0" (UID: "b23c38ff-0149-4b73-a4dd-f6aae99512d0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:48.957952 master-0 kubenswrapper[33867]: W0219 03:41:48.954849 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92a62e19_1f19_49fd_b843_eafb8bc78662.slice/crio-9502046c7bf56281651b33b2c65300f03141d754a2621a24c60abbe1f6af5369 WatchSource:0}: Error finding container 9502046c7bf56281651b33b2c65300f03141d754a2621a24c60abbe1f6af5369: Status 404 returned error can't find the container with id 9502046c7bf56281651b33b2c65300f03141d754a2621a24c60abbe1f6af5369 Feb 19 03:41:48.984201 master-0 kubenswrapper[33867]: I0219 03:41:48.983882 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c70b7f1-846a-4be2-bdd1-9214e7e75866" path="/var/lib/kubelet/pods/8c70b7f1-846a-4be2-bdd1-9214e7e75866/volumes" Feb 19 03:41:48.986986 master-0 kubenswrapper[33867]: I0219 03:41:48.986925 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19a1327-29e6-4354-bf31-ce295f5d758f" path="/var/lib/kubelet/pods/b19a1327-29e6-4354-bf31-ce295f5d758f/volumes" Feb 19 03:41:48.993235 master-0 kubenswrapper[33867]: I0219 03:41:48.991533 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmhc9\" (UniqueName: \"kubernetes.io/projected/b23c38ff-0149-4b73-a4dd-f6aae99512d0-kube-api-access-fmhc9\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:48.993235 master-0 kubenswrapper[33867]: I0219 03:41:48.991630 33867 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-httpd-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:49.009630 master-0 kubenswrapper[33867]: I0219 03:41:49.009568 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b23c38ff-0149-4b73-a4dd-f6aae99512d0" (UID: "b23c38ff-0149-4b73-a4dd-f6aae99512d0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:49.022269 master-0 kubenswrapper[33867]: I0219 03:41:49.022193 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b23c38ff-0149-4b73-a4dd-f6aae99512d0" (UID: "b23c38ff-0149-4b73-a4dd-f6aae99512d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:49.022997 master-0 kubenswrapper[33867]: I0219 03:41:49.022919 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-config" (OuterVolumeSpecName: "config") pod "b23c38ff-0149-4b73-a4dd-f6aae99512d0" (UID: "b23c38ff-0149-4b73-a4dd-f6aae99512d0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:49.070890 master-0 kubenswrapper[33867]: I0219 03:41:49.070826 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-k2929"] Feb 19 03:41:49.070890 master-0 kubenswrapper[33867]: I0219 03:41:49.070883 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-360e-account-create-update-mwmgf"] Feb 19 03:41:49.070890 master-0 kubenswrapper[33867]: I0219 03:41:49.070897 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-74msg"] Feb 19 03:41:49.071173 master-0 kubenswrapper[33867]: I0219 03:41:49.070910 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b57897cc4-nd9ff"] Feb 19 03:41:49.071173 master-0 kubenswrapper[33867]: I0219 03:41:49.070923 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ab43-account-create-update-jwqxb"] Feb 19 03:41:49.071173 master-0 kubenswrapper[33867]: I0219 03:41:49.070933 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054a4-api-0"] Feb 19 03:41:49.087903 master-0 kubenswrapper[33867]: I0219 03:41:49.087799 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-74msg" event={"ID":"765534b3-48eb-4db3-9413-fbe831f2bf9f","Type":"ContainerStarted","Data":"25b11dbb4ea9e58a7278c508e5ef5976eaabd680235dc7f8b8501e25a4698034"} Feb 19 03:41:49.089131 master-0 kubenswrapper[33867]: I0219 03:41:49.089091 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-k2929" event={"ID":"2afeaeae-53cb-4753-8240-ed7c0a892395","Type":"ContainerStarted","Data":"5276da5488497f630180117ff5f12f59bee3b1d35c1f313ce98442a345fa29b1"} Feb 19 03:41:49.093279 master-0 kubenswrapper[33867]: I0219 03:41:49.093190 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vv24r" event={"ID":"4e338259-396c-42e3-9a9d-235ec62fb521","Type":"ContainerStarted","Data":"93f83413366c1039f55ea99f594d1e549fbd73145017814b4ed13643845e69fd"} Feb 19 03:41:49.096788 master-0 kubenswrapper[33867]: I0219 03:41:49.096700 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:49.096935 master-0 kubenswrapper[33867]: I0219 03:41:49.096791 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:49.096935 master-0 kubenswrapper[33867]: I0219 03:41:49.096810 33867 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b23c38ff-0149-4b73-a4dd-f6aae99512d0-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:49.108989 master-0 kubenswrapper[33867]: I0219 03:41:49.108784 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" event={"ID":"92a62e19-1f19-49fd-b843-eafb8bc78662","Type":"ContainerStarted","Data":"9502046c7bf56281651b33b2c65300f03141d754a2621a24c60abbe1f6af5369"} Feb 19 03:41:49.116506 master-0 kubenswrapper[33867]: I0219 03:41:49.116354 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-api-0" 
event={"ID":"da327fb4-7852-4866-bb8f-8b2930854e24","Type":"ContainerStarted","Data":"63305a06c5c71b7499093927ea463392f4a6a34bc33f3906c6e2bc06512b68ae"} Feb 19 03:41:49.121615 master-0 kubenswrapper[33867]: I0219 03:41:49.121553 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1db7-account-create-update-kprcb" event={"ID":"6ed5cbcb-0a9e-4561-b21e-0c84b806e725","Type":"ContainerStarted","Data":"fd4ea2ce7e017c73fd935e42c9bd8a522c021c01136234621865775c60ecaeb8"} Feb 19 03:41:49.126179 master-0 kubenswrapper[33867]: I0219 03:41:49.124130 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b57897cc4-nd9ff" event={"ID":"810cab61-d654-4926-a83f-51af67acafd0","Type":"ContainerStarted","Data":"3fa01d25c2c2cf7db82382d098101c5baa21f4f545ed23dcfadeba2e6f3bb046"} Feb 19 03:41:49.126582 master-0 kubenswrapper[33867]: I0219 03:41:49.126524 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-360e-account-create-update-mwmgf" event={"ID":"de8fffe4-e342-4016-a543-c65edd216c52","Type":"ContainerStarted","Data":"935ca27dbbf42fcfd2ab344e435edfa45f39abc21eaadcba93aa2289ff861f26"} Feb 19 03:41:49.130550 master-0 kubenswrapper[33867]: I0219 03:41:49.130405 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8bf57b44-qh2fj" event={"ID":"b23c38ff-0149-4b73-a4dd-f6aae99512d0","Type":"ContainerDied","Data":"2b03a2337cbb270548325068ce0823a9dd6ac89d2a86526ce6e114e6df4054c6"} Feb 19 03:41:49.130550 master-0 kubenswrapper[33867]: I0219 03:41:49.130457 33867 scope.go:117] "RemoveContainer" containerID="a14fd526c0f0bc6abd26f9706021df407bb2614e997ea965690fdeaef153bf7d" Feb 19 03:41:49.130757 master-0 kubenswrapper[33867]: I0219 03:41:49.130604 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8bf57b44-qh2fj" Feb 19 03:41:49.242277 master-0 kubenswrapper[33867]: I0219 03:41:49.242196 33867 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod87010165-a8cc-43e1-b9b6-af44f39f0c46"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod87010165-a8cc-43e1-b9b6-af44f39f0c46] : Timed out while waiting for systemd to remove kubepods-besteffort-pod87010165_a8cc_43e1_b9b6_af44f39f0c46.slice" Feb 19 03:41:49.255268 master-0 kubenswrapper[33867]: I0219 03:41:49.255189 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8bf57b44-qh2fj"] Feb 19 03:41:49.293555 master-0 kubenswrapper[33867]: I0219 03:41:49.293130 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8bf57b44-qh2fj"] Feb 19 03:41:49.376464 master-0 kubenswrapper[33867]: I0219 03:41:49.374789 33867 scope.go:117] "RemoveContainer" containerID="b1a122d0f945bf5254ddc70fbcf28ed8ce928b8999ecb30e5f20bd8a2a10bc62" Feb 19 03:41:49.734394 master-0 kubenswrapper[33867]: I0219 03:41:49.734316 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b32481ce-ab7f-4b48-ba0c-f08c7bdb5b52\") pod \"glance-fa7ca-default-internal-api-0\" (UID: \"5f80387f-955e-4858-ad6b-fcfe3585e929\") " pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:49.819551 master-0 kubenswrapper[33867]: I0219 03:41:49.818794 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:49.958285 master-0 kubenswrapper[33867]: I0219 03:41:49.958198 33867 scope.go:117] "RemoveContainer" containerID="30e201eb6e611edd56a5696c6d0d894ac5ea380b8b60ea977450fa7b7c0e36b5" Feb 19 03:41:50.160757 master-0 kubenswrapper[33867]: I0219 03:41:50.160008 33867 generic.go:334] "Generic (PLEG): container finished" podID="765534b3-48eb-4db3-9413-fbe831f2bf9f" containerID="5371ab098b84ec475c9fadb1fa5f73ece91d9af7e61bfb520da553bd8c87c722" exitCode=0 Feb 19 03:41:50.161015 master-0 kubenswrapper[33867]: I0219 03:41:50.160123 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-74msg" event={"ID":"765534b3-48eb-4db3-9413-fbe831f2bf9f","Type":"ContainerDied","Data":"5371ab098b84ec475c9fadb1fa5f73ece91d9af7e61bfb520da553bd8c87c722"} Feb 19 03:41:50.163217 master-0 kubenswrapper[33867]: I0219 03:41:50.163159 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-360e-account-create-update-mwmgf" event={"ID":"de8fffe4-e342-4016-a543-c65edd216c52","Type":"ContainerStarted","Data":"5aeebf88311f5d83c0bf9a90159f062702907d4c92ba9d7d4538b4971994e9bb"} Feb 19 03:41:50.175592 master-0 kubenswrapper[33867]: I0219 03:41:50.170480 33867 generic.go:334] "Generic (PLEG): container finished" podID="2afeaeae-53cb-4753-8240-ed7c0a892395" containerID="0000f4af2d1f1eb5dcf02bd517f26339cea715a82ade4523994de3a86922c3fe" exitCode=0 Feb 19 03:41:50.175592 master-0 kubenswrapper[33867]: I0219 03:41:50.170563 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-k2929" event={"ID":"2afeaeae-53cb-4753-8240-ed7c0a892395","Type":"ContainerDied","Data":"0000f4af2d1f1eb5dcf02bd517f26339cea715a82ade4523994de3a86922c3fe"} Feb 19 03:41:50.177422 master-0 kubenswrapper[33867]: I0219 03:41:50.177339 33867 generic.go:334] "Generic (PLEG): container finished" podID="4e338259-396c-42e3-9a9d-235ec62fb521" containerID="90ed69c5c72dcbda55a591257555aed331ced0e416d17dac84d097dc8e15aaed" exitCode=0 Feb 19 03:41:50.177422 master-0 kubenswrapper[33867]: I0219 03:41:50.177419 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vv24r" event={"ID":"4e338259-396c-42e3-9a9d-235ec62fb521","Type":"ContainerDied","Data":"90ed69c5c72dcbda55a591257555aed331ced0e416d17dac84d097dc8e15aaed"} Feb 19 03:41:50.183174 master-0 kubenswrapper[33867]: I0219 03:41:50.181450 33867 generic.go:334] "Generic (PLEG): container finished" podID="92a62e19-1f19-49fd-b843-eafb8bc78662" containerID="7205fa93093ff42d5c1fb033abaea407b8d51f70447e25e46434afc0b7cd08fa" exitCode=0 Feb 19 03:41:50.183174 master-0 kubenswrapper[33867]: I0219 03:41:50.181547 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" event={"ID":"92a62e19-1f19-49fd-b843-eafb8bc78662","Type":"ContainerDied","Data":"7205fa93093ff42d5c1fb033abaea407b8d51f70447e25e46434afc0b7cd08fa"} Feb 19 03:41:50.185118 master-0 kubenswrapper[33867]: I0219 03:41:50.185017 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1db7-account-create-update-kprcb" event={"ID":"6ed5cbcb-0a9e-4561-b21e-0c84b806e725","Type":"ContainerStarted","Data":"30d326181de156200152b9cb491c5899ec1eafd983b5686bc1f94e01869f0def"} Feb 19 03:41:50.194508 master-0 kubenswrapper[33867]: I0219 03:41:50.194443 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b57897cc4-nd9ff" 
event={"ID":"810cab61-d654-4926-a83f-51af67acafd0","Type":"ContainerStarted","Data":"4531bb202bfd41b107f9e0a8ba9cbf05a6329fc3e8ac74ec4cd40f6abe53bb88"} Feb 19 03:41:50.311742 master-0 kubenswrapper[33867]: I0219 03:41:50.311466 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-360e-account-create-update-mwmgf" podStartSLOduration=5.311433707 podStartE2EDuration="5.311433707s" podCreationTimestamp="2026-02-19 03:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:50.276064895 +0000 UTC m=+1115.572735506" watchObservedRunningTime="2026-02-19 03:41:50.311433707 +0000 UTC m=+1115.608104318" Feb 19 03:41:50.320412 master-0 kubenswrapper[33867]: I0219 03:41:50.320237 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-1db7-account-create-update-kprcb" podStartSLOduration=6.320215135 podStartE2EDuration="6.320215135s" podCreationTimestamp="2026-02-19 03:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:50.293092987 +0000 UTC m=+1115.589763598" watchObservedRunningTime="2026-02-19 03:41:50.320215135 +0000 UTC m=+1115.616885746" Feb 19 03:41:50.516461 master-0 kubenswrapper[33867]: I0219 03:41:50.516386 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fa7ca-default-internal-api-0"] Feb 19 03:41:50.529536 master-0 kubenswrapper[33867]: W0219 03:41:50.529432 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f80387f_955e_4858_ad6b_fcfe3585e929.slice/crio-083163c581491fc5097717531ffbc4c097d4a45e6443414055a9bd820babe40c WatchSource:0}: Error finding container 083163c581491fc5097717531ffbc4c097d4a45e6443414055a9bd820babe40c: Status 404 returned error can't find the container with id 083163c581491fc5097717531ffbc4c097d4a45e6443414055a9bd820babe40c Feb 19 03:41:50.975940 master-0 kubenswrapper[33867]: I0219 03:41:50.975850 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23c38ff-0149-4b73-a4dd-f6aae99512d0" path="/var/lib/kubelet/pods/b23c38ff-0149-4b73-a4dd-f6aae99512d0/volumes" Feb 19 03:41:51.006657 master-0 kubenswrapper[33867]: I0219 03:41:51.006505 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9b4cd943-1f61-4b27-8790-991add37bfec\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ecd78c-ac74-4718-8247-3c39e15bb140\") pod \"glance-fa7ca-default-external-api-0\" (UID: \"115b48b9-768e-4e24-ba50-2d47e507b21b\") " pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:51.021842 master-0 kubenswrapper[33867]: I0219 03:41:51.021787 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:41:51.135776 master-0 kubenswrapper[33867]: I0219 03:41:51.135706 33867 trace.go:236] Trace[116084420]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (19-Feb-2026 03:41:49.328) (total time: 1806ms): Feb 19 03:41:51.135776 master-0 kubenswrapper[33867]: Trace[116084420]: [1.806907409s] [1.806907409s] END Feb 19 03:41:51.252101 master-0 kubenswrapper[33867]: I0219 03:41:51.251978 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-internal-api-0" event={"ID":"5f80387f-955e-4858-ad6b-fcfe3585e929","Type":"ContainerStarted","Data":"083163c581491fc5097717531ffbc4c097d4a45e6443414055a9bd820babe40c"} Feb 19 03:41:51.256294 master-0 kubenswrapper[33867]: I0219 03:41:51.255478 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" event={"ID":"6a7f405f-ed33-4311-84a9-6aaf1fd4dadb","Type":"ContainerStarted","Data":"2a5419feb020e7ec1fa42412aa94501e86e79bc56a442975df18b137e01cc786"} Feb 19 03:41:51.257621 master-0 kubenswrapper[33867]: I0219 03:41:51.256864 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:51.266940 master-0 kubenswrapper[33867]: I0219 03:41:51.262523 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-api-0" event={"ID":"da327fb4-7852-4866-bb8f-8b2930854e24","Type":"ContainerStarted","Data":"0bac161fd6ee64324498e34b2d30a69751d9d25044d5e711e5de98051847d41e"} Feb 19 03:41:51.269107 master-0 kubenswrapper[33867]: I0219 03:41:51.269021 33867 generic.go:334] "Generic (PLEG): container finished" podID="6ed5cbcb-0a9e-4561-b21e-0c84b806e725" containerID="30d326181de156200152b9cb491c5899ec1eafd983b5686bc1f94e01869f0def" exitCode=0 Feb 19 03:41:51.270319 master-0 kubenswrapper[33867]: I0219 03:41:51.269401 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1db7-account-create-update-kprcb" event={"ID":"6ed5cbcb-0a9e-4561-b21e-0c84b806e725","Type":"ContainerDied","Data":"30d326181de156200152b9cb491c5899ec1eafd983b5686bc1f94e01869f0def"} Feb 19 03:41:51.282087 master-0 kubenswrapper[33867]: I0219 03:41:51.282001 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b57897cc4-nd9ff" event={"ID":"810cab61-d654-4926-a83f-51af67acafd0","Type":"ContainerStarted","Data":"8b25b250e80f515f2ae78492ad28d94773f0942171f0d502b8a4a8e079f37d9f"} Feb 19 03:41:51.283965 master-0 kubenswrapper[33867]: I0219 03:41:51.283930 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:51.284024 master-0 kubenswrapper[33867]: I0219 03:41:51.283977 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:51.287801 master-0 kubenswrapper[33867]: I0219 03:41:51.287741 33867 generic.go:334] "Generic (PLEG): container finished" podID="c772151f-fa4c-44ae-8d31-3e53872c20e7" containerID="abaee50973a80a362a798731ce0802ec29104a410488ef8a45f9ffbf5fbb5e0d" exitCode=0 Feb 19 03:41:51.287887 master-0 kubenswrapper[33867]: I0219 03:41:51.287834 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-nrrkp" event={"ID":"c772151f-fa4c-44ae-8d31-3e53872c20e7","Type":"ContainerDied","Data":"abaee50973a80a362a798731ce0802ec29104a410488ef8a45f9ffbf5fbb5e0d"} Feb 19 03:41:51.291583 
master-0 kubenswrapper[33867]: I0219 03:41:51.291531 33867 generic.go:334] "Generic (PLEG): container finished" podID="de8fffe4-e342-4016-a543-c65edd216c52" containerID="5aeebf88311f5d83c0bf9a90159f062702907d4c92ba9d7d4538b4971994e9bb" exitCode=0 Feb 19 03:41:51.291644 master-0 kubenswrapper[33867]: I0219 03:41:51.291594 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-360e-account-create-update-mwmgf" event={"ID":"de8fffe4-e342-4016-a543-c65edd216c52","Type":"ContainerDied","Data":"5aeebf88311f5d83c0bf9a90159f062702907d4c92ba9d7d4538b4971994e9bb"} Feb 19 03:41:51.417500 master-0 kubenswrapper[33867]: I0219 03:41:51.417423 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6b57897cc4-nd9ff" podStartSLOduration=12.417405697 podStartE2EDuration="12.417405697s" podCreationTimestamp="2026-02-19 03:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:51.350294486 +0000 UTC m=+1116.646965097" watchObservedRunningTime="2026-02-19 03:41:51.417405697 +0000 UTC m=+1116.714076298" Feb 19 03:41:51.778368 master-0 kubenswrapper[33867]: I0219 03:41:51.772907 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fa7ca-default-external-api-0"] Feb 19 03:41:52.013896 master-0 kubenswrapper[33867]: I0219 03:41:52.013430 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-k2929" Feb 19 03:41:52.150888 master-0 kubenswrapper[33867]: I0219 03:41:52.150805 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5bww\" (UniqueName: \"kubernetes.io/projected/2afeaeae-53cb-4753-8240-ed7c0a892395-kube-api-access-k5bww\") pod \"2afeaeae-53cb-4753-8240-ed7c0a892395\" (UID: \"2afeaeae-53cb-4753-8240-ed7c0a892395\") " Feb 19 03:41:52.150888 master-0 kubenswrapper[33867]: I0219 03:41:52.150874 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2afeaeae-53cb-4753-8240-ed7c0a892395-operator-scripts\") pod \"2afeaeae-53cb-4753-8240-ed7c0a892395\" (UID: \"2afeaeae-53cb-4753-8240-ed7c0a892395\") " Feb 19 03:41:52.152105 master-0 kubenswrapper[33867]: I0219 03:41:52.152077 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2afeaeae-53cb-4753-8240-ed7c0a892395-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2afeaeae-53cb-4753-8240-ed7c0a892395" (UID: "2afeaeae-53cb-4753-8240-ed7c0a892395"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:52.161747 master-0 kubenswrapper[33867]: I0219 03:41:52.161668 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2afeaeae-53cb-4753-8240-ed7c0a892395-kube-api-access-k5bww" (OuterVolumeSpecName: "kube-api-access-k5bww") pod "2afeaeae-53cb-4753-8240-ed7c0a892395" (UID: "2afeaeae-53cb-4753-8240-ed7c0a892395"). InnerVolumeSpecName "kube-api-access-k5bww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:52.177827 master-0 kubenswrapper[33867]: I0219 03:41:52.175522 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" Feb 19 03:41:52.246654 master-0 kubenswrapper[33867]: I0219 03:41:52.246587 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vv24r" Feb 19 03:41:52.250280 master-0 kubenswrapper[33867]: I0219 03:41:52.250103 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-74msg" Feb 19 03:41:52.252984 master-0 kubenswrapper[33867]: I0219 03:41:52.252919 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92a62e19-1f19-49fd-b843-eafb8bc78662-operator-scripts\") pod \"92a62e19-1f19-49fd-b843-eafb8bc78662\" (UID: \"92a62e19-1f19-49fd-b843-eafb8bc78662\") " Feb 19 03:41:52.253695 master-0 kubenswrapper[33867]: I0219 03:41:52.253647 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92a62e19-1f19-49fd-b843-eafb8bc78662-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92a62e19-1f19-49fd-b843-eafb8bc78662" (UID: "92a62e19-1f19-49fd-b843-eafb8bc78662"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:52.260125 master-0 kubenswrapper[33867]: I0219 03:41:52.258753 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbvpq\" (UniqueName: \"kubernetes.io/projected/92a62e19-1f19-49fd-b843-eafb8bc78662-kube-api-access-nbvpq\") pod \"92a62e19-1f19-49fd-b843-eafb8bc78662\" (UID: \"92a62e19-1f19-49fd-b843-eafb8bc78662\") " Feb 19 03:41:52.263153 master-0 kubenswrapper[33867]: I0219 03:41:52.262987 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a62e19-1f19-49fd-b843-eafb8bc78662-kube-api-access-nbvpq" (OuterVolumeSpecName: "kube-api-access-nbvpq") pod "92a62e19-1f19-49fd-b843-eafb8bc78662" (UID: "92a62e19-1f19-49fd-b843-eafb8bc78662"). InnerVolumeSpecName "kube-api-access-nbvpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:52.279452 master-0 kubenswrapper[33867]: I0219 03:41:52.278920 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92a62e19-1f19-49fd-b843-eafb8bc78662-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:52.279452 master-0 kubenswrapper[33867]: I0219 03:41:52.278989 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbvpq\" (UniqueName: \"kubernetes.io/projected/92a62e19-1f19-49fd-b843-eafb8bc78662-kube-api-access-nbvpq\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:52.279452 master-0 kubenswrapper[33867]: I0219 03:41:52.279009 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5bww\" (UniqueName: \"kubernetes.io/projected/2afeaeae-53cb-4753-8240-ed7c0a892395-kube-api-access-k5bww\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:52.279452 master-0 kubenswrapper[33867]: I0219 03:41:52.279021 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2afeaeae-53cb-4753-8240-ed7c0a892395-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:52.373405 master-0 kubenswrapper[33867]: I0219 03:41:52.372878 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054a4-api-0" event={"ID":"da327fb4-7852-4866-bb8f-8b2930854e24","Type":"ContainerStarted","Data":"4bf737523e18abadbca684fe88e4a7b41b04d2b986a0d48637b0d14634d4f5a4"} Feb 19 03:41:52.373405 master-0 kubenswrapper[33867]: I0219 03:41:52.373024 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-054a4-api-0" Feb 19 03:41:52.379306 master-0 kubenswrapper[33867]: I0219 03:41:52.378794 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-74msg" event={"ID":"765534b3-48eb-4db3-9413-fbe831f2bf9f","Type":"ContainerDied","Data":"25b11dbb4ea9e58a7278c508e5ef5976eaabd680235dc7f8b8501e25a4698034"} Feb 19 03:41:52.379306 master-0 kubenswrapper[33867]: I0219 03:41:52.378883 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25b11dbb4ea9e58a7278c508e5ef5976eaabd680235dc7f8b8501e25a4698034" Feb 19 03:41:52.379306 master-0 kubenswrapper[33867]: I0219 03:41:52.378831 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-74msg" Feb 19 03:41:52.381594 master-0 kubenswrapper[33867]: I0219 03:41:52.379984 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l74vj\" (UniqueName: \"kubernetes.io/projected/4e338259-396c-42e3-9a9d-235ec62fb521-kube-api-access-l74vj\") pod \"4e338259-396c-42e3-9a9d-235ec62fb521\" (UID: \"4e338259-396c-42e3-9a9d-235ec62fb521\") " Feb 19 03:41:52.381594 master-0 kubenswrapper[33867]: I0219 03:41:52.380141 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djcv7\" (UniqueName: \"kubernetes.io/projected/765534b3-48eb-4db3-9413-fbe831f2bf9f-kube-api-access-djcv7\") pod \"765534b3-48eb-4db3-9413-fbe831f2bf9f\" (UID: \"765534b3-48eb-4db3-9413-fbe831f2bf9f\") " Feb 19 03:41:52.381594 master-0 kubenswrapper[33867]: I0219 03:41:52.380362 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/765534b3-48eb-4db3-9413-fbe831f2bf9f-operator-scripts\") pod \"765534b3-48eb-4db3-9413-fbe831f2bf9f\" (UID: \"765534b3-48eb-4db3-9413-fbe831f2bf9f\") " Feb 19 03:41:52.381594 master-0 kubenswrapper[33867]: I0219 03:41:52.380822 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e338259-396c-42e3-9a9d-235ec62fb521-operator-scripts\") pod \"4e338259-396c-42e3-9a9d-235ec62fb521\" (UID: \"4e338259-396c-42e3-9a9d-235ec62fb521\") " Feb 19 03:41:52.381594 master-0 kubenswrapper[33867]: I0219 03:41:52.381116 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/765534b3-48eb-4db3-9413-fbe831f2bf9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "765534b3-48eb-4db3-9413-fbe831f2bf9f" (UID: "765534b3-48eb-4db3-9413-fbe831f2bf9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:52.381594 master-0 kubenswrapper[33867]: I0219 03:41:52.381519 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e338259-396c-42e3-9a9d-235ec62fb521-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e338259-396c-42e3-9a9d-235ec62fb521" (UID: "4e338259-396c-42e3-9a9d-235ec62fb521"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:52.382937 master-0 kubenswrapper[33867]: I0219 03:41:52.382894 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e338259-396c-42e3-9a9d-235ec62fb521-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:52.382998 master-0 kubenswrapper[33867]: I0219 03:41:52.382939 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/765534b3-48eb-4db3-9413-fbe831f2bf9f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:52.383034 master-0 kubenswrapper[33867]: I0219 03:41:52.383017 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-k2929" Feb 19 03:41:52.383112 master-0 kubenswrapper[33867]: I0219 03:41:52.383065 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-k2929" event={"ID":"2afeaeae-53cb-4753-8240-ed7c0a892395","Type":"ContainerDied","Data":"5276da5488497f630180117ff5f12f59bee3b1d35c1f313ce98442a345fa29b1"} Feb 19 03:41:52.383176 master-0 kubenswrapper[33867]: I0219 03:41:52.383125 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5276da5488497f630180117ff5f12f59bee3b1d35c1f313ce98442a345fa29b1" Feb 19 03:41:52.384210 master-0 kubenswrapper[33867]: I0219 03:41:52.384125 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765534b3-48eb-4db3-9413-fbe831f2bf9f-kube-api-access-djcv7" (OuterVolumeSpecName: "kube-api-access-djcv7") pod "765534b3-48eb-4db3-9413-fbe831f2bf9f" (UID: "765534b3-48eb-4db3-9413-fbe831f2bf9f"). InnerVolumeSpecName "kube-api-access-djcv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:52.385030 master-0 kubenswrapper[33867]: I0219 03:41:52.384972 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e338259-396c-42e3-9a9d-235ec62fb521-kube-api-access-l74vj" (OuterVolumeSpecName: "kube-api-access-l74vj") pod "4e338259-396c-42e3-9a9d-235ec62fb521" (UID: "4e338259-396c-42e3-9a9d-235ec62fb521"). InnerVolumeSpecName "kube-api-access-l74vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:52.392411 master-0 kubenswrapper[33867]: I0219 03:41:52.392347 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-internal-api-0" event={"ID":"5f80387f-955e-4858-ad6b-fcfe3585e929","Type":"ContainerStarted","Data":"b3282a4739f32db244c6a222e0587cf838ba1695999cc6db552b6bf3b154a48d"} Feb 19 03:41:52.404016 master-0 kubenswrapper[33867]: I0219 03:41:52.398810 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vv24r" event={"ID":"4e338259-396c-42e3-9a9d-235ec62fb521","Type":"ContainerDied","Data":"93f83413366c1039f55ea99f594d1e549fbd73145017814b4ed13643845e69fd"} Feb 19 03:41:52.404016 master-0 kubenswrapper[33867]: I0219 03:41:52.398886 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f83413366c1039f55ea99f594d1e549fbd73145017814b4ed13643845e69fd" Feb 19 03:41:52.404016 master-0 kubenswrapper[33867]: I0219 03:41:52.398974 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vv24r" Feb 19 03:41:52.410865 master-0 kubenswrapper[33867]: I0219 03:41:52.406373 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-external-api-0" event={"ID":"115b48b9-768e-4e24-ba50-2d47e507b21b","Type":"ContainerStarted","Data":"663e4d59905529f0a82a85a9065a55f825fa0e1647a2f7a29958de377a93ef49"} Feb 19 03:41:52.415393 master-0 kubenswrapper[33867]: I0219 03:41:52.414462 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" Feb 19 03:41:52.418594 master-0 kubenswrapper[33867]: I0219 03:41:52.418515 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ab43-account-create-update-jwqxb" event={"ID":"92a62e19-1f19-49fd-b843-eafb8bc78662","Type":"ContainerDied","Data":"9502046c7bf56281651b33b2c65300f03141d754a2621a24c60abbe1f6af5369"} Feb 19 03:41:52.418594 master-0 kubenswrapper[33867]: I0219 03:41:52.418594 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9502046c7bf56281651b33b2c65300f03141d754a2621a24c60abbe1f6af5369" Feb 19 03:41:52.419277 master-0 kubenswrapper[33867]: I0219 03:41:52.418995 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-054a4-api-0" podStartSLOduration=13.418974579 podStartE2EDuration="13.418974579s" podCreationTimestamp="2026-02-19 03:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:52.393516318 +0000 UTC m=+1117.690186929" watchObservedRunningTime="2026-02-19 03:41:52.418974579 +0000 UTC m=+1117.715645190" Feb 19 03:41:52.452629 master-0 kubenswrapper[33867]: I0219 03:41:52.452469 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-fa7ca-default-internal-api-0" podStartSLOduration=4.452438646 podStartE2EDuration="4.452438646s" podCreationTimestamp="2026-02-19 03:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:52.433115359 +0000 UTC m=+1117.729785990" watchObservedRunningTime="2026-02-19 03:41:52.452438646 +0000 UTC m=+1117.749109257" Feb 19 03:41:52.489413 master-0 kubenswrapper[33867]: I0219 03:41:52.488622 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l74vj\" (UniqueName: \"kubernetes.io/projected/4e338259-396c-42e3-9a9d-235ec62fb521-kube-api-access-l74vj\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:52.489413 master-0 kubenswrapper[33867]: I0219 03:41:52.488679 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djcv7\" (UniqueName: \"kubernetes.io/projected/765534b3-48eb-4db3-9413-fbe831f2bf9f-kube-api-access-djcv7\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.002642 master-0 kubenswrapper[33867]: I0219 03:41:53.002580 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:53.134949 master-0 kubenswrapper[33867]: I0219 03:41:53.129228 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-combined-ca-bundle\") pod \"c772151f-fa4c-44ae-8d31-3e53872c20e7\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " Feb 19 03:41:53.134949 master-0 kubenswrapper[33867]: I0219 03:41:53.129361 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic\") pod \"c772151f-fa4c-44ae-8d31-3e53872c20e7\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " Feb 19 03:41:53.134949 master-0 kubenswrapper[33867]: I0219 03:41:53.129407 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-scripts\") pod \"c772151f-fa4c-44ae-8d31-3e53872c20e7\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " Feb 19 03:41:53.134949 master-0 kubenswrapper[33867]: I0219 03:41:53.129439 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nccdg\" (UniqueName: \"kubernetes.io/projected/c772151f-fa4c-44ae-8d31-3e53872c20e7-kube-api-access-nccdg\") pod \"c772151f-fa4c-44ae-8d31-3e53872c20e7\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " Feb 19 03:41:53.134949 master-0 kubenswrapper[33867]: I0219 03:41:53.129468 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"c772151f-fa4c-44ae-8d31-3e53872c20e7\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " Feb 19 03:41:53.134949 master-0 kubenswrapper[33867]: I0219 03:41:53.129495 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-config\") pod \"c772151f-fa4c-44ae-8d31-3e53872c20e7\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " Feb 19 03:41:53.134949 master-0 kubenswrapper[33867]: I0219 03:41:53.129649 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c772151f-fa4c-44ae-8d31-3e53872c20e7-etc-podinfo\") pod \"c772151f-fa4c-44ae-8d31-3e53872c20e7\" (UID: \"c772151f-fa4c-44ae-8d31-3e53872c20e7\") " Feb 19 03:41:53.134949 master-0 kubenswrapper[33867]: I0219 03:41:53.131303 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "c772151f-fa4c-44ae-8d31-3e53872c20e7" (UID: "c772151f-fa4c-44ae-8d31-3e53872c20e7"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:41:53.135716 master-0 kubenswrapper[33867]: I0219 03:41:53.135394 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "c772151f-fa4c-44ae-8d31-3e53872c20e7" (UID: "c772151f-fa4c-44ae-8d31-3e53872c20e7"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:41:53.138408 master-0 kubenswrapper[33867]: I0219 03:41:53.138278 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c772151f-fa4c-44ae-8d31-3e53872c20e7-kube-api-access-nccdg" (OuterVolumeSpecName: "kube-api-access-nccdg") pod "c772151f-fa4c-44ae-8d31-3e53872c20e7" (UID: "c772151f-fa4c-44ae-8d31-3e53872c20e7"). InnerVolumeSpecName "kube-api-access-nccdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:53.141014 master-0 kubenswrapper[33867]: I0219 03:41:53.140950 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c772151f-fa4c-44ae-8d31-3e53872c20e7-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "c772151f-fa4c-44ae-8d31-3e53872c20e7" (UID: "c772151f-fa4c-44ae-8d31-3e53872c20e7"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 03:41:53.141542 master-0 kubenswrapper[33867]: I0219 03:41:53.141500 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-scripts" (OuterVolumeSpecName: "scripts") pod "c772151f-fa4c-44ae-8d31-3e53872c20e7" (UID: "c772151f-fa4c-44ae-8d31-3e53872c20e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:53.169681 master-0 kubenswrapper[33867]: I0219 03:41:53.169566 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-360e-account-create-update-mwmgf" Feb 19 03:41:53.201737 master-0 kubenswrapper[33867]: I0219 03:41:53.200400 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-config" (OuterVolumeSpecName: "config") pod "c772151f-fa4c-44ae-8d31-3e53872c20e7" (UID: "c772151f-fa4c-44ae-8d31-3e53872c20e7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:53.240282 master-0 kubenswrapper[33867]: I0219 03:41:53.233018 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2ndf\" (UniqueName: \"kubernetes.io/projected/de8fffe4-e342-4016-a543-c65edd216c52-kube-api-access-x2ndf\") pod \"de8fffe4-e342-4016-a543-c65edd216c52\" (UID: \"de8fffe4-e342-4016-a543-c65edd216c52\") " Feb 19 03:41:53.240282 master-0 kubenswrapper[33867]: I0219 03:41:53.233135 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8fffe4-e342-4016-a543-c65edd216c52-operator-scripts\") pod \"de8fffe4-e342-4016-a543-c65edd216c52\" (UID: \"de8fffe4-e342-4016-a543-c65edd216c52\") " Feb 19 03:41:53.240282 master-0 kubenswrapper[33867]: I0219 03:41:53.233673 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c772151f-fa4c-44ae-8d31-3e53872c20e7" (UID: "c772151f-fa4c-44ae-8d31-3e53872c20e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:41:53.240282 master-0 kubenswrapper[33867]: I0219 03:41:53.234428 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.240282 master-0 kubenswrapper[33867]: I0219 03:41:53.234444 33867 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c772151f-fa4c-44ae-8d31-3e53872c20e7-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.240282 master-0 kubenswrapper[33867]: I0219 03:41:53.234454 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.240282 master-0 kubenswrapper[33867]: I0219 03:41:53.234463 33867 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.240282 master-0 kubenswrapper[33867]: I0219 03:41:53.234471 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c772151f-fa4c-44ae-8d31-3e53872c20e7-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.240282 master-0 kubenswrapper[33867]: I0219 03:41:53.234483 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nccdg\" (UniqueName: \"kubernetes.io/projected/c772151f-fa4c-44ae-8d31-3e53872c20e7-kube-api-access-nccdg\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.240282 master-0 kubenswrapper[33867]: I0219 03:41:53.234492 33867 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c772151f-fa4c-44ae-8d31-3e53872c20e7-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.240282 master-0 kubenswrapper[33867]: I0219 03:41:53.235007 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de8fffe4-e342-4016-a543-c65edd216c52-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "de8fffe4-e342-4016-a543-c65edd216c52" (UID: "de8fffe4-e342-4016-a543-c65edd216c52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:53.268283 master-0 kubenswrapper[33867]: I0219 03:41:53.261739 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1db7-account-create-update-kprcb" Feb 19 03:41:53.297359 master-0 kubenswrapper[33867]: I0219 03:41:53.296597 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de8fffe4-e342-4016-a543-c65edd216c52-kube-api-access-x2ndf" (OuterVolumeSpecName: "kube-api-access-x2ndf") pod "de8fffe4-e342-4016-a543-c65edd216c52" (UID: "de8fffe4-e342-4016-a543-c65edd216c52"). InnerVolumeSpecName "kube-api-access-x2ndf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:53.336556 master-0 kubenswrapper[33867]: I0219 03:41:53.336496 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-operator-scripts\") pod \"6ed5cbcb-0a9e-4561-b21e-0c84b806e725\" (UID: \"6ed5cbcb-0a9e-4561-b21e-0c84b806e725\") " Feb 19 03:41:53.336919 master-0 kubenswrapper[33867]: I0219 03:41:53.336894 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzgms\" (UniqueName: \"kubernetes.io/projected/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-kube-api-access-pzgms\") pod \"6ed5cbcb-0a9e-4561-b21e-0c84b806e725\" (UID: \"6ed5cbcb-0a9e-4561-b21e-0c84b806e725\") " Feb 19 03:41:53.337483 master-0 kubenswrapper[33867]: I0219 03:41:53.337457 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2ndf\" (UniqueName: \"kubernetes.io/projected/de8fffe4-e342-4016-a543-c65edd216c52-kube-api-access-x2ndf\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.337483 master-0 kubenswrapper[33867]: I0219 03:41:53.337479 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8fffe4-e342-4016-a543-c65edd216c52-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.352289 master-0 kubenswrapper[33867]: I0219 03:41:53.351159 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ed5cbcb-0a9e-4561-b21e-0c84b806e725" (UID: "6ed5cbcb-0a9e-4561-b21e-0c84b806e725"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:41:53.365284 master-0 kubenswrapper[33867]: I0219 03:41:53.357515 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-kube-api-access-pzgms" (OuterVolumeSpecName: "kube-api-access-pzgms") pod "6ed5cbcb-0a9e-4561-b21e-0c84b806e725" (UID: "6ed5cbcb-0a9e-4561-b21e-0c84b806e725"). InnerVolumeSpecName "kube-api-access-pzgms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:41:53.446527 master-0 kubenswrapper[33867]: I0219 03:41:53.441447 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzgms\" (UniqueName: \"kubernetes.io/projected/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-kube-api-access-pzgms\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.446527 master-0 kubenswrapper[33867]: I0219 03:41:53.441516 33867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ed5cbcb-0a9e-4561-b21e-0c84b806e725-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:41:53.505132 master-0 kubenswrapper[33867]: I0219 03:41:53.505042 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-360e-account-create-update-mwmgf" event={"ID":"de8fffe4-e342-4016-a543-c65edd216c52","Type":"ContainerDied","Data":"935ca27dbbf42fcfd2ab344e435edfa45f39abc21eaadcba93aa2289ff861f26"} Feb 19 03:41:53.506281 master-0 kubenswrapper[33867]: I0219 03:41:53.506205 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-360e-account-create-update-mwmgf" Feb 19 03:41:53.506927 master-0 kubenswrapper[33867]: I0219 03:41:53.506811 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="935ca27dbbf42fcfd2ab344e435edfa45f39abc21eaadcba93aa2289ff861f26" Feb 19 03:41:53.512592 master-0 kubenswrapper[33867]: I0219 03:41:53.512516 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-internal-api-0" event={"ID":"5f80387f-955e-4858-ad6b-fcfe3585e929","Type":"ContainerStarted","Data":"4271340c6b7105e5e162409fafe1f1c67be423427b0654abe2c044bc7dae0360"} Feb 19 03:41:53.521743 master-0 kubenswrapper[33867]: I0219 03:41:53.521672 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-external-api-0" event={"ID":"115b48b9-768e-4e24-ba50-2d47e507b21b","Type":"ContainerStarted","Data":"7b306433754ce01a567aeacff03333520ddf33d2149c7f812126e99641f6f6d5"} Feb 19 03:41:53.537695 master-0 kubenswrapper[33867]: I0219 03:41:53.537629 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1db7-account-create-update-kprcb" event={"ID":"6ed5cbcb-0a9e-4561-b21e-0c84b806e725","Type":"ContainerDied","Data":"fd4ea2ce7e017c73fd935e42c9bd8a522c021c01136234621865775c60ecaeb8"} Feb 19 03:41:53.537695 master-0 kubenswrapper[33867]: I0219 03:41:53.537694 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd4ea2ce7e017c73fd935e42c9bd8a522c021c01136234621865775c60ecaeb8" Feb 19 03:41:53.537919 master-0 kubenswrapper[33867]: I0219 03:41:53.537773 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1db7-account-create-update-kprcb" Feb 19 03:41:53.545924 master-0 kubenswrapper[33867]: I0219 03:41:53.545858 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-64cdd9cf48-dg7ws" Feb 19 03:41:53.551866 master-0 kubenswrapper[33867]: I0219 03:41:53.551798 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-nrrkp" event={"ID":"c772151f-fa4c-44ae-8d31-3e53872c20e7","Type":"ContainerDied","Data":"4bb76404666bfc6a3c94550fe9b7590042fb09a5ad0fc3bef49002b902f3bc14"} Feb 19 03:41:53.551866 master-0 kubenswrapper[33867]: I0219 03:41:53.551857 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb76404666bfc6a3c94550fe9b7590042fb09a5ad0fc3bef49002b902f3bc14" Feb 19 03:41:53.552213 master-0 kubenswrapper[33867]: I0219 03:41:53.551988 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-nrrkp" Feb 19 03:41:53.841173 master-0 kubenswrapper[33867]: E0219 03:41:53.841071 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ed5cbcb_0a9e_4561_b21e_0c84b806e725.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde8fffe4_e342_4016_a543_c65edd216c52.slice/crio-935ca27dbbf42fcfd2ab344e435edfa45f39abc21eaadcba93aa2289ff861f26\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ed5cbcb_0a9e_4561_b21e_0c84b806e725.slice/crio-fd4ea2ce7e017c73fd935e42c9bd8a522c021c01136234621865775c60ecaeb8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde8fffe4_e342_4016_a543_c65edd216c52.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:41:54.585287 master-0 kubenswrapper[33867]: I0219 03:41:54.584402 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa7ca-default-external-api-0" event={"ID":"115b48b9-768e-4e24-ba50-2d47e507b21b","Type":"ContainerStarted","Data":"8d3d2050d3055d4a6db313dbba669ea5d0737be1d4e1ed344bba84263411412e"} Feb 19 03:41:54.672280 master-0 kubenswrapper[33867]: I0219 03:41:54.671723 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-fa7ca-default-external-api-0" podStartSLOduration=6.671698933 podStartE2EDuration="6.671698933s" podCreationTimestamp="2026-02-19 03:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:41:54.661795923 +0000 UTC m=+1119.958466534" watchObservedRunningTime="2026-02-19 03:41:54.671698933 +0000 UTC m=+1119.968369544" Feb 19 03:41:54.952379 master-0 kubenswrapper[33867]: I0219 03:41:54.951570 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:41:55.390016 master-0 kubenswrapper[33867]: I0219 03:41:55.389964 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-659db66d4-26vz9" Feb 19 03:41:55.392629 master-0 kubenswrapper[33867]: I0219 03:41:55.392124 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-659db66d4-26vz9" Feb 
19 03:41:55.502454 master-0 kubenswrapper[33867]: I0219 03:41:55.502360 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nt89l"] Feb 19 03:41:55.503142 master-0 kubenswrapper[33867]: E0219 03:41:55.503099 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a62e19-1f19-49fd-b843-eafb8bc78662" containerName="mariadb-account-create-update" Feb 19 03:41:55.503142 master-0 kubenswrapper[33867]: I0219 03:41:55.503129 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a62e19-1f19-49fd-b843-eafb8bc78662" containerName="mariadb-account-create-update" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: E0219 03:41:55.503167 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23c38ff-0149-4b73-a4dd-f6aae99512d0" containerName="neutron-api" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: I0219 03:41:55.503180 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23c38ff-0149-4b73-a4dd-f6aae99512d0" containerName="neutron-api" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: E0219 03:41:55.503207 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed5cbcb-0a9e-4561-b21e-0c84b806e725" containerName="mariadb-account-create-update" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: I0219 03:41:55.503216 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed5cbcb-0a9e-4561-b21e-0c84b806e725" containerName="mariadb-account-create-update" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: E0219 03:41:55.503237 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afeaeae-53cb-4753-8240-ed7c0a892395" containerName="mariadb-database-create" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: I0219 03:41:55.503245 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afeaeae-53cb-4753-8240-ed7c0a892395" containerName="mariadb-database-create" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: E0219 03:41:55.503286 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e338259-396c-42e3-9a9d-235ec62fb521" containerName="mariadb-database-create" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: I0219 03:41:55.503298 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e338259-396c-42e3-9a9d-235ec62fb521" containerName="mariadb-database-create" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: E0219 03:41:55.503313 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8fffe4-e342-4016-a543-c65edd216c52" containerName="mariadb-account-create-update" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: I0219 03:41:55.503322 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8fffe4-e342-4016-a543-c65edd216c52" containerName="mariadb-account-create-update" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: E0219 03:41:55.503354 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c772151f-fa4c-44ae-8d31-3e53872c20e7" containerName="ironic-inspector-db-sync" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: I0219 03:41:55.503361 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c772151f-fa4c-44ae-8d31-3e53872c20e7" containerName="ironic-inspector-db-sync" Feb 19 03:41:55.503372 master-0 kubenswrapper[33867]: E0219 03:41:55.503374 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765534b3-48eb-4db3-9413-fbe831f2bf9f" containerName="mariadb-database-create" Feb 19 03:41:55.503372 master-0 
kubenswrapper[33867]: I0219 03:41:55.503382 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="765534b3-48eb-4db3-9413-fbe831f2bf9f" containerName="mariadb-database-create" Feb 19 03:41:55.504305 master-0 kubenswrapper[33867]: E0219 03:41:55.503415 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23c38ff-0149-4b73-a4dd-f6aae99512d0" containerName="neutron-httpd" Feb 19 03:41:55.504305 master-0 kubenswrapper[33867]: I0219 03:41:55.503425 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23c38ff-0149-4b73-a4dd-f6aae99512d0" containerName="neutron-httpd" Feb 19 03:41:55.504305 master-0 kubenswrapper[33867]: I0219 03:41:55.503746 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed5cbcb-0a9e-4561-b21e-0c84b806e725" containerName="mariadb-account-create-update" Feb 19 03:41:55.504305 master-0 kubenswrapper[33867]: I0219 03:41:55.503792 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23c38ff-0149-4b73-a4dd-f6aae99512d0" containerName="neutron-api" Feb 19 03:41:55.504305 master-0 kubenswrapper[33867]: I0219 03:41:55.503823 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e338259-396c-42e3-9a9d-235ec62fb521" containerName="mariadb-database-create" Feb 19 03:41:55.504305 master-0 kubenswrapper[33867]: I0219 03:41:55.503858 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="92a62e19-1f19-49fd-b843-eafb8bc78662" containerName="mariadb-account-create-update" Feb 19 03:41:55.504305 master-0 kubenswrapper[33867]: I0219 03:41:55.503874 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="765534b3-48eb-4db3-9413-fbe831f2bf9f" containerName="mariadb-database-create" Feb 19 03:41:55.504305 master-0 kubenswrapper[33867]: I0219 03:41:55.503889 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afeaeae-53cb-4753-8240-ed7c0a892395" containerName="mariadb-database-create" Feb 19 03:41:55.504305 master-0 kubenswrapper[33867]: I0219 03:41:55.503898 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23c38ff-0149-4b73-a4dd-f6aae99512d0" containerName="neutron-httpd" Feb 19 03:41:55.504305 master-0 kubenswrapper[33867]: I0219 03:41:55.503912 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c772151f-fa4c-44ae-8d31-3e53872c20e7" containerName="ironic-inspector-db-sync" Feb 19 03:41:55.504305 master-0 kubenswrapper[33867]: I0219 03:41:55.503933 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8fffe4-e342-4016-a543-c65edd216c52" containerName="mariadb-account-create-update" Feb 19 03:41:55.505077 master-0 kubenswrapper[33867]: I0219 03:41:55.505022 33867 util.go:30] "No sandbox for pod can be found. 
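The cpu_manager / memory_manager "RemoveStaleState" block above fires while admitting openstack/nova-cell0-conductor-db-sync-nt89l: CPU-set and memory assignments still recorded for containers of the just-completed database and account-create jobs are pruned, because those pod UIDs no longer exist on the node. A toy version of that pruning, with invented types, looks like this:

// Sketch of the stale-state cleanup described by the cpu_manager lines:
// drop assignments whose owning pod is gone before admitting the next pod.
package main

import "fmt"

type containerKey struct {
	podUID        string
	containerName string
}

func removeStaleState(assignments map[containerKey]string, activePods map[string]bool) {
	for key := range assignments {
		if activePods[key.podUID] {
			continue // pod still exists, keep its assignment
		}
		fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
			key.podUID, key.containerName)
		delete(assignments, key) // the "Deleted CPUSet assignment" step
	}
}

func main() {
	assignments := map[containerKey]string{
		{podUID: "92a62e19-1f19-49fd-b843-eafb8bc78662", containerName: "mariadb-account-create-update"}: "0-3",
	}
	removeStaleState(assignments, map[string]bool{}) // that pod UID is no longer active
	fmt.Println(len(assignments))                    // 0
}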
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.514229 master-0 kubenswrapper[33867]: I0219 03:41:55.513872 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 03:41:55.515625 master-0 kubenswrapper[33867]: I0219 03:41:55.514219 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 03:41:55.515625 master-0 kubenswrapper[33867]: I0219 03:41:55.514483 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nt89l"] Feb 19 03:41:55.550584 master-0 kubenswrapper[33867]: I0219 03:41:55.546967 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.550584 master-0 kubenswrapper[33867]: I0219 03:41:55.547054 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-scripts\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.550584 master-0 kubenswrapper[33867]: I0219 03:41:55.547163 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46p22\" (UniqueName: \"kubernetes.io/projected/89845d0a-587f-448f-802a-16572691093c-kube-api-access-46p22\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.550584 master-0 kubenswrapper[33867]: I0219 03:41:55.547270 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-config-data\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.662891 master-0 kubenswrapper[33867]: I0219 03:41:55.630237 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-854445f596-6p84s"] Feb 19 03:41:55.662891 master-0 kubenswrapper[33867]: I0219 03:41:55.630684 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-854445f596-6p84s" podUID="6551788d-2eaa-4ed9-ac31-7f5e9edccf42" containerName="placement-log" containerID="cri-o://fbde71e6414ff31ce5a67a30596b643ba032d9af6b93137b258f3cae7fe4b717" gracePeriod=30 Feb 19 03:41:55.662891 master-0 kubenswrapper[33867]: I0219 03:41:55.630794 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-854445f596-6p84s" podUID="6551788d-2eaa-4ed9-ac31-7f5e9edccf42" containerName="placement-api" containerID="cri-o://a15c4d58c92606ea1aafe4e9b79b0ceb640bf767a1473919f174d4180301e579" gracePeriod=30 Feb 19 03:41:55.662891 master-0 kubenswrapper[33867]: I0219 03:41:55.649606 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46p22\" (UniqueName: 
\"kubernetes.io/projected/89845d0a-587f-448f-802a-16572691093c-kube-api-access-46p22\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.662891 master-0 kubenswrapper[33867]: I0219 03:41:55.649728 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-config-data\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.662891 master-0 kubenswrapper[33867]: I0219 03:41:55.649848 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.662891 master-0 kubenswrapper[33867]: I0219 03:41:55.649869 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-scripts\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.662891 master-0 kubenswrapper[33867]: I0219 03:41:55.654605 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-config-data\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.662891 master-0 kubenswrapper[33867]: I0219 03:41:55.660110 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-scripts\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.672450 master-0 kubenswrapper[33867]: I0219 03:41:55.664440 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.696442 master-0 kubenswrapper[33867]: I0219 03:41:55.695084 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46p22\" (UniqueName: \"kubernetes.io/projected/89845d0a-587f-448f-802a-16572691093c-kube-api-access-46p22\") pod \"nova-cell0-conductor-db-sync-nt89l\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.844691 master-0 kubenswrapper[33867]: I0219 03:41:55.844621 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:41:55.991886 master-0 kubenswrapper[33867]: I0219 03:41:55.991803 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-766d44d5cc-hz6f7"] Feb 19 03:41:55.998734 master-0 kubenswrapper[33867]: I0219 03:41:55.998654 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.012666 master-0 kubenswrapper[33867]: I0219 03:41:56.010246 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-766d44d5cc-hz6f7"] Feb 19 03:41:56.113169 master-0 kubenswrapper[33867]: I0219 03:41:56.113079 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-config\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.113466 master-0 kubenswrapper[33867]: I0219 03:41:56.113205 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-sb\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.116745 master-0 kubenswrapper[33867]: I0219 03:41:56.115325 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-nb\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.124448 master-0 kubenswrapper[33867]: I0219 03:41:56.123867 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngjjm\" (UniqueName: \"kubernetes.io/projected/50b1a298-c4b0-4cfd-aa2a-163668bef18f-kube-api-access-ngjjm\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.124448 master-0 kubenswrapper[33867]: I0219 03:41:56.124068 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-swift-storage-0\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.124448 master-0 kubenswrapper[33867]: I0219 03:41:56.124210 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-svc\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.182873 master-0 kubenswrapper[33867]: I0219 03:41:56.182210 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 19 03:41:56.199055 master-0 kubenswrapper[33867]: I0219 03:41:56.198997 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 19 03:41:56.204251 master-0 kubenswrapper[33867]: I0219 03:41:56.204191 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 19 03:41:56.204484 master-0 kubenswrapper[33867]: I0219 03:41:56.204365 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 19 03:41:56.204736 master-0 kubenswrapper[33867]: I0219 03:41:56.204666 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 19 03:41:56.205744 master-0 kubenswrapper[33867]: I0219 03:41:56.205709 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227208 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-svc\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227314 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-config\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227344 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-sb\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227374 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-config\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227416 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttj79\" (UniqueName: \"kubernetes.io/projected/5f430678-69fa-4db6-a341-fbd2c75c7a2f-kube-api-access-ttj79\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227445 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227479 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-nb\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: 
\"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227518 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227601 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5f430678-69fa-4db6-a341-fbd2c75c7a2f-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227657 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-scripts\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227724 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngjjm\" (UniqueName: \"kubernetes.io/projected/50b1a298-c4b0-4cfd-aa2a-163668bef18f-kube-api-access-ngjjm\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227751 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.228850 master-0 kubenswrapper[33867]: I0219 03:41:56.227816 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-swift-storage-0\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.230030 master-0 kubenswrapper[33867]: I0219 03:41:56.228924 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-swift-storage-0\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.230030 master-0 kubenswrapper[33867]: I0219 03:41:56.229761 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-svc\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.230599 master-0 kubenswrapper[33867]: I0219 03:41:56.230560 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-config\") pod 
\"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.231492 master-0 kubenswrapper[33867]: I0219 03:41:56.231451 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-sb\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.244723 master-0 kubenswrapper[33867]: I0219 03:41:56.235409 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-nb\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.269997 master-0 kubenswrapper[33867]: I0219 03:41:56.266623 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngjjm\" (UniqueName: \"kubernetes.io/projected/50b1a298-c4b0-4cfd-aa2a-163668bef18f-kube-api-access-ngjjm\") pod \"dnsmasq-dns-766d44d5cc-hz6f7\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.337438 master-0 kubenswrapper[33867]: I0219 03:41:56.337199 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5f430678-69fa-4db6-a341-fbd2c75c7a2f-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.337438 master-0 kubenswrapper[33867]: I0219 03:41:56.337297 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-scripts\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.337438 master-0 kubenswrapper[33867]: I0219 03:41:56.337402 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.337808 master-0 kubenswrapper[33867]: I0219 03:41:56.337505 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-config\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.337808 master-0 kubenswrapper[33867]: I0219 03:41:56.337546 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttj79\" (UniqueName: \"kubernetes.io/projected/5f430678-69fa-4db6-a341-fbd2c75c7a2f-kube-api-access-ttj79\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.337808 master-0 kubenswrapper[33867]: I0219 03:41:56.337581 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic-inspector-dhcp-hostsdir\") pod 
\"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.337808 master-0 kubenswrapper[33867]: I0219 03:41:56.337636 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.338165 master-0 kubenswrapper[33867]: I0219 03:41:56.338140 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.349919 master-0 kubenswrapper[33867]: I0219 03:41:56.349851 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.357306 master-0 kubenswrapper[33867]: I0219 03:41:56.354194 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-scripts\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.357578 master-0 kubenswrapper[33867]: I0219 03:41:56.357407 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-config\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.368682 master-0 kubenswrapper[33867]: I0219 03:41:56.368592 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.370430 master-0 kubenswrapper[33867]: I0219 03:41:56.370306 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5f430678-69fa-4db6-a341-fbd2c75c7a2f-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.371568 master-0 kubenswrapper[33867]: I0219 03:41:56.371530 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttj79\" (UniqueName: \"kubernetes.io/projected/5f430678-69fa-4db6-a341-fbd2c75c7a2f-kube-api-access-ttj79\") pod \"ironic-inspector-0\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " pod="openstack/ironic-inspector-0" Feb 19 03:41:56.373411 master-0 kubenswrapper[33867]: I0219 03:41:56.373348 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:41:56.538508 master-0 kubenswrapper[33867]: I0219 03:41:56.537357 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 19 03:41:56.639444 master-0 kubenswrapper[33867]: I0219 03:41:56.639361 33867 generic.go:334] "Generic (PLEG): container finished" podID="6551788d-2eaa-4ed9-ac31-7f5e9edccf42" containerID="fbde71e6414ff31ce5a67a30596b643ba032d9af6b93137b258f3cae7fe4b717" exitCode=143 Feb 19 03:41:56.639444 master-0 kubenswrapper[33867]: I0219 03:41:56.639434 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-854445f596-6p84s" event={"ID":"6551788d-2eaa-4ed9-ac31-7f5e9edccf42","Type":"ContainerDied","Data":"fbde71e6414ff31ce5a67a30596b643ba032d9af6b93137b258f3cae7fe4b717"} Feb 19 03:41:59.182867 master-0 kubenswrapper[33867]: I0219 03:41:59.182649 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 19 03:41:59.729066 master-0 kubenswrapper[33867]: I0219 03:41:59.728725 33867 generic.go:334] "Generic (PLEG): container finished" podID="6551788d-2eaa-4ed9-ac31-7f5e9edccf42" containerID="a15c4d58c92606ea1aafe4e9b79b0ceb640bf767a1473919f174d4180301e579" exitCode=0 Feb 19 03:41:59.729066 master-0 kubenswrapper[33867]: I0219 03:41:59.728781 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-854445f596-6p84s" event={"ID":"6551788d-2eaa-4ed9-ac31-7f5e9edccf42","Type":"ContainerDied","Data":"a15c4d58c92606ea1aafe4e9b79b0ceb640bf767a1473919f174d4180301e579"} Feb 19 03:41:59.828281 master-0 kubenswrapper[33867]: I0219 03:41:59.825647 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:59.828281 master-0 kubenswrapper[33867]: I0219 03:41:59.826230 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:59.887603 master-0 kubenswrapper[33867]: I0219 03:41:59.887539 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:41:59.975500 master-0 kubenswrapper[33867]: I0219 03:41:59.975405 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:42:00.030001 master-0 kubenswrapper[33867]: I0219 03:42:00.029721 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b57897cc4-nd9ff" Feb 19 03:42:00.745298 master-0 kubenswrapper[33867]: I0219 03:42:00.742890 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:42:00.745298 master-0 kubenswrapper[33867]: I0219 03:42:00.742965 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:42:01.024416 master-0 kubenswrapper[33867]: I0219 03:42:01.022701 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:42:01.024416 master-0 kubenswrapper[33867]: I0219 03:42:01.022776 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:42:01.079803 master-0 kubenswrapper[33867]: I0219 03:42:01.079733 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:42:01.096960 master-0 kubenswrapper[33867]: I0219 03:42:01.096674 33867 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:42:01.441201 master-0 kubenswrapper[33867]: I0219 03:42:01.441153 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-854445f596-6p84s" Feb 19 03:42:01.566654 master-0 kubenswrapper[33867]: I0219 03:42:01.556336 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-scripts\") pod \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " Feb 19 03:42:01.566654 master-0 kubenswrapper[33867]: I0219 03:42:01.556422 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh5qg\" (UniqueName: \"kubernetes.io/projected/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-kube-api-access-lh5qg\") pod \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " Feb 19 03:42:01.566654 master-0 kubenswrapper[33867]: I0219 03:42:01.556510 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-public-tls-certs\") pod \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " Feb 19 03:42:01.566654 master-0 kubenswrapper[33867]: I0219 03:42:01.556787 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-combined-ca-bundle\") pod \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " Feb 19 03:42:01.566654 master-0 kubenswrapper[33867]: I0219 03:42:01.556825 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-logs\") pod \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " Feb 19 03:42:01.566654 master-0 kubenswrapper[33867]: I0219 03:42:01.556896 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-internal-tls-certs\") pod \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " Feb 19 03:42:01.566654 master-0 kubenswrapper[33867]: I0219 03:42:01.557005 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-config-data\") pod \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\" (UID: \"6551788d-2eaa-4ed9-ac31-7f5e9edccf42\") " Feb 19 03:42:01.566654 master-0 kubenswrapper[33867]: I0219 03:42:01.565144 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-scripts" (OuterVolumeSpecName: "scripts") pod "6551788d-2eaa-4ed9-ac31-7f5e9edccf42" (UID: "6551788d-2eaa-4ed9-ac31-7f5e9edccf42"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:42:01.566654 master-0 kubenswrapper[33867]: I0219 03:42:01.565338 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-kube-api-access-lh5qg" (OuterVolumeSpecName: "kube-api-access-lh5qg") pod "6551788d-2eaa-4ed9-ac31-7f5e9edccf42" (UID: "6551788d-2eaa-4ed9-ac31-7f5e9edccf42"). InnerVolumeSpecName "kube-api-access-lh5qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:42:01.566654 master-0 kubenswrapper[33867]: I0219 03:42:01.565794 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-logs" (OuterVolumeSpecName: "logs") pod "6551788d-2eaa-4ed9-ac31-7f5e9edccf42" (UID: "6551788d-2eaa-4ed9-ac31-7f5e9edccf42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:42:01.671059 master-0 kubenswrapper[33867]: I0219 03:42:01.670905 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:01.671059 master-0 kubenswrapper[33867]: I0219 03:42:01.670975 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh5qg\" (UniqueName: \"kubernetes.io/projected/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-kube-api-access-lh5qg\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:01.671059 master-0 kubenswrapper[33867]: I0219 03:42:01.670987 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:01.684052 master-0 kubenswrapper[33867]: I0219 03:42:01.683313 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-config-data" (OuterVolumeSpecName: "config-data") pod "6551788d-2eaa-4ed9-ac31-7f5e9edccf42" (UID: "6551788d-2eaa-4ed9-ac31-7f5e9edccf42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:42:01.703650 master-0 kubenswrapper[33867]: I0219 03:42:01.703378 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nt89l"] Feb 19 03:42:01.712276 master-0 kubenswrapper[33867]: I0219 03:42:01.711967 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6551788d-2eaa-4ed9-ac31-7f5e9edccf42" (UID: "6551788d-2eaa-4ed9-ac31-7f5e9edccf42"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:42:01.774600 master-0 kubenswrapper[33867]: I0219 03:42:01.774537 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:01.774600 master-0 kubenswrapper[33867]: I0219 03:42:01.774600 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:01.787293 master-0 kubenswrapper[33867]: I0219 03:42:01.784408 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-766d44d5cc-hz6f7"] Feb 19 03:42:01.799264 master-0 kubenswrapper[33867]: I0219 03:42:01.799161 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nt89l" event={"ID":"89845d0a-587f-448f-802a-16572691093c","Type":"ContainerStarted","Data":"5eb8d1ffb84a560cb4d2841c232a9d9774b511afb522a47dc95a686f4dfead17"} Feb 19 03:42:01.802166 master-0 kubenswrapper[33867]: I0219 03:42:01.802109 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-854445f596-6p84s" event={"ID":"6551788d-2eaa-4ed9-ac31-7f5e9edccf42","Type":"ContainerDied","Data":"d45dfa625371938ef7aa2fe76d908b9585173cac7afaea948065a4630f80e583"} Feb 19 03:42:01.802166 master-0 kubenswrapper[33867]: I0219 03:42:01.802156 33867 scope.go:117] "RemoveContainer" containerID="a15c4d58c92606ea1aafe4e9b79b0ceb640bf767a1473919f174d4180301e579" Feb 19 03:42:01.802421 master-0 kubenswrapper[33867]: I0219 03:42:01.802348 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-854445f596-6p84s" Feb 19 03:42:01.824567 master-0 kubenswrapper[33867]: I0219 03:42:01.822996 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" event={"ID":"50b1a298-c4b0-4cfd-aa2a-163668bef18f","Type":"ContainerStarted","Data":"ae2a38f0aa3299d91eabdebf07a7550253cb3be951a4327b025e5db96abb3eed"} Feb 19 03:42:01.824567 master-0 kubenswrapper[33867]: I0219 03:42:01.823080 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:42:01.824567 master-0 kubenswrapper[33867]: I0219 03:42:01.823100 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:42:01.861735 master-0 kubenswrapper[33867]: I0219 03:42:01.861632 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6551788d-2eaa-4ed9-ac31-7f5e9edccf42" (UID: "6551788d-2eaa-4ed9-ac31-7f5e9edccf42"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:42:01.880272 master-0 kubenswrapper[33867]: I0219 03:42:01.880185 33867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:01.905732 master-0 kubenswrapper[33867]: I0219 03:42:01.905409 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6551788d-2eaa-4ed9-ac31-7f5e9edccf42" (UID: "6551788d-2eaa-4ed9-ac31-7f5e9edccf42"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:42:01.915665 master-0 kubenswrapper[33867]: I0219 03:42:01.913299 33867 scope.go:117] "RemoveContainer" containerID="fbde71e6414ff31ce5a67a30596b643ba032d9af6b93137b258f3cae7fe4b717" Feb 19 03:42:01.987295 master-0 kubenswrapper[33867]: I0219 03:42:01.984400 33867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6551788d-2eaa-4ed9-ac31-7f5e9edccf42-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:02.743234 master-0 kubenswrapper[33867]: I0219 03:42:02.743139 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-054a4-api-0" Feb 19 03:42:02.857065 master-0 kubenswrapper[33867]: I0219 03:42:02.856937 33867 generic.go:334] "Generic (PLEG): container finished" podID="50b1a298-c4b0-4cfd-aa2a-163668bef18f" containerID="3afb1359d6c50a81f3b0ff38176c94627b35b68dc2fc6b78dd9e20ed0faa36b4" exitCode=0 Feb 19 03:42:02.857065 master-0 kubenswrapper[33867]: I0219 03:42:02.857035 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" event={"ID":"50b1a298-c4b0-4cfd-aa2a-163668bef18f","Type":"ContainerDied","Data":"3afb1359d6c50a81f3b0ff38176c94627b35b68dc2fc6b78dd9e20ed0faa36b4"} Feb 19 03:42:02.876505 master-0 kubenswrapper[33867]: I0219 03:42:02.876450 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"9c830f8b-3d33-4879-91b9-bd374a1e695b","Type":"ContainerStarted","Data":"6268b4ee022718f2d71c2e6dda861d4c69ac3866c1ba3a66b3efe63aa41c28c6"} Feb 19 03:42:03.001416 master-0 kubenswrapper[33867]: I0219 03:42:03.000435 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-854445f596-6p84s"] Feb 19 03:42:03.019142 master-0 kubenswrapper[33867]: I0219 03:42:03.019053 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-854445f596-6p84s"] Feb 19 03:42:03.082035 master-0 kubenswrapper[33867]: I0219 03:42:03.081990 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:42:03.082668 master-0 kubenswrapper[33867]: I0219 03:42:03.082654 33867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:42:03.105319 master-0 kubenswrapper[33867]: I0219 03:42:03.105251 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 19 03:42:03.186944 master-0 kubenswrapper[33867]: I0219 03:42:03.186832 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-fa7ca-default-internal-api-0" Feb 19 03:42:06.352495 master-0 kubenswrapper[33867]: I0219 03:42:06.352397 33867 
generic.go:334] "Generic (PLEG): container finished" podID="5f430678-69fa-4db6-a341-fbd2c75c7a2f" containerID="79007bc81ee32495264514c2dfbc4a3f6ef417bce308f0083618c09f578bcb77" exitCode=0 Feb 19 03:42:06.379251 master-0 kubenswrapper[33867]: I0219 03:42:06.379107 33867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:42:06.379251 master-0 kubenswrapper[33867]: I0219 03:42:06.379138 33867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 03:42:06.404442 master-0 kubenswrapper[33867]: I0219 03:42:06.404303 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6551788d-2eaa-4ed9-ac31-7f5e9edccf42" path="/var/lib/kubelet/pods/6551788d-2eaa-4ed9-ac31-7f5e9edccf42/volumes" Feb 19 03:42:06.407797 master-0 kubenswrapper[33867]: I0219 03:42:06.406050 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" event={"ID":"50b1a298-c4b0-4cfd-aa2a-163668bef18f","Type":"ContainerStarted","Data":"8a0072a9c94ef7f160b1700f6d08782b0b92352747719c7b36acb128c24987d7"} Feb 19 03:42:06.407797 master-0 kubenswrapper[33867]: I0219 03:42:06.406100 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5f430678-69fa-4db6-a341-fbd2c75c7a2f","Type":"ContainerDied","Data":"79007bc81ee32495264514c2dfbc4a3f6ef417bce308f0083618c09f578bcb77"} Feb 19 03:42:06.407797 master-0 kubenswrapper[33867]: I0219 03:42:06.406121 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5f430678-69fa-4db6-a341-fbd2c75c7a2f","Type":"ContainerStarted","Data":"629c074f80ea3064137b8b625298b772e1f583a0eb1c797985c44350a552bc1d"} Feb 19 03:42:06.445039 master-0 kubenswrapper[33867]: I0219 03:42:06.444912 33867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-054a4-api-0" podUID="da327fb4-7852-4866-bb8f-8b2930854e24" containerName="cinder-api" probeResult="failure" output="Get \"https://10.128.0.248:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:42:06.560380 master-0 kubenswrapper[33867]: I0219 03:42:06.554875 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" podStartSLOduration=11.554850717 podStartE2EDuration="11.554850717s" podCreationTimestamp="2026-02-19 03:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:42:06.518021333 +0000 UTC m=+1131.814691944" watchObservedRunningTime="2026-02-19 03:42:06.554850717 +0000 UTC m=+1131.851521328" Feb 19 03:42:06.577860 master-0 kubenswrapper[33867]: I0219 03:42:06.574156 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:42:06.577860 master-0 kubenswrapper[33867]: I0219 03:42:06.574221 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-fa7ca-default-external-api-0" Feb 19 03:42:07.263976 master-0 kubenswrapper[33867]: I0219 03:42:07.263722 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 19 03:42:07.312716 master-0 kubenswrapper[33867]: I0219 03:42:07.312647 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-scripts\") pod \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " Feb 19 03:42:07.312951 master-0 kubenswrapper[33867]: I0219 03:42:07.312849 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttj79\" (UniqueName: \"kubernetes.io/projected/5f430678-69fa-4db6-a341-fbd2c75c7a2f-kube-api-access-ttj79\") pod \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " Feb 19 03:42:07.312951 master-0 kubenswrapper[33867]: I0219 03:42:07.312879 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic\") pod \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " Feb 19 03:42:07.313024 master-0 kubenswrapper[33867]: I0219 03:42:07.312950 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5f430678-69fa-4db6-a341-fbd2c75c7a2f-etc-podinfo\") pod \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " Feb 19 03:42:07.313024 master-0 kubenswrapper[33867]: I0219 03:42:07.313015 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " Feb 19 03:42:07.313092 master-0 kubenswrapper[33867]: I0219 03:42:07.313055 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-config\") pod \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " Feb 19 03:42:07.313271 master-0 kubenswrapper[33867]: I0219 03:42:07.313220 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-combined-ca-bundle\") pod \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\" (UID: \"5f430678-69fa-4db6-a341-fbd2c75c7a2f\") " Feb 19 03:42:07.313981 master-0 kubenswrapper[33867]: I0219 03:42:07.313894 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "5f430678-69fa-4db6-a341-fbd2c75c7a2f" (UID: "5f430678-69fa-4db6-a341-fbd2c75c7a2f"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:42:07.318231 master-0 kubenswrapper[33867]: I0219 03:42:07.317027 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f430678-69fa-4db6-a341-fbd2c75c7a2f-kube-api-access-ttj79" (OuterVolumeSpecName: "kube-api-access-ttj79") pod "5f430678-69fa-4db6-a341-fbd2c75c7a2f" (UID: "5f430678-69fa-4db6-a341-fbd2c75c7a2f"). InnerVolumeSpecName "kube-api-access-ttj79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:42:07.318231 master-0 kubenswrapper[33867]: I0219 03:42:07.317327 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "5f430678-69fa-4db6-a341-fbd2c75c7a2f" (UID: "5f430678-69fa-4db6-a341-fbd2c75c7a2f"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:42:07.319562 master-0 kubenswrapper[33867]: I0219 03:42:07.319491 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-scripts" (OuterVolumeSpecName: "scripts") pod "5f430678-69fa-4db6-a341-fbd2c75c7a2f" (UID: "5f430678-69fa-4db6-a341-fbd2c75c7a2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:42:07.322462 master-0 kubenswrapper[33867]: I0219 03:42:07.322382 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5f430678-69fa-4db6-a341-fbd2c75c7a2f-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "5f430678-69fa-4db6-a341-fbd2c75c7a2f" (UID: "5f430678-69fa-4db6-a341-fbd2c75c7a2f"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 03:42:07.325414 master-0 kubenswrapper[33867]: I0219 03:42:07.325233 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-config" (OuterVolumeSpecName: "config") pod "5f430678-69fa-4db6-a341-fbd2c75c7a2f" (UID: "5f430678-69fa-4db6-a341-fbd2c75c7a2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:42:07.369473 master-0 kubenswrapper[33867]: I0219 03:42:07.369407 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f430678-69fa-4db6-a341-fbd2c75c7a2f" (UID: "5f430678-69fa-4db6-a341-fbd2c75c7a2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:42:07.370413 master-0 kubenswrapper[33867]: I0219 03:42:07.370349 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5f430678-69fa-4db6-a341-fbd2c75c7a2f","Type":"ContainerDied","Data":"629c074f80ea3064137b8b625298b772e1f583a0eb1c797985c44350a552bc1d"} Feb 19 03:42:07.370480 master-0 kubenswrapper[33867]: I0219 03:42:07.370440 33867 scope.go:117] "RemoveContainer" containerID="79007bc81ee32495264514c2dfbc4a3f6ef417bce308f0083618c09f578bcb77" Feb 19 03:42:07.370718 master-0 kubenswrapper[33867]: I0219 03:42:07.370682 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 19 03:42:07.371896 master-0 kubenswrapper[33867]: I0219 03:42:07.371856 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:42:07.417325 master-0 kubenswrapper[33867]: I0219 03:42:07.416590 33867 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:07.417325 master-0 kubenswrapper[33867]: I0219 03:42:07.416654 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:07.417325 master-0 kubenswrapper[33867]: I0219 03:42:07.416672 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:07.417325 master-0 kubenswrapper[33867]: I0219 03:42:07.416684 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f430678-69fa-4db6-a341-fbd2c75c7a2f-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:07.417325 master-0 kubenswrapper[33867]: I0219 03:42:07.416697 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttj79\" (UniqueName: \"kubernetes.io/projected/5f430678-69fa-4db6-a341-fbd2c75c7a2f-kube-api-access-ttj79\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:07.417325 master-0 kubenswrapper[33867]: I0219 03:42:07.416708 33867 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5f430678-69fa-4db6-a341-fbd2c75c7a2f-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:07.417325 master-0 kubenswrapper[33867]: I0219 03:42:07.416721 33867 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5f430678-69fa-4db6-a341-fbd2c75c7a2f-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:07.554424 master-0 kubenswrapper[33867]: I0219 03:42:07.554335 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 19 03:42:07.564823 master-0 kubenswrapper[33867]: I0219 03:42:07.564742 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Feb 19 03:42:10.675659 master-0 kubenswrapper[33867]: I0219 03:42:07.610811 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 19 03:42:10.675659 master-0 kubenswrapper[33867]: E0219 03:42:07.611432 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6551788d-2eaa-4ed9-ac31-7f5e9edccf42" containerName="placement-api" Feb 19 03:42:10.675659 master-0 kubenswrapper[33867]: I0219 03:42:07.611450 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6551788d-2eaa-4ed9-ac31-7f5e9edccf42" containerName="placement-api" Feb 19 03:42:10.675659 master-0 kubenswrapper[33867]: E0219 03:42:07.611471 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6551788d-2eaa-4ed9-ac31-7f5e9edccf42" containerName="placement-log" Feb 19 03:42:10.675659 master-0 kubenswrapper[33867]: I0219 03:42:07.611477 33867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6551788d-2eaa-4ed9-ac31-7f5e9edccf42" containerName="placement-log" Feb 19 03:42:10.675659 master-0 kubenswrapper[33867]: E0219 03:42:07.611512 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f430678-69fa-4db6-a341-fbd2c75c7a2f" containerName="ironic-python-agent-init" Feb 19 03:42:10.675659 master-0 kubenswrapper[33867]: I0219 03:42:07.611519 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f430678-69fa-4db6-a341-fbd2c75c7a2f" containerName="ironic-python-agent-init" Feb 19 03:42:10.675659 master-0 kubenswrapper[33867]: I0219 03:42:07.611743 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f430678-69fa-4db6-a341-fbd2c75c7a2f" containerName="ironic-python-agent-init" Feb 19 03:42:10.675659 master-0 kubenswrapper[33867]: I0219 03:42:07.611773 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6551788d-2eaa-4ed9-ac31-7f5e9edccf42" containerName="placement-api" Feb 19 03:42:10.675659 master-0 kubenswrapper[33867]: I0219 03:42:07.611798 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6551788d-2eaa-4ed9-ac31-7f5e9edccf42" containerName="placement-log" Feb 19 03:42:10.675659 master-0 kubenswrapper[33867]: I0219 03:42:07.616891 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 19 03:42:10.751943 master-0 kubenswrapper[33867]: I0219 03:42:10.751724 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Feb 19 03:42:10.752227 master-0 kubenswrapper[33867]: I0219 03:42:10.752060 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 19 03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.759878 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 19 03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.760166 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Feb 19 03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.761275 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 19 03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.764356 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.764445 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.764497 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 
03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.764633 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-scripts\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.764747 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-config\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.764847 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.764973 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkp2k\" (UniqueName: \"kubernetes.io/projected/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-kube-api-access-qkp2k\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.765048 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.786452 master-0 kubenswrapper[33867]: I0219 03:42:10.765131 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.842292 master-0 kubenswrapper[33867]: I0219 03:42:10.840244 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f430678-69fa-4db6-a341-fbd2c75c7a2f" path="/var/lib/kubelet/pods/5f430678-69fa-4db6-a341-fbd2c75c7a2f/volumes" Feb 19 03:42:10.869312 master-0 kubenswrapper[33867]: I0219 03:42:10.867127 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-config\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.869312 master-0 kubenswrapper[33867]: I0219 03:42:10.867242 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.869312 master-0 kubenswrapper[33867]: I0219 03:42:10.867427 33867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qkp2k\" (UniqueName: \"kubernetes.io/projected/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-kube-api-access-qkp2k\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.869312 master-0 kubenswrapper[33867]: I0219 03:42:10.867476 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.869312 master-0 kubenswrapper[33867]: I0219 03:42:10.867537 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.869312 master-0 kubenswrapper[33867]: I0219 03:42:10.867594 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.869312 master-0 kubenswrapper[33867]: I0219 03:42:10.867628 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.869312 master-0 kubenswrapper[33867]: I0219 03:42:10.867669 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.869312 master-0 kubenswrapper[33867]: I0219 03:42:10.867763 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-scripts\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.873285 master-0 kubenswrapper[33867]: I0219 03:42:10.872378 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-scripts\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.883293 master-0 kubenswrapper[33867]: I0219 03:42:10.880401 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-config\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.883529 master-0 kubenswrapper[33867]: I0219 03:42:10.883431 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: 
\"kubernetes.io/empty-dir/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.886403 master-0 kubenswrapper[33867]: I0219 03:42:10.884806 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.886403 master-0 kubenswrapper[33867]: I0219 03:42:10.885313 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.886403 master-0 kubenswrapper[33867]: I0219 03:42:10.885365 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.886644 master-0 kubenswrapper[33867]: I0219 03:42:10.886609 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:10.890283 master-0 kubenswrapper[33867]: I0219 03:42:10.889271 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:42:10.916605 master-0 kubenswrapper[33867]: I0219 03:42:10.898588 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:11.273285 master-0 kubenswrapper[33867]: I0219 03:42:11.272528 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-054a4-api-0" podUID="da327fb4-7852-4866-bb8f-8b2930854e24" containerName="cinder-api" probeResult="failure" output="Get \"https://10.128.0.248:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:42:11.412407 master-0 kubenswrapper[33867]: I0219 03:42:11.411033 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkp2k\" (UniqueName: \"kubernetes.io/projected/bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868-kube-api-access-qkp2k\") pod \"ironic-inspector-0\" (UID: \"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868\") " pod="openstack/ironic-inspector-0" Feb 19 03:42:11.415216 master-0 kubenswrapper[33867]: I0219 03:42:11.415158 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 19 03:42:11.647918 master-0 kubenswrapper[33867]: I0219 03:42:11.641710 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 19 03:42:11.727207 master-0 kubenswrapper[33867]: I0219 03:42:11.722676 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7989d45967-nbj4z"] Feb 19 03:42:11.727207 master-0 kubenswrapper[33867]: I0219 03:42:11.722975 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" podUID="d3018370-400e-497b-b612-0f8ac987acf7" containerName="dnsmasq-dns" containerID="cri-o://d01cd5765bd85ccd47fb141ec184364829833444bed578b10a95e6370705d5cd" gracePeriod=10 Feb 19 03:42:12.402970 master-0 kubenswrapper[33867]: I0219 03:42:12.402911 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 19 03:42:12.938608 master-0 kubenswrapper[33867]: I0219 03:42:12.938330 33867 generic.go:334] "Generic (PLEG): container finished" podID="9c830f8b-3d33-4879-91b9-bd374a1e695b" containerID="6268b4ee022718f2d71c2e6dda861d4c69ac3866c1ba3a66b3efe63aa41c28c6" exitCode=0 Feb 19 03:42:12.938608 master-0 kubenswrapper[33867]: I0219 03:42:12.938437 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"9c830f8b-3d33-4879-91b9-bd374a1e695b","Type":"ContainerDied","Data":"6268b4ee022718f2d71c2e6dda861d4c69ac3866c1ba3a66b3efe63aa41c28c6"} Feb 19 03:42:12.946988 master-0 kubenswrapper[33867]: I0219 03:42:12.944207 33867 generic.go:334] "Generic (PLEG): container finished" podID="d3018370-400e-497b-b612-0f8ac987acf7" containerID="d01cd5765bd85ccd47fb141ec184364829833444bed578b10a95e6370705d5cd" exitCode=0 Feb 19 03:42:12.946988 master-0 kubenswrapper[33867]: I0219 03:42:12.944267 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" event={"ID":"d3018370-400e-497b-b612-0f8ac987acf7","Type":"ContainerDied","Data":"d01cd5765bd85ccd47fb141ec184364829833444bed578b10a95e6370705d5cd"} Feb 19 03:42:12.951638 master-0 kubenswrapper[33867]: I0219 03:42:12.951563 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868","Type":"ContainerStarted","Data":"5f6f2ae2c73a8f384d1cbe0470e4c1d581f68c9498f8461a8674f87294050950"} Feb 19 03:42:13.202330 master-0 kubenswrapper[33867]: I0219 03:42:13.202247 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:42:13.353477 master-0 kubenswrapper[33867]: I0219 03:42:13.353392 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-swift-storage-0\") pod \"d3018370-400e-497b-b612-0f8ac987acf7\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " Feb 19 03:42:13.353741 master-0 kubenswrapper[33867]: I0219 03:42:13.353700 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-svc\") pod \"d3018370-400e-497b-b612-0f8ac987acf7\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " Feb 19 03:42:13.353782 master-0 kubenswrapper[33867]: I0219 03:42:13.353740 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-config\") pod \"d3018370-400e-497b-b612-0f8ac987acf7\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " Feb 19 03:42:13.353782 master-0 kubenswrapper[33867]: I0219 03:42:13.353773 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-nb\") pod \"d3018370-400e-497b-b612-0f8ac987acf7\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " Feb 19 03:42:13.353846 master-0 kubenswrapper[33867]: I0219 03:42:13.353821 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4hdk\" (UniqueName: \"kubernetes.io/projected/d3018370-400e-497b-b612-0f8ac987acf7-kube-api-access-x4hdk\") pod \"d3018370-400e-497b-b612-0f8ac987acf7\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " Feb 19 03:42:13.353888 master-0 kubenswrapper[33867]: I0219 03:42:13.353862 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-sb\") pod \"d3018370-400e-497b-b612-0f8ac987acf7\" (UID: \"d3018370-400e-497b-b612-0f8ac987acf7\") " Feb 19 03:42:13.360992 master-0 kubenswrapper[33867]: I0219 03:42:13.360907 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3018370-400e-497b-b612-0f8ac987acf7-kube-api-access-x4hdk" (OuterVolumeSpecName: "kube-api-access-x4hdk") pod "d3018370-400e-497b-b612-0f8ac987acf7" (UID: "d3018370-400e-497b-b612-0f8ac987acf7"). InnerVolumeSpecName "kube-api-access-x4hdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:42:13.415463 master-0 kubenswrapper[33867]: I0219 03:42:13.415332 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d3018370-400e-497b-b612-0f8ac987acf7" (UID: "d3018370-400e-497b-b612-0f8ac987acf7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:42:13.418540 master-0 kubenswrapper[33867]: I0219 03:42:13.418470 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-config" (OuterVolumeSpecName: "config") pod "d3018370-400e-497b-b612-0f8ac987acf7" (UID: "d3018370-400e-497b-b612-0f8ac987acf7"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:42:13.419052 master-0 kubenswrapper[33867]: I0219 03:42:13.419027 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3018370-400e-497b-b612-0f8ac987acf7" (UID: "d3018370-400e-497b-b612-0f8ac987acf7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:42:13.431990 master-0 kubenswrapper[33867]: I0219 03:42:13.431933 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d3018370-400e-497b-b612-0f8ac987acf7" (UID: "d3018370-400e-497b-b612-0f8ac987acf7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:42:13.445619 master-0 kubenswrapper[33867]: I0219 03:42:13.445569 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d3018370-400e-497b-b612-0f8ac987acf7" (UID: "d3018370-400e-497b-b612-0f8ac987acf7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:42:13.457235 master-0 kubenswrapper[33867]: I0219 03:42:13.457197 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:13.457235 master-0 kubenswrapper[33867]: I0219 03:42:13.457236 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:13.457428 master-0 kubenswrapper[33867]: I0219 03:42:13.457246 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:13.457428 master-0 kubenswrapper[33867]: I0219 03:42:13.457281 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4hdk\" (UniqueName: \"kubernetes.io/projected/d3018370-400e-497b-b612-0f8ac987acf7-kube-api-access-x4hdk\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:13.457428 master-0 kubenswrapper[33867]: I0219 03:42:13.457297 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:13.457428 master-0 kubenswrapper[33867]: I0219 03:42:13.457309 33867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3018370-400e-497b-b612-0f8ac987acf7-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:13.963226 master-0 kubenswrapper[33867]: I0219 03:42:13.963111 33867 generic.go:334] "Generic (PLEG): container finished" podID="bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868" containerID="043ca2f83e3081813addfbf725a1ed8a06864b37de7f63177fc379ed22e212b4" exitCode=0 Feb 19 03:42:13.963226 master-0 kubenswrapper[33867]: I0219 03:42:13.963212 33867 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ironic-inspector-0" event={"ID":"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868","Type":"ContainerDied","Data":"043ca2f83e3081813addfbf725a1ed8a06864b37de7f63177fc379ed22e212b4"} Feb 19 03:42:13.967947 master-0 kubenswrapper[33867]: I0219 03:42:13.967901 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" event={"ID":"d3018370-400e-497b-b612-0f8ac987acf7","Type":"ContainerDied","Data":"b8d688f299c94b6361afdbec7c709819df7d9cd27b7f750a945b24aa310d851d"} Feb 19 03:42:13.968018 master-0 kubenswrapper[33867]: I0219 03:42:13.967963 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7989d45967-nbj4z" Feb 19 03:42:13.968718 master-0 kubenswrapper[33867]: I0219 03:42:13.967970 33867 scope.go:117] "RemoveContainer" containerID="d01cd5765bd85ccd47fb141ec184364829833444bed578b10a95e6370705d5cd" Feb 19 03:42:13.998584 master-0 kubenswrapper[33867]: I0219 03:42:13.998528 33867 scope.go:117] "RemoveContainer" containerID="2f6e8cebffc3d8728822bedbebdd8e8be1a9e01d4a4ecc036ab8735295c61532" Feb 19 03:42:14.049427 master-0 kubenswrapper[33867]: I0219 03:42:14.049378 33867 trace.go:236] Trace[1508879930]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (19-Feb-2026 03:42:12.732) (total time: 1316ms): Feb 19 03:42:14.049427 master-0 kubenswrapper[33867]: Trace[1508879930]: [1.316467469s] [1.316467469s] END Feb 19 03:42:14.088656 master-0 kubenswrapper[33867]: I0219 03:42:14.088555 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7989d45967-nbj4z"] Feb 19 03:42:14.115532 master-0 kubenswrapper[33867]: I0219 03:42:14.115447 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7989d45967-nbj4z"] Feb 19 03:42:14.970835 master-0 kubenswrapper[33867]: I0219 03:42:14.970735 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3018370-400e-497b-b612-0f8ac987acf7" path="/var/lib/kubelet/pods/d3018370-400e-497b-b612-0f8ac987acf7/volumes" Feb 19 03:42:16.149846 master-0 kubenswrapper[33867]: I0219 03:42:16.149758 33867 trace.go:236] Trace[2136412628]: "Calculate volume metrics of var-lib-ironic for pod openstack/ironic-conductor-0" (19-Feb-2026 03:42:14.939) (total time: 1209ms): Feb 19 03:42:16.149846 master-0 kubenswrapper[33867]: Trace[2136412628]: [1.209924963s] [1.209924963s] END Feb 19 03:42:16.634832 master-0 kubenswrapper[33867]: I0219 03:42:16.634757 33867 trace.go:236] Trace[1487713880]: "Calculate volume metrics of glance for pod openstack/glance-fa7ca-default-internal-api-0" (19-Feb-2026 03:42:14.939) (total time: 1695ms): Feb 19 03:42:16.634832 master-0 kubenswrapper[33867]: Trace[1487713880]: [1.695508335s] [1.695508335s] END Feb 19 03:42:16.806823 master-0 kubenswrapper[33867]: I0219 03:42:16.806760 33867 trace.go:236] Trace[1096833114]: "Calculate volume metrics of glance for pod openstack/glance-fa7ca-default-external-api-0" (19-Feb-2026 03:42:14.939) (total time: 1867ms): Feb 19 03:42:16.806823 master-0 kubenswrapper[33867]: Trace[1096833114]: [1.867558888s] [1.867558888s] END Feb 19 03:42:28.916589 master-0 kubenswrapper[33867]: I0219 03:42:28.916542 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 19 03:42:29.416679 master-0 kubenswrapper[33867]: I0219 03:42:29.416619 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-nt89l" event={"ID":"89845d0a-587f-448f-802a-16572691093c","Type":"ContainerStarted","Data":"2d88822f7aeaf366f49d7cc01d5ad974851322c8a543dff23f5c1c32aa47c5a1"} Feb 19 03:42:29.447598 master-0 kubenswrapper[33867]: I0219 03:42:29.447419 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-nt89l" podStartSLOduration=7.299716099 podStartE2EDuration="34.447395063s" podCreationTimestamp="2026-02-19 03:41:55 +0000 UTC" firstStartedPulling="2026-02-19 03:42:01.75241666 +0000 UTC m=+1127.049087271" lastFinishedPulling="2026-02-19 03:42:28.900095624 +0000 UTC m=+1154.196766235" observedRunningTime="2026-02-19 03:42:29.444673736 +0000 UTC m=+1154.741344387" watchObservedRunningTime="2026-02-19 03:42:29.447395063 +0000 UTC m=+1154.744065674" Feb 19 03:42:30.438078 master-0 kubenswrapper[33867]: I0219 03:42:30.438014 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"9c830f8b-3d33-4879-91b9-bd374a1e695b","Type":"ContainerStarted","Data":"9378dbf8125e2380afa5b80f8e4f87c3195a20f059f239d40341c53c712b83a0"} Feb 19 03:42:30.442487 master-0 kubenswrapper[33867]: I0219 03:42:30.442415 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868","Type":"ContainerDied","Data":"c6322080c544575f94178326e909e8aab415cfbafc4ca40c9859f86bff9e2f61"} Feb 19 03:42:30.442648 master-0 kubenswrapper[33867]: I0219 03:42:30.442316 33867 generic.go:334] "Generic (PLEG): container finished" podID="bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868" containerID="c6322080c544575f94178326e909e8aab415cfbafc4ca40c9859f86bff9e2f61" exitCode=0 Feb 19 03:42:31.460996 master-0 kubenswrapper[33867]: I0219 03:42:31.460936 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868","Type":"ContainerStarted","Data":"7492ab7b54c2ff95a9aa9afbe56f3636cba782340682ea5f76f09d3a5cf5100a"} Feb 19 03:42:32.493433 master-0 kubenswrapper[33867]: I0219 03:42:32.493381 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868","Type":"ContainerStarted","Data":"046b7b1a511452cd342a533565cac3fc87e7d04d45be7b601beff68f3c2efdeb"} Feb 19 03:42:32.494088 master-0 kubenswrapper[33867]: I0219 03:42:32.494064 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868","Type":"ContainerStarted","Data":"eaed966d70a115fe8dd7f69c24ad9d115c9b41db66c3929b8dcc833a8e91554e"} Feb 19 03:42:33.520653 master-0 kubenswrapper[33867]: I0219 03:42:33.520582 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868","Type":"ContainerStarted","Data":"f2757d413684fa7fe2126c9e8f7bfaf940a09508e4424bae42efe9ee8ae0906f"} Feb 19 03:42:33.520653 master-0 kubenswrapper[33867]: I0219 03:42:33.520638 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868","Type":"ContainerStarted","Data":"cb26b10bc033abc222f3ac24abd3701a9f4f637d98abc610758c14e4a899f919"} Feb 19 03:42:33.521673 master-0 kubenswrapper[33867]: I0219 03:42:33.520902 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 19 03:42:33.521673 
master-0 kubenswrapper[33867]: I0219 03:42:33.521143 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 19 03:42:33.625078 master-0 kubenswrapper[33867]: I0219 03:42:33.624867 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=11.677274258 podStartE2EDuration="26.624836182s" podCreationTimestamp="2026-02-19 03:42:07 +0000 UTC" firstStartedPulling="2026-02-19 03:42:13.964838538 +0000 UTC m=+1139.261509149" lastFinishedPulling="2026-02-19 03:42:28.912400462 +0000 UTC m=+1154.209071073" observedRunningTime="2026-02-19 03:42:33.609760725 +0000 UTC m=+1158.906431346" watchObservedRunningTime="2026-02-19 03:42:33.624836182 +0000 UTC m=+1158.921506793" Feb 19 03:42:36.416370 master-0 kubenswrapper[33867]: I0219 03:42:36.416210 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 19 03:42:36.417333 master-0 kubenswrapper[33867]: I0219 03:42:36.416478 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 19 03:42:36.468327 master-0 kubenswrapper[33867]: I0219 03:42:36.468223 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 19 03:42:37.581569 master-0 kubenswrapper[33867]: I0219 03:42:37.581345 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 19 03:42:41.415888 master-0 kubenswrapper[33867]: I0219 03:42:41.415813 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 19 03:42:41.416684 master-0 kubenswrapper[33867]: I0219 03:42:41.416328 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 19 03:42:41.436593 master-0 kubenswrapper[33867]: I0219 03:42:41.436506 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 19 03:42:41.439341 master-0 kubenswrapper[33867]: I0219 03:42:41.439298 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 19 03:42:41.633630 master-0 kubenswrapper[33867]: I0219 03:42:41.633550 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 19 03:42:41.635936 master-0 kubenswrapper[33867]: I0219 03:42:41.635801 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 19 03:42:47.706641 master-0 kubenswrapper[33867]: I0219 03:42:47.706551 33867 generic.go:334] "Generic (PLEG): container finished" podID="89845d0a-587f-448f-802a-16572691093c" containerID="2d88822f7aeaf366f49d7cc01d5ad974851322c8a543dff23f5c1c32aa47c5a1" exitCode=0 Feb 19 03:42:47.706641 master-0 kubenswrapper[33867]: I0219 03:42:47.706627 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nt89l" event={"ID":"89845d0a-587f-448f-802a-16572691093c","Type":"ContainerDied","Data":"2d88822f7aeaf366f49d7cc01d5ad974851322c8a543dff23f5c1c32aa47c5a1"} Feb 19 03:42:49.193337 master-0 kubenswrapper[33867]: I0219 03:42:49.193280 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:42:49.349069 master-0 kubenswrapper[33867]: I0219 03:42:49.348929 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-config-data\") pod \"89845d0a-587f-448f-802a-16572691093c\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " Feb 19 03:42:49.349069 master-0 kubenswrapper[33867]: I0219 03:42:49.349019 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-scripts\") pod \"89845d0a-587f-448f-802a-16572691093c\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " Feb 19 03:42:49.349340 master-0 kubenswrapper[33867]: I0219 03:42:49.349132 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46p22\" (UniqueName: \"kubernetes.io/projected/89845d0a-587f-448f-802a-16572691093c-kube-api-access-46p22\") pod \"89845d0a-587f-448f-802a-16572691093c\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " Feb 19 03:42:49.349389 master-0 kubenswrapper[33867]: I0219 03:42:49.349343 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-combined-ca-bundle\") pod \"89845d0a-587f-448f-802a-16572691093c\" (UID: \"89845d0a-587f-448f-802a-16572691093c\") " Feb 19 03:42:49.352711 master-0 kubenswrapper[33867]: I0219 03:42:49.352677 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-scripts" (OuterVolumeSpecName: "scripts") pod "89845d0a-587f-448f-802a-16572691093c" (UID: "89845d0a-587f-448f-802a-16572691093c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:42:49.353285 master-0 kubenswrapper[33867]: I0219 03:42:49.353228 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89845d0a-587f-448f-802a-16572691093c-kube-api-access-46p22" (OuterVolumeSpecName: "kube-api-access-46p22") pod "89845d0a-587f-448f-802a-16572691093c" (UID: "89845d0a-587f-448f-802a-16572691093c"). InnerVolumeSpecName "kube-api-access-46p22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:42:49.389998 master-0 kubenswrapper[33867]: I0219 03:42:49.389907 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89845d0a-587f-448f-802a-16572691093c" (UID: "89845d0a-587f-448f-802a-16572691093c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:42:49.406122 master-0 kubenswrapper[33867]: I0219 03:42:49.406043 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-config-data" (OuterVolumeSpecName: "config-data") pod "89845d0a-587f-448f-802a-16572691093c" (UID: "89845d0a-587f-448f-802a-16572691093c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:42:49.453193 master-0 kubenswrapper[33867]: I0219 03:42:49.453095 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:49.453193 master-0 kubenswrapper[33867]: I0219 03:42:49.453164 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:49.453193 master-0 kubenswrapper[33867]: I0219 03:42:49.453186 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46p22\" (UniqueName: \"kubernetes.io/projected/89845d0a-587f-448f-802a-16572691093c-kube-api-access-46p22\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:49.453193 master-0 kubenswrapper[33867]: I0219 03:42:49.453202 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845d0a-587f-448f-802a-16572691093c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:42:49.736036 master-0 kubenswrapper[33867]: I0219 03:42:49.735961 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nt89l" event={"ID":"89845d0a-587f-448f-802a-16572691093c","Type":"ContainerDied","Data":"5eb8d1ffb84a560cb4d2841c232a9d9774b511afb522a47dc95a686f4dfead17"} Feb 19 03:42:49.736036 master-0 kubenswrapper[33867]: I0219 03:42:49.736024 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb8d1ffb84a560cb4d2841c232a9d9774b511afb522a47dc95a686f4dfead17" Feb 19 03:42:49.736430 master-0 kubenswrapper[33867]: I0219 03:42:49.736165 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nt89l" Feb 19 03:42:49.890793 master-0 kubenswrapper[33867]: I0219 03:42:49.890701 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 03:42:49.891481 master-0 kubenswrapper[33867]: E0219 03:42:49.891455 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3018370-400e-497b-b612-0f8ac987acf7" containerName="init" Feb 19 03:42:49.891481 master-0 kubenswrapper[33867]: I0219 03:42:49.891477 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3018370-400e-497b-b612-0f8ac987acf7" containerName="init" Feb 19 03:42:49.891622 master-0 kubenswrapper[33867]: E0219 03:42:49.891490 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3018370-400e-497b-b612-0f8ac987acf7" containerName="dnsmasq-dns" Feb 19 03:42:49.891622 master-0 kubenswrapper[33867]: I0219 03:42:49.891500 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3018370-400e-497b-b612-0f8ac987acf7" containerName="dnsmasq-dns" Feb 19 03:42:49.891622 master-0 kubenswrapper[33867]: E0219 03:42:49.891558 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89845d0a-587f-448f-802a-16572691093c" containerName="nova-cell0-conductor-db-sync" Feb 19 03:42:49.891622 master-0 kubenswrapper[33867]: I0219 03:42:49.891565 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="89845d0a-587f-448f-802a-16572691093c" containerName="nova-cell0-conductor-db-sync" Feb 19 03:42:49.891842 master-0 kubenswrapper[33867]: I0219 03:42:49.891816 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="89845d0a-587f-448f-802a-16572691093c" containerName="nova-cell0-conductor-db-sync" Feb 19 03:42:49.891908 master-0 kubenswrapper[33867]: I0219 03:42:49.891843 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3018370-400e-497b-b612-0f8ac987acf7" containerName="dnsmasq-dns" Feb 19 03:42:49.893245 master-0 kubenswrapper[33867]: I0219 03:42:49.892906 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:49.898678 master-0 kubenswrapper[33867]: I0219 03:42:49.898616 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 03:42:49.920717 master-0 kubenswrapper[33867]: I0219 03:42:49.920506 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 03:42:49.966705 master-0 kubenswrapper[33867]: I0219 03:42:49.966561 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8f8802-8e26-45eb-aef9-8599459686af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5f8f8802-8e26-45eb-aef9-8599459686af\") " pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:49.967360 master-0 kubenswrapper[33867]: I0219 03:42:49.967334 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8f8802-8e26-45eb-aef9-8599459686af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5f8f8802-8e26-45eb-aef9-8599459686af\") " pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:49.967697 master-0 kubenswrapper[33867]: I0219 03:42:49.967677 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmj9r\" (UniqueName: \"kubernetes.io/projected/5f8f8802-8e26-45eb-aef9-8599459686af-kube-api-access-nmj9r\") pod \"nova-cell0-conductor-0\" (UID: \"5f8f8802-8e26-45eb-aef9-8599459686af\") " pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:50.070107 master-0 kubenswrapper[33867]: I0219 03:42:50.069872 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmj9r\" (UniqueName: \"kubernetes.io/projected/5f8f8802-8e26-45eb-aef9-8599459686af-kube-api-access-nmj9r\") pod \"nova-cell0-conductor-0\" (UID: \"5f8f8802-8e26-45eb-aef9-8599459686af\") " pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:50.070107 master-0 kubenswrapper[33867]: I0219 03:42:50.069988 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8f8802-8e26-45eb-aef9-8599459686af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5f8f8802-8e26-45eb-aef9-8599459686af\") " pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:50.070107 master-0 kubenswrapper[33867]: I0219 03:42:50.070098 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8f8802-8e26-45eb-aef9-8599459686af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5f8f8802-8e26-45eb-aef9-8599459686af\") " pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:50.074265 master-0 kubenswrapper[33867]: I0219 03:42:50.074153 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8f8802-8e26-45eb-aef9-8599459686af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5f8f8802-8e26-45eb-aef9-8599459686af\") " pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:50.074491 master-0 kubenswrapper[33867]: I0219 03:42:50.074315 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8f8802-8e26-45eb-aef9-8599459686af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"5f8f8802-8e26-45eb-aef9-8599459686af\") " pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:50.093848 master-0 kubenswrapper[33867]: I0219 03:42:50.093782 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmj9r\" (UniqueName: \"kubernetes.io/projected/5f8f8802-8e26-45eb-aef9-8599459686af-kube-api-access-nmj9r\") pod \"nova-cell0-conductor-0\" (UID: \"5f8f8802-8e26-45eb-aef9-8599459686af\") " pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:50.229642 master-0 kubenswrapper[33867]: I0219 03:42:50.229548 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:50.734230 master-0 kubenswrapper[33867]: I0219 03:42:50.734150 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 03:42:50.744090 master-0 kubenswrapper[33867]: W0219 03:42:50.744043 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f8f8802_8e26_45eb_aef9_8599459686af.slice/crio-fe0d02ce1f39d110d633832ef93b06ce89db1279fc7cc82cda9eccec3b562d79 WatchSource:0}: Error finding container fe0d02ce1f39d110d633832ef93b06ce89db1279fc7cc82cda9eccec3b562d79: Status 404 returned error can't find the container with id fe0d02ce1f39d110d633832ef93b06ce89db1279fc7cc82cda9eccec3b562d79 Feb 19 03:42:51.758583 master-0 kubenswrapper[33867]: I0219 03:42:51.758522 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5f8f8802-8e26-45eb-aef9-8599459686af","Type":"ContainerStarted","Data":"45aa2c473f5afbaf882fed6da7283235d861a4db4b9ffe73106de1ca0d3f3fb0"} Feb 19 03:42:51.758583 master-0 kubenswrapper[33867]: I0219 03:42:51.758581 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5f8f8802-8e26-45eb-aef9-8599459686af","Type":"ContainerStarted","Data":"fe0d02ce1f39d110d633832ef93b06ce89db1279fc7cc82cda9eccec3b562d79"} Feb 19 03:42:51.759215 master-0 kubenswrapper[33867]: I0219 03:42:51.758626 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:55.288103 master-0 kubenswrapper[33867]: I0219 03:42:55.288036 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 03:42:55.344165 master-0 kubenswrapper[33867]: I0219 03:42:55.344071 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=6.34401858 podStartE2EDuration="6.34401858s" podCreationTimestamp="2026-02-19 03:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:42:51.784997623 +0000 UTC m=+1177.081668234" watchObservedRunningTime="2026-02-19 03:42:55.34401858 +0000 UTC m=+1180.640689211" Feb 19 03:42:55.875747 master-0 kubenswrapper[33867]: I0219 03:42:55.875629 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-548gx"] Feb 19 03:42:55.878603 master-0 kubenswrapper[33867]: I0219 03:42:55.878546 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:55.883282 master-0 kubenswrapper[33867]: I0219 03:42:55.883216 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 03:42:55.883576 master-0 kubenswrapper[33867]: I0219 03:42:55.883475 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 03:42:55.904374 master-0 kubenswrapper[33867]: I0219 03:42:55.902973 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-548gx"] Feb 19 03:42:55.954093 master-0 kubenswrapper[33867]: I0219 03:42:55.952861 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-config-data\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:55.954093 master-0 kubenswrapper[33867]: I0219 03:42:55.953075 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqt69\" (UniqueName: \"kubernetes.io/projected/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-kube-api-access-xqt69\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:55.954093 master-0 kubenswrapper[33867]: I0219 03:42:55.953106 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-scripts\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:55.954093 master-0 kubenswrapper[33867]: I0219 03:42:55.953134 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:55.957094 master-0 kubenswrapper[33867]: I0219 03:42:55.956997 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 19 03:42:55.959131 master-0 kubenswrapper[33867]: I0219 03:42:55.959084 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:42:55.964886 master-0 kubenswrapper[33867]: I0219 03:42:55.963647 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Feb 19 03:42:56.007705 master-0 kubenswrapper[33867]: I0219 03:42:56.005087 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.116474 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85xfj\" (UniqueName: \"kubernetes.io/projected/23d36214-70ab-4c0a-837d-5a5585b130ac-kube-api-access-85xfj\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"23d36214-70ab-4c0a-837d-5a5585b130ac\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.116773 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d36214-70ab-4c0a-837d-5a5585b130ac-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"23d36214-70ab-4c0a-837d-5a5585b130ac\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.117137 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqt69\" (UniqueName: \"kubernetes.io/projected/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-kube-api-access-xqt69\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.117178 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-scripts\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.117232 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.122050 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.126125 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-scripts\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.126474 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/23d36214-70ab-4c0a-837d-5a5585b130ac-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"23d36214-70ab-4c0a-837d-5a5585b130ac\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.126615 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-config-data\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.130949 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-config-data\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.131015 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.133537 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:42:56.141483 master-0 kubenswrapper[33867]: I0219 03:42:56.136817 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 03:42:56.143280 master-0 kubenswrapper[33867]: I0219 03:42:56.143200 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:42:56.144104 master-0 kubenswrapper[33867]: I0219 03:42:56.144074 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqt69\" (UniqueName: \"kubernetes.io/projected/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-kube-api-access-xqt69\") pod \"nova-cell0-cell-mapping-548gx\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:56.234759 master-0 kubenswrapper[33867]: I0219 03:42:56.233507 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:42:56.235611 master-0 kubenswrapper[33867]: I0219 03:42:56.235569 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d36214-70ab-4c0a-837d-5a5585b130ac-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"23d36214-70ab-4c0a-837d-5a5585b130ac\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:42:56.235759 master-0 kubenswrapper[33867]: I0219 03:42:56.235730 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.246342 master-0 kubenswrapper[33867]: I0219 03:42:56.244005 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:42:56.246342 master-0 kubenswrapper[33867]: I0219 03:42:56.244697 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d36214-70ab-4c0a-837d-5a5585b130ac-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"23d36214-70ab-4c0a-837d-5a5585b130ac\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:42:56.246342 master-0 kubenswrapper[33867]: I0219 03:42:56.244869 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-config-data\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.246342 master-0 kubenswrapper[33867]: I0219 03:42:56.244952 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85xfj\" (UniqueName: \"kubernetes.io/projected/23d36214-70ab-4c0a-837d-5a5585b130ac-kube-api-access-85xfj\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"23d36214-70ab-4c0a-837d-5a5585b130ac\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:42:56.249184 master-0 kubenswrapper[33867]: I0219 03:42:56.247866 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 03:42:56.249184 master-0 kubenswrapper[33867]: I0219 03:42:56.248628 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvcms\" (UniqueName: \"kubernetes.io/projected/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-kube-api-access-xvcms\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.249184 master-0 kubenswrapper[33867]: I0219 03:42:56.248846 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-logs\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.249184 master-0 kubenswrapper[33867]: I0219 03:42:56.249072 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:42:56.249184 master-0 kubenswrapper[33867]: I0219 03:42:56.249120 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d36214-70ab-4c0a-837d-5a5585b130ac-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"23d36214-70ab-4c0a-837d-5a5585b130ac\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:42:56.266439 master-0 kubenswrapper[33867]: I0219 03:42:56.262615 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d36214-70ab-4c0a-837d-5a5585b130ac-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"23d36214-70ab-4c0a-837d-5a5585b130ac\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:42:56.266439 master-0 kubenswrapper[33867]: I0219 03:42:56.262991 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:42:56.308285 master-0 kubenswrapper[33867]: I0219 03:42:56.298746 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85xfj\" (UniqueName: \"kubernetes.io/projected/23d36214-70ab-4c0a-837d-5a5585b130ac-kube-api-access-85xfj\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"23d36214-70ab-4c0a-837d-5a5585b130ac\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:42:56.358276 master-0 kubenswrapper[33867]: I0219 03:42:56.354072 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-config-data\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.358276 master-0 kubenswrapper[33867]: I0219 03:42:56.354144 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvcms\" (UniqueName: \"kubernetes.io/projected/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-kube-api-access-xvcms\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.358276 master-0 kubenswrapper[33867]: I0219 03:42:56.354628 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.358276 master-0 kubenswrapper[33867]: I0219 03:42:56.354661 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-logs\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.358276 master-0 kubenswrapper[33867]: I0219 03:42:56.354862 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.358276 master-0 kubenswrapper[33867]: I0219 03:42:56.354957 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbg5d\" (UniqueName: \"kubernetes.io/projected/80553608-421f-443b-b2c4-bfcb0ab7cf70-kube-api-access-hbg5d\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.358276 master-0 kubenswrapper[33867]: I0219 03:42:56.355068 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80553608-421f-443b-b2c4-bfcb0ab7cf70-logs\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.358276 master-0 kubenswrapper[33867]: I0219 03:42:56.355084 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-config-data\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.358276 
master-0 kubenswrapper[33867]: I0219 03:42:56.356581 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-logs\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.370210 master-0 kubenswrapper[33867]: I0219 03:42:56.365219 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-config-data\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.373285 master-0 kubenswrapper[33867]: I0219 03:42:56.372400 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.430522 master-0 kubenswrapper[33867]: I0219 03:42:56.429699 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvcms\" (UniqueName: \"kubernetes.io/projected/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-kube-api-access-xvcms\") pod \"nova-api-0\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " pod="openstack/nova-api-0" Feb 19 03:42:56.440736 master-0 kubenswrapper[33867]: I0219 03:42:56.438783 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:42:56.440736 master-0 kubenswrapper[33867]: I0219 03:42:56.440702 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 03:42:56.445672 master-0 kubenswrapper[33867]: I0219 03:42:56.445636 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 03:42:56.457289 master-0 kubenswrapper[33867]: I0219 03:42:56.457222 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbg5d\" (UniqueName: \"kubernetes.io/projected/80553608-421f-443b-b2c4-bfcb0ab7cf70-kube-api-access-hbg5d\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.457698 master-0 kubenswrapper[33867]: I0219 03:42:56.457676 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80553608-421f-443b-b2c4-bfcb0ab7cf70-logs\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.457798 master-0 kubenswrapper[33867]: I0219 03:42:56.457785 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-config-data\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.457955 master-0 kubenswrapper[33867]: I0219 03:42:56.457937 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.464431 master-0 kubenswrapper[33867]: I0219 03:42:56.464368 33867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80553608-421f-443b-b2c4-bfcb0ab7cf70-logs\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.465237 master-0 kubenswrapper[33867]: I0219 03:42:56.464450 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9c88576cf-mrwrb"] Feb 19 03:42:56.466954 master-0 kubenswrapper[33867]: I0219 03:42:56.466920 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.476297 master-0 kubenswrapper[33867]: I0219 03:42:56.474110 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.478410 master-0 kubenswrapper[33867]: I0219 03:42:56.476849 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-config-data\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.497648 master-0 kubenswrapper[33867]: I0219 03:42:56.493966 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:42:56.504907 master-0 kubenswrapper[33867]: I0219 03:42:56.502932 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbg5d\" (UniqueName: \"kubernetes.io/projected/80553608-421f-443b-b2c4-bfcb0ab7cf70-kube-api-access-hbg5d\") pod \"nova-metadata-0\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " pod="openstack/nova-metadata-0" Feb 19 03:42:56.532784 master-0 kubenswrapper[33867]: I0219 03:42:56.517540 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:42:56.532784 master-0 kubenswrapper[33867]: I0219 03:42:56.525713 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 03:42:56.532784 master-0 kubenswrapper[33867]: I0219 03:42:56.527429 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:42:56.532784 master-0 kubenswrapper[33867]: I0219 03:42:56.530112 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 03:42:56.545416 master-0 kubenswrapper[33867]: I0219 03:42:56.544224 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9c88576cf-mrwrb"] Feb 19 03:42:56.559079 master-0 kubenswrapper[33867]: I0219 03:42:56.559011 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 03:42:56.560655 master-0 kubenswrapper[33867]: I0219 03:42:56.560600 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " pod="openstack/nova-scheduler-0" Feb 19 03:42:56.560756 master-0 kubenswrapper[33867]: I0219 03:42:56.560673 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl4fw\" (UniqueName: \"kubernetes.io/projected/9ce13545-41e9-40c6-9719-6aff7d61041d-kube-api-access-fl4fw\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.560756 master-0 kubenswrapper[33867]: I0219 03:42:56.560698 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5zhf\" (UniqueName: \"kubernetes.io/projected/4fe3361f-a6c5-4180-b26f-03763a4c8db6-kube-api-access-j5zhf\") pod \"nova-cell1-novncproxy-0\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:42:56.560756 master-0 kubenswrapper[33867]: I0219 03:42:56.560730 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zv9s\" (UniqueName: \"kubernetes.io/projected/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-kube-api-access-2zv9s\") pod \"nova-scheduler-0\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " pod="openstack/nova-scheduler-0" Feb 19 03:42:56.560756 master-0 kubenswrapper[33867]: I0219 03:42:56.560758 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-swift-storage-0\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.560921 master-0 kubenswrapper[33867]: I0219 03:42:56.560788 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-svc\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.560921 master-0 kubenswrapper[33867]: I0219 03:42:56.560856 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-nb\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 
03:42:56.560921 master-0 kubenswrapper[33867]: I0219 03:42:56.560894 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-sb\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.561049 master-0 kubenswrapper[33867]: I0219 03:42:56.560926 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-config-data\") pod \"nova-scheduler-0\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " pod="openstack/nova-scheduler-0" Feb 19 03:42:56.561049 master-0 kubenswrapper[33867]: I0219 03:42:56.560959 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:42:56.561049 master-0 kubenswrapper[33867]: I0219 03:42:56.560981 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:42:56.561049 master-0 kubenswrapper[33867]: I0219 03:42:56.561002 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-config\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.598665 master-0 kubenswrapper[33867]: I0219 03:42:56.597897 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:42:56.605778 master-0 kubenswrapper[33867]: I0219 03:42:56.605727 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.676490 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " pod="openstack/nova-scheduler-0" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.676713 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl4fw\" (UniqueName: \"kubernetes.io/projected/9ce13545-41e9-40c6-9719-6aff7d61041d-kube-api-access-fl4fw\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.676760 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5zhf\" (UniqueName: \"kubernetes.io/projected/4fe3361f-a6c5-4180-b26f-03763a4c8db6-kube-api-access-j5zhf\") pod \"nova-cell1-novncproxy-0\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.676853 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zv9s\" (UniqueName: \"kubernetes.io/projected/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-kube-api-access-2zv9s\") pod \"nova-scheduler-0\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " pod="openstack/nova-scheduler-0" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.676905 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-swift-storage-0\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.676970 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-svc\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.678018 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-svc\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.678413 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-nb\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.678523 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-sb\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: 
\"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.678591 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-config-data\") pod \"nova-scheduler-0\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " pod="openstack/nova-scheduler-0" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.678655 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.678683 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.678720 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-config\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.682933 master-0 kubenswrapper[33867]: I0219 03:42:56.680053 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-swift-storage-0\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.705205 master-0 kubenswrapper[33867]: I0219 03:42:56.705109 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-sb\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.705799 master-0 kubenswrapper[33867]: I0219 03:42:56.705750 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-config\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.719549 master-0 kubenswrapper[33867]: I0219 03:42:56.711059 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-nb\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.719549 master-0 kubenswrapper[33867]: I0219 03:42:56.715541 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:42:56.721346 master-0 kubenswrapper[33867]: I0219 03:42:56.720958 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " pod="openstack/nova-scheduler-0" Feb 19 03:42:56.727409 master-0 kubenswrapper[33867]: I0219 03:42:56.727366 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-config-data\") pod \"nova-scheduler-0\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " pod="openstack/nova-scheduler-0" Feb 19 03:42:56.730738 master-0 kubenswrapper[33867]: I0219 03:42:56.730204 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5zhf\" (UniqueName: \"kubernetes.io/projected/4fe3361f-a6c5-4180-b26f-03763a4c8db6-kube-api-access-j5zhf\") pod \"nova-cell1-novncproxy-0\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:42:56.732755 master-0 kubenswrapper[33867]: I0219 03:42:56.732711 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zv9s\" (UniqueName: \"kubernetes.io/projected/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-kube-api-access-2zv9s\") pod \"nova-scheduler-0\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " pod="openstack/nova-scheduler-0" Feb 19 03:42:56.734356 master-0 kubenswrapper[33867]: I0219 03:42:56.734216 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl4fw\" (UniqueName: \"kubernetes.io/projected/9ce13545-41e9-40c6-9719-6aff7d61041d-kube-api-access-fl4fw\") pod \"dnsmasq-dns-9c88576cf-mrwrb\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:56.745883 master-0 kubenswrapper[33867]: I0219 03:42:56.745802 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:42:57.008868 master-0 kubenswrapper[33867]: I0219 03:42:57.007911 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 03:42:57.027993 master-0 kubenswrapper[33867]: I0219 03:42:57.027830 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:42:57.046400 master-0 kubenswrapper[33867]: I0219 03:42:57.045595 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:42:57.058721 master-0 kubenswrapper[33867]: I0219 03:42:57.058574 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-47sq4"] Feb 19 03:42:57.061281 master-0 kubenswrapper[33867]: I0219 03:42:57.060629 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.071743 master-0 kubenswrapper[33867]: I0219 03:42:57.070867 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 03:42:57.071743 master-0 kubenswrapper[33867]: I0219 03:42:57.071312 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 03:42:57.099350 master-0 kubenswrapper[33867]: I0219 03:42:57.099186 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-47sq4"] Feb 19 03:42:57.155331 master-0 kubenswrapper[33867]: W0219 03:42:57.127712 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3ac7e83_4fa3_459f_aa01_c4c5950264f0.slice/crio-4a4bd37410b2157cd4649de1ec921a66feb646a40d5e092b134dcf8f8a532bc9 WatchSource:0}: Error finding container 4a4bd37410b2157cd4649de1ec921a66feb646a40d5e092b134dcf8f8a532bc9: Status 404 returned error can't find the container with id 4a4bd37410b2157cd4649de1ec921a66feb646a40d5e092b134dcf8f8a532bc9 Feb 19 03:42:57.159270 master-0 kubenswrapper[33867]: I0219 03:42:57.156129 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tktrp\" (UniqueName: \"kubernetes.io/projected/2a82b2c2-4eab-407e-a67e-07ecc654db86-kube-api-access-tktrp\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.159270 master-0 kubenswrapper[33867]: I0219 03:42:57.156267 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-config-data\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.159270 master-0 kubenswrapper[33867]: I0219 03:42:57.156351 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.159270 master-0 kubenswrapper[33867]: I0219 03:42:57.156761 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-548gx"] Feb 19 03:42:57.159270 master-0 kubenswrapper[33867]: I0219 03:42:57.157189 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-scripts\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.264279 master-0 kubenswrapper[33867]: I0219 03:42:57.264095 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.264633 master-0 
kubenswrapper[33867]: I0219 03:42:57.264601 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-scripts\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.264848 master-0 kubenswrapper[33867]: I0219 03:42:57.264815 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tktrp\" (UniqueName: \"kubernetes.io/projected/2a82b2c2-4eab-407e-a67e-07ecc654db86-kube-api-access-tktrp\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.264923 master-0 kubenswrapper[33867]: I0219 03:42:57.264860 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-config-data\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.275797 master-0 kubenswrapper[33867]: I0219 03:42:57.275655 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-config-data\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.276033 master-0 kubenswrapper[33867]: I0219 03:42:57.275931 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.283463 master-0 kubenswrapper[33867]: I0219 03:42:57.282806 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-scripts\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.285878 master-0 kubenswrapper[33867]: I0219 03:42:57.284926 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tktrp\" (UniqueName: \"kubernetes.io/projected/2a82b2c2-4eab-407e-a67e-07ecc654db86-kube-api-access-tktrp\") pod \"nova-cell1-conductor-db-sync-47sq4\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.386268 master-0 kubenswrapper[33867]: I0219 03:42:57.386190 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:42:57.397775 master-0 kubenswrapper[33867]: I0219 03:42:57.397700 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 19 03:42:57.435356 master-0 kubenswrapper[33867]: I0219 03:42:57.435297 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:42:57.479705 master-0 kubenswrapper[33867]: W0219 03:42:57.479656 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23d36214_70ab_4c0a_837d_5a5585b130ac.slice/crio-3bd50491708b9f016d02355fe77c84c22b2d6eea2e0c1a55e9362144d2ce1d3f WatchSource:0}: Error finding container 3bd50491708b9f016d02355fe77c84c22b2d6eea2e0c1a55e9362144d2ce1d3f: Status 404 returned error can't find the container with id 3bd50491708b9f016d02355fe77c84c22b2d6eea2e0c1a55e9362144d2ce1d3f Feb 19 03:42:57.554934 master-0 kubenswrapper[33867]: I0219 03:42:57.543160 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:42:57.874333 master-0 kubenswrapper[33867]: I0219 03:42:57.873727 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:42:57.923507 master-0 kubenswrapper[33867]: I0219 03:42:57.923431 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 03:42:57.929417 master-0 kubenswrapper[33867]: I0219 03:42:57.927389 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566","Type":"ContainerStarted","Data":"089c9218424172930a2368694742883f34cf82a2f07f8dd7a69ca346c884fb57"} Feb 19 03:42:57.933009 master-0 kubenswrapper[33867]: I0219 03:42:57.932546 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80553608-421f-443b-b2c4-bfcb0ab7cf70","Type":"ContainerStarted","Data":"3febd17659981ada1ca50eb1bd9e5d90905284fb7005118fbd05582a7dfb1e29"} Feb 19 03:42:57.936874 master-0 kubenswrapper[33867]: I0219 03:42:57.934577 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-548gx" event={"ID":"e3ac7e83-4fa3-459f-aa01-c4c5950264f0","Type":"ContainerStarted","Data":"183c84739896a2a05db42b3b58f40c7fd5146e6e44e8ebcd7dc0af1107227754"} Feb 19 03:42:57.936874 master-0 kubenswrapper[33867]: I0219 03:42:57.934614 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-548gx" event={"ID":"e3ac7e83-4fa3-459f-aa01-c4c5950264f0","Type":"ContainerStarted","Data":"4a4bd37410b2157cd4649de1ec921a66feb646a40d5e092b134dcf8f8a532bc9"} Feb 19 03:42:57.955462 master-0 kubenswrapper[33867]: I0219 03:42:57.951513 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"23d36214-70ab-4c0a-837d-5a5585b130ac","Type":"ContainerStarted","Data":"3bd50491708b9f016d02355fe77c84c22b2d6eea2e0c1a55e9362144d2ce1d3f"} Feb 19 03:42:57.955462 master-0 kubenswrapper[33867]: I0219 03:42:57.953376 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eeb12d48-8381-49f0-a9c1-7cc46b857a0a","Type":"ContainerStarted","Data":"55ded175ca188bacd7ef5670ac4a90b61a1221935660e6b7416447806b5c8475"} Feb 19 03:42:57.981930 master-0 kubenswrapper[33867]: I0219 03:42:57.980235 33867 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-cell0-cell-mapping-548gx" podStartSLOduration=2.980209383 podStartE2EDuration="2.980209383s" podCreationTimestamp="2026-02-19 03:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:42:57.964996082 +0000 UTC m=+1183.261666703" watchObservedRunningTime="2026-02-19 03:42:57.980209383 +0000 UTC m=+1183.276879994" Feb 19 03:42:58.080382 master-0 kubenswrapper[33867]: I0219 03:42:58.079672 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9c88576cf-mrwrb"] Feb 19 03:42:58.086005 master-0 kubenswrapper[33867]: W0219 03:42:58.084610 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce13545_41e9_40c6_9719_6aff7d61041d.slice/crio-99305ba67e3599b68509f524af9e15bd1ad362d89e7606900cabb6805bf7b793 WatchSource:0}: Error finding container 99305ba67e3599b68509f524af9e15bd1ad362d89e7606900cabb6805bf7b793: Status 404 returned error can't find the container with id 99305ba67e3599b68509f524af9e15bd1ad362d89e7606900cabb6805bf7b793 Feb 19 03:42:58.212993 master-0 kubenswrapper[33867]: W0219 03:42:58.212936 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a82b2c2_4eab_407e_a67e_07ecc654db86.slice/crio-f90cb241a8a537e676f46e35baa935de92fc403909ef39d4e34d4615831cb804 WatchSource:0}: Error finding container f90cb241a8a537e676f46e35baa935de92fc403909ef39d4e34d4615831cb804: Status 404 returned error can't find the container with id f90cb241a8a537e676f46e35baa935de92fc403909ef39d4e34d4615831cb804 Feb 19 03:42:58.234074 master-0 kubenswrapper[33867]: I0219 03:42:58.234001 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-47sq4"] Feb 19 03:42:58.985533 master-0 kubenswrapper[33867]: I0219 03:42:58.985465 33867 generic.go:334] "Generic (PLEG): container finished" podID="9ce13545-41e9-40c6-9719-6aff7d61041d" containerID="9d52ff981af2e54870ce5b3af090f415adfd0234976aa108dfb36b235caa1567" exitCode=0 Feb 19 03:42:58.987246 master-0 kubenswrapper[33867]: I0219 03:42:58.987096 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4fe3361f-a6c5-4180-b26f-03763a4c8db6","Type":"ContainerStarted","Data":"f27dc9b15b5203b75b4dbebca8152c85cb4ede867b0640b52fb50d9dc55cc724"} Feb 19 03:42:58.987246 master-0 kubenswrapper[33867]: I0219 03:42:58.987145 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" event={"ID":"9ce13545-41e9-40c6-9719-6aff7d61041d","Type":"ContainerDied","Data":"9d52ff981af2e54870ce5b3af090f415adfd0234976aa108dfb36b235caa1567"} Feb 19 03:42:58.987246 master-0 kubenswrapper[33867]: I0219 03:42:58.987162 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" event={"ID":"9ce13545-41e9-40c6-9719-6aff7d61041d","Type":"ContainerStarted","Data":"99305ba67e3599b68509f524af9e15bd1ad362d89e7606900cabb6805bf7b793"} Feb 19 03:42:59.000644 master-0 kubenswrapper[33867]: I0219 03:42:59.000599 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-47sq4" event={"ID":"2a82b2c2-4eab-407e-a67e-07ecc654db86","Type":"ContainerStarted","Data":"2095aca31c73a2200bb902b29ae7ea255905655c0cc23ac246cbeb8321f223ce"} Feb 19 03:42:59.000802 master-0 
kubenswrapper[33867]: I0219 03:42:59.000647 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-47sq4" event={"ID":"2a82b2c2-4eab-407e-a67e-07ecc654db86","Type":"ContainerStarted","Data":"f90cb241a8a537e676f46e35baa935de92fc403909ef39d4e34d4615831cb804"} Feb 19 03:42:59.049460 master-0 kubenswrapper[33867]: I0219 03:42:59.049350 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-47sq4" podStartSLOduration=3.049321069 podStartE2EDuration="3.049321069s" podCreationTimestamp="2026-02-19 03:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:42:59.031626468 +0000 UTC m=+1184.328297079" watchObservedRunningTime="2026-02-19 03:42:59.049321069 +0000 UTC m=+1184.345991680" Feb 19 03:43:00.088849 master-0 kubenswrapper[33867]: I0219 03:43:00.088773 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:00.129087 master-0 kubenswrapper[33867]: I0219 03:43:00.127579 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 03:43:02.077288 master-0 kubenswrapper[33867]: I0219 03:43:02.076629 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="80553608-421f-443b-b2c4-bfcb0ab7cf70" containerName="nova-metadata-log" containerID="cri-o://d298a6d08aafe6247a7aa171f663a82b0932d6ff8430235687a5f301b0e83704" gracePeriod=30 Feb 19 03:43:02.077288 master-0 kubenswrapper[33867]: I0219 03:43:02.076985 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80553608-421f-443b-b2c4-bfcb0ab7cf70","Type":"ContainerStarted","Data":"e80290ced589de13c0e29cb39c0e1a78290fcfd671241a87c1da6c4e71bdeb28"} Feb 19 03:43:02.077288 master-0 kubenswrapper[33867]: I0219 03:43:02.077019 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80553608-421f-443b-b2c4-bfcb0ab7cf70","Type":"ContainerStarted","Data":"d298a6d08aafe6247a7aa171f663a82b0932d6ff8430235687a5f301b0e83704"} Feb 19 03:43:02.078037 master-0 kubenswrapper[33867]: I0219 03:43:02.077555 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="80553608-421f-443b-b2c4-bfcb0ab7cf70" containerName="nova-metadata-metadata" containerID="cri-o://e80290ced589de13c0e29cb39c0e1a78290fcfd671241a87c1da6c4e71bdeb28" gracePeriod=30 Feb 19 03:43:02.082285 master-0 kubenswrapper[33867]: I0219 03:43:02.081013 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4fe3361f-a6c5-4180-b26f-03763a4c8db6","Type":"ContainerStarted","Data":"01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8"} Feb 19 03:43:02.082285 master-0 kubenswrapper[33867]: I0219 03:43:02.081124 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4fe3361f-a6c5-4180-b26f-03763a4c8db6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8" gracePeriod=30 Feb 19 03:43:02.085639 master-0 kubenswrapper[33867]: I0219 03:43:02.085559 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"eeb12d48-8381-49f0-a9c1-7cc46b857a0a","Type":"ContainerStarted","Data":"36e04d2f67cdcef7a15e2202ca0fa5543f94c69cb153ee8072059682c6f42f9c"} Feb 19 03:43:02.085639 master-0 kubenswrapper[33867]: I0219 03:43:02.085625 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eeb12d48-8381-49f0-a9c1-7cc46b857a0a","Type":"ContainerStarted","Data":"0c189e70ec793f01884591ae45b72258d58877883b555cf18a0b2c67b43b4a68"} Feb 19 03:43:02.090530 master-0 kubenswrapper[33867]: I0219 03:43:02.089481 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" event={"ID":"9ce13545-41e9-40c6-9719-6aff7d61041d","Type":"ContainerStarted","Data":"3af8fae0acb961ada9ace29d2211091de753e4a86e10f0ea515b0f365b204645"} Feb 19 03:43:02.091105 master-0 kubenswrapper[33867]: I0219 03:43:02.090698 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:43:02.096574 master-0 kubenswrapper[33867]: I0219 03:43:02.095586 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566","Type":"ContainerStarted","Data":"6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9"} Feb 19 03:43:02.105285 master-0 kubenswrapper[33867]: I0219 03:43:02.104557 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.909715485 podStartE2EDuration="6.104538928s" podCreationTimestamp="2026-02-19 03:42:56 +0000 UTC" firstStartedPulling="2026-02-19 03:42:57.572855517 +0000 UTC m=+1182.869526128" lastFinishedPulling="2026-02-19 03:43:00.76767896 +0000 UTC m=+1186.064349571" observedRunningTime="2026-02-19 03:43:02.100059131 +0000 UTC m=+1187.396729742" watchObservedRunningTime="2026-02-19 03:43:02.104538928 +0000 UTC m=+1187.401209539" Feb 19 03:43:02.132507 master-0 kubenswrapper[33867]: I0219 03:43:02.131673 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.2581522720000002 podStartE2EDuration="6.131652726s" podCreationTimestamp="2026-02-19 03:42:56 +0000 UTC" firstStartedPulling="2026-02-19 03:42:57.894179316 +0000 UTC m=+1183.190849927" lastFinishedPulling="2026-02-19 03:43:00.76767977 +0000 UTC m=+1186.064350381" observedRunningTime="2026-02-19 03:43:02.121733295 +0000 UTC m=+1187.418403906" watchObservedRunningTime="2026-02-19 03:43:02.131652726 +0000 UTC m=+1187.428323337" Feb 19 03:43:02.190281 master-0 kubenswrapper[33867]: I0219 03:43:02.189286 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.909188621 podStartE2EDuration="6.189238597s" podCreationTimestamp="2026-02-19 03:42:56 +0000 UTC" firstStartedPulling="2026-02-19 03:42:57.487785538 +0000 UTC m=+1182.784456149" lastFinishedPulling="2026-02-19 03:43:00.767835514 +0000 UTC m=+1186.064506125" observedRunningTime="2026-02-19 03:43:02.147761402 +0000 UTC m=+1187.444432013" watchObservedRunningTime="2026-02-19 03:43:02.189238597 +0000 UTC m=+1187.485909208" Feb 19 03:43:02.229291 master-0 kubenswrapper[33867]: I0219 03:43:02.220982 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.361699275 podStartE2EDuration="6.220957575s" podCreationTimestamp="2026-02-19 03:42:56 +0000 UTC" firstStartedPulling="2026-02-19 03:42:57.908626326 +0000 UTC 
m=+1183.205296937" lastFinishedPulling="2026-02-19 03:43:00.767884626 +0000 UTC m=+1186.064555237" observedRunningTime="2026-02-19 03:43:02.170554648 +0000 UTC m=+1187.467225269" watchObservedRunningTime="2026-02-19 03:43:02.220957575 +0000 UTC m=+1187.517628186" Feb 19 03:43:02.229291 master-0 kubenswrapper[33867]: I0219 03:43:02.227964 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" podStartSLOduration=6.227942673 podStartE2EDuration="6.227942673s" podCreationTimestamp="2026-02-19 03:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:02.206985249 +0000 UTC m=+1187.503655880" watchObservedRunningTime="2026-02-19 03:43:02.227942673 +0000 UTC m=+1187.524613284" Feb 19 03:43:03.120408 master-0 kubenswrapper[33867]: I0219 03:43:03.120288 33867 generic.go:334] "Generic (PLEG): container finished" podID="80553608-421f-443b-b2c4-bfcb0ab7cf70" containerID="e80290ced589de13c0e29cb39c0e1a78290fcfd671241a87c1da6c4e71bdeb28" exitCode=0 Feb 19 03:43:03.120408 master-0 kubenswrapper[33867]: I0219 03:43:03.120379 33867 generic.go:334] "Generic (PLEG): container finished" podID="80553608-421f-443b-b2c4-bfcb0ab7cf70" containerID="d298a6d08aafe6247a7aa171f663a82b0932d6ff8430235687a5f301b0e83704" exitCode=143 Feb 19 03:43:03.120408 master-0 kubenswrapper[33867]: I0219 03:43:03.120384 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80553608-421f-443b-b2c4-bfcb0ab7cf70","Type":"ContainerDied","Data":"e80290ced589de13c0e29cb39c0e1a78290fcfd671241a87c1da6c4e71bdeb28"} Feb 19 03:43:03.121362 master-0 kubenswrapper[33867]: I0219 03:43:03.120456 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80553608-421f-443b-b2c4-bfcb0ab7cf70","Type":"ContainerDied","Data":"d298a6d08aafe6247a7aa171f663a82b0932d6ff8430235687a5f301b0e83704"} Feb 19 03:43:03.769347 master-0 kubenswrapper[33867]: I0219 03:43:03.766947 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:03.967226 master-0 kubenswrapper[33867]: I0219 03:43:03.967171 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80553608-421f-443b-b2c4-bfcb0ab7cf70-logs\") pod \"80553608-421f-443b-b2c4-bfcb0ab7cf70\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " Feb 19 03:43:03.967643 master-0 kubenswrapper[33867]: I0219 03:43:03.967614 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-config-data\") pod \"80553608-421f-443b-b2c4-bfcb0ab7cf70\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " Feb 19 03:43:03.967685 master-0 kubenswrapper[33867]: I0219 03:43:03.967660 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbg5d\" (UniqueName: \"kubernetes.io/projected/80553608-421f-443b-b2c4-bfcb0ab7cf70-kube-api-access-hbg5d\") pod \"80553608-421f-443b-b2c4-bfcb0ab7cf70\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " Feb 19 03:43:03.967724 master-0 kubenswrapper[33867]: I0219 03:43:03.967713 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-combined-ca-bundle\") pod \"80553608-421f-443b-b2c4-bfcb0ab7cf70\" (UID: \"80553608-421f-443b-b2c4-bfcb0ab7cf70\") " Feb 19 03:43:03.969620 master-0 kubenswrapper[33867]: I0219 03:43:03.969578 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80553608-421f-443b-b2c4-bfcb0ab7cf70-logs" (OuterVolumeSpecName: "logs") pod "80553608-421f-443b-b2c4-bfcb0ab7cf70" (UID: "80553608-421f-443b-b2c4-bfcb0ab7cf70"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:43:03.973577 master-0 kubenswrapper[33867]: I0219 03:43:03.973537 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80553608-421f-443b-b2c4-bfcb0ab7cf70-kube-api-access-hbg5d" (OuterVolumeSpecName: "kube-api-access-hbg5d") pod "80553608-421f-443b-b2c4-bfcb0ab7cf70" (UID: "80553608-421f-443b-b2c4-bfcb0ab7cf70"). InnerVolumeSpecName "kube-api-access-hbg5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:04.000975 master-0 kubenswrapper[33867]: I0219 03:43:04.000910 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-config-data" (OuterVolumeSpecName: "config-data") pod "80553608-421f-443b-b2c4-bfcb0ab7cf70" (UID: "80553608-421f-443b-b2c4-bfcb0ab7cf70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:04.002429 master-0 kubenswrapper[33867]: I0219 03:43:04.002373 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80553608-421f-443b-b2c4-bfcb0ab7cf70" (UID: "80553608-421f-443b-b2c4-bfcb0ab7cf70"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:04.070975 master-0 kubenswrapper[33867]: I0219 03:43:04.070910 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:04.070975 master-0 kubenswrapper[33867]: I0219 03:43:04.070959 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80553608-421f-443b-b2c4-bfcb0ab7cf70-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:04.070975 master-0 kubenswrapper[33867]: I0219 03:43:04.070970 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80553608-421f-443b-b2c4-bfcb0ab7cf70-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:04.070975 master-0 kubenswrapper[33867]: I0219 03:43:04.070988 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbg5d\" (UniqueName: \"kubernetes.io/projected/80553608-421f-443b-b2c4-bfcb0ab7cf70-kube-api-access-hbg5d\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:04.142911 master-0 kubenswrapper[33867]: I0219 03:43:04.142856 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:04.143661 master-0 kubenswrapper[33867]: I0219 03:43:04.143602 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80553608-421f-443b-b2c4-bfcb0ab7cf70","Type":"ContainerDied","Data":"3febd17659981ada1ca50eb1bd9e5d90905284fb7005118fbd05582a7dfb1e29"} Feb 19 03:43:04.143730 master-0 kubenswrapper[33867]: I0219 03:43:04.143678 33867 scope.go:117] "RemoveContainer" containerID="e80290ced589de13c0e29cb39c0e1a78290fcfd671241a87c1da6c4e71bdeb28" Feb 19 03:43:04.200191 master-0 kubenswrapper[33867]: I0219 03:43:04.198443 33867 scope.go:117] "RemoveContainer" containerID="d298a6d08aafe6247a7aa171f663a82b0932d6ff8430235687a5f301b0e83704" Feb 19 03:43:04.213281 master-0 kubenswrapper[33867]: I0219 03:43:04.211036 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:04.237320 master-0 kubenswrapper[33867]: I0219 03:43:04.237215 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:04.271640 master-0 kubenswrapper[33867]: I0219 03:43:04.269460 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:04.271640 master-0 kubenswrapper[33867]: E0219 03:43:04.270119 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80553608-421f-443b-b2c4-bfcb0ab7cf70" containerName="nova-metadata-log" Feb 19 03:43:04.271640 master-0 kubenswrapper[33867]: I0219 03:43:04.270134 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="80553608-421f-443b-b2c4-bfcb0ab7cf70" containerName="nova-metadata-log" Feb 19 03:43:04.271640 master-0 kubenswrapper[33867]: E0219 03:43:04.270213 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80553608-421f-443b-b2c4-bfcb0ab7cf70" containerName="nova-metadata-metadata" Feb 19 03:43:04.271640 master-0 kubenswrapper[33867]: I0219 03:43:04.270220 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="80553608-421f-443b-b2c4-bfcb0ab7cf70" containerName="nova-metadata-metadata" Feb 19 03:43:04.271640 master-0 kubenswrapper[33867]: I0219 03:43:04.270622 33867 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="80553608-421f-443b-b2c4-bfcb0ab7cf70" containerName="nova-metadata-metadata" Feb 19 03:43:04.271640 master-0 kubenswrapper[33867]: I0219 03:43:04.270640 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="80553608-421f-443b-b2c4-bfcb0ab7cf70" containerName="nova-metadata-log" Feb 19 03:43:04.285292 master-0 kubenswrapper[33867]: I0219 03:43:04.281587 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:04.286444 master-0 kubenswrapper[33867]: I0219 03:43:04.286328 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 03:43:04.286570 master-0 kubenswrapper[33867]: I0219 03:43:04.286511 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 03:43:04.287801 master-0 kubenswrapper[33867]: I0219 03:43:04.287361 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:04.383285 master-0 kubenswrapper[33867]: I0219 03:43:04.383144 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-config-data\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.383528 master-0 kubenswrapper[33867]: I0219 03:43:04.383365 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-logs\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.383528 master-0 kubenswrapper[33867]: I0219 03:43:04.383490 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7cc\" (UniqueName: \"kubernetes.io/projected/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-kube-api-access-tz7cc\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.383608 master-0 kubenswrapper[33867]: I0219 03:43:04.383528 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.383705 master-0 kubenswrapper[33867]: I0219 03:43:04.383671 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.486720 master-0 kubenswrapper[33867]: I0219 03:43:04.486182 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-logs\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.486841 master-0 kubenswrapper[33867]: I0219 03:43:04.486795 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-tz7cc\" (UniqueName: \"kubernetes.io/projected/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-kube-api-access-tz7cc\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.486841 master-0 kubenswrapper[33867]: I0219 03:43:04.486828 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.486946 master-0 kubenswrapper[33867]: I0219 03:43:04.486881 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.486997 master-0 kubenswrapper[33867]: I0219 03:43:04.486951 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-config-data\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.487947 master-0 kubenswrapper[33867]: I0219 03:43:04.486652 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-logs\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.492433 master-0 kubenswrapper[33867]: I0219 03:43:04.492370 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-config-data\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.500937 master-0 kubenswrapper[33867]: I0219 03:43:04.500876 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.507957 master-0 kubenswrapper[33867]: I0219 03:43:04.507903 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.516034 master-0 kubenswrapper[33867]: I0219 03:43:04.515996 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz7cc\" (UniqueName: \"kubernetes.io/projected/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-kube-api-access-tz7cc\") pod \"nova-metadata-0\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:04.620200 master-0 kubenswrapper[33867]: I0219 03:43:04.620067 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:04.972397 master-0 kubenswrapper[33867]: I0219 03:43:04.972324 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80553608-421f-443b-b2c4-bfcb0ab7cf70" path="/var/lib/kubelet/pods/80553608-421f-443b-b2c4-bfcb0ab7cf70/volumes" Feb 19 03:43:05.179298 master-0 kubenswrapper[33867]: I0219 03:43:05.179031 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:06.240036 master-0 kubenswrapper[33867]: I0219 03:43:06.239856 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c","Type":"ContainerStarted","Data":"1616e0b96d9b80a0df216460ba47be7879a351119dec6deb06f8dce36d63aca0"} Feb 19 03:43:06.240036 master-0 kubenswrapper[33867]: I0219 03:43:06.239937 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c","Type":"ContainerStarted","Data":"d26c91d08ee01017c58eb8c296bdc6aa5fc839195c2bbac11602f8253e4a7056"} Feb 19 03:43:06.240036 master-0 kubenswrapper[33867]: I0219 03:43:06.239964 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c","Type":"ContainerStarted","Data":"9c0d78ce73171fd14c96cfc5164a6b4c3d10b33f49749d9f9364e9bdf60ce934"} Feb 19 03:43:06.283756 master-0 kubenswrapper[33867]: I0219 03:43:06.283630 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.283601664 podStartE2EDuration="2.283601664s" podCreationTimestamp="2026-02-19 03:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:06.277843551 +0000 UTC m=+1191.574514152" watchObservedRunningTime="2026-02-19 03:43:06.283601664 +0000 UTC m=+1191.580272275" Feb 19 03:43:06.519115 master-0 kubenswrapper[33867]: I0219 03:43:06.518988 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 03:43:06.519546 master-0 kubenswrapper[33867]: I0219 03:43:06.519526 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 03:43:07.009386 master-0 kubenswrapper[33867]: I0219 03:43:07.009289 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 03:43:07.009758 master-0 kubenswrapper[33867]: I0219 03:43:07.009453 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 03:43:07.032643 master-0 kubenswrapper[33867]: I0219 03:43:07.032544 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:43:07.046684 master-0 kubenswrapper[33867]: I0219 03:43:07.046487 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:07.060880 master-0 kubenswrapper[33867]: I0219 03:43:07.060816 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 03:43:07.192963 master-0 kubenswrapper[33867]: I0219 03:43:07.192884 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-766d44d5cc-hz6f7"] Feb 19 03:43:07.193221 master-0 kubenswrapper[33867]: I0219 03:43:07.193156 33867 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" podUID="50b1a298-c4b0-4cfd-aa2a-163668bef18f" containerName="dnsmasq-dns" containerID="cri-o://8a0072a9c94ef7f160b1700f6d08782b0b92352747719c7b36acb128c24987d7" gracePeriod=10 Feb 19 03:43:07.287477 master-0 kubenswrapper[33867]: I0219 03:43:07.287371 33867 generic.go:334] "Generic (PLEG): container finished" podID="e3ac7e83-4fa3-459f-aa01-c4c5950264f0" containerID="183c84739896a2a05db42b3b58f40c7fd5146e6e44e8ebcd7dc0af1107227754" exitCode=0 Feb 19 03:43:07.288114 master-0 kubenswrapper[33867]: I0219 03:43:07.287458 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-548gx" event={"ID":"e3ac7e83-4fa3-459f-aa01-c4c5950264f0","Type":"ContainerDied","Data":"183c84739896a2a05db42b3b58f40c7fd5146e6e44e8ebcd7dc0af1107227754"} Feb 19 03:43:07.335855 master-0 kubenswrapper[33867]: I0219 03:43:07.335774 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 03:43:07.601568 master-0 kubenswrapper[33867]: I0219 03:43:07.601494 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.8:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 03:43:07.601789 master-0 kubenswrapper[33867]: I0219 03:43:07.601530 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.8:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 03:43:08.321495 master-0 kubenswrapper[33867]: I0219 03:43:08.321319 33867 generic.go:334] "Generic (PLEG): container finished" podID="50b1a298-c4b0-4cfd-aa2a-163668bef18f" containerID="8a0072a9c94ef7f160b1700f6d08782b0b92352747719c7b36acb128c24987d7" exitCode=0 Feb 19 03:43:08.321495 master-0 kubenswrapper[33867]: I0219 03:43:08.321401 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" event={"ID":"50b1a298-c4b0-4cfd-aa2a-163668bef18f","Type":"ContainerDied","Data":"8a0072a9c94ef7f160b1700f6d08782b0b92352747719c7b36acb128c24987d7"} Feb 19 03:43:09.624469 master-0 kubenswrapper[33867]: I0219 03:43:09.624405 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 03:43:09.624469 master-0 kubenswrapper[33867]: I0219 03:43:09.624473 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 03:43:11.374446 master-0 kubenswrapper[33867]: I0219 03:43:11.374340 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" podUID="50b1a298-c4b0-4cfd-aa2a-163668bef18f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.1.2:5353: connect: connection refused" Feb 19 03:43:12.383550 master-0 kubenswrapper[33867]: I0219 03:43:12.383447 33867 generic.go:334] "Generic (PLEG): container finished" podID="2a82b2c2-4eab-407e-a67e-07ecc654db86" containerID="2095aca31c73a2200bb902b29ae7ea255905655c0cc23ac246cbeb8321f223ce" exitCode=0 Feb 19 03:43:12.383550 master-0 kubenswrapper[33867]: I0219 03:43:12.383524 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-47sq4" event={"ID":"2a82b2c2-4eab-407e-a67e-07ecc654db86","Type":"ContainerDied","Data":"2095aca31c73a2200bb902b29ae7ea255905655c0cc23ac246cbeb8321f223ce"} Feb 19 03:43:12.736417 master-0 kubenswrapper[33867]: I0219 03:43:12.736362 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:43:12.859553 master-0 kubenswrapper[33867]: I0219 03:43:12.859504 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-scripts\") pod \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " Feb 19 03:43:12.859740 master-0 kubenswrapper[33867]: I0219 03:43:12.859692 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqt69\" (UniqueName: \"kubernetes.io/projected/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-kube-api-access-xqt69\") pod \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " Feb 19 03:43:12.859901 master-0 kubenswrapper[33867]: I0219 03:43:12.859847 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-config-data\") pod \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " Feb 19 03:43:12.860308 master-0 kubenswrapper[33867]: I0219 03:43:12.860275 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-combined-ca-bundle\") pod \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\" (UID: \"e3ac7e83-4fa3-459f-aa01-c4c5950264f0\") " Feb 19 03:43:12.863398 master-0 kubenswrapper[33867]: I0219 03:43:12.863357 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-scripts" (OuterVolumeSpecName: "scripts") pod "e3ac7e83-4fa3-459f-aa01-c4c5950264f0" (UID: "e3ac7e83-4fa3-459f-aa01-c4c5950264f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:12.865067 master-0 kubenswrapper[33867]: I0219 03:43:12.865011 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-kube-api-access-xqt69" (OuterVolumeSpecName: "kube-api-access-xqt69") pod "e3ac7e83-4fa3-459f-aa01-c4c5950264f0" (UID: "e3ac7e83-4fa3-459f-aa01-c4c5950264f0"). InnerVolumeSpecName "kube-api-access-xqt69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:12.872816 master-0 kubenswrapper[33867]: I0219 03:43:12.872756 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:43:12.959866 master-0 kubenswrapper[33867]: I0219 03:43:12.959737 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-config-data" (OuterVolumeSpecName: "config-data") pod "e3ac7e83-4fa3-459f-aa01-c4c5950264f0" (UID: "e3ac7e83-4fa3-459f-aa01-c4c5950264f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:12.963421 master-0 kubenswrapper[33867]: I0219 03:43:12.963377 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-sb\") pod \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " Feb 19 03:43:12.963586 master-0 kubenswrapper[33867]: I0219 03:43:12.963558 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-config\") pod \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " Feb 19 03:43:12.963718 master-0 kubenswrapper[33867]: I0219 03:43:12.963693 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-svc\") pod \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " Feb 19 03:43:12.963845 master-0 kubenswrapper[33867]: I0219 03:43:12.963819 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngjjm\" (UniqueName: \"kubernetes.io/projected/50b1a298-c4b0-4cfd-aa2a-163668bef18f-kube-api-access-ngjjm\") pod \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " Feb 19 03:43:12.963935 master-0 kubenswrapper[33867]: I0219 03:43:12.963917 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-nb\") pod \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " Feb 19 03:43:12.964013 master-0 kubenswrapper[33867]: I0219 03:43:12.963995 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-swift-storage-0\") pod \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\" (UID: \"50b1a298-c4b0-4cfd-aa2a-163668bef18f\") " Feb 19 03:43:12.965076 master-0 kubenswrapper[33867]: I0219 03:43:12.965044 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:12.965076 master-0 kubenswrapper[33867]: I0219 03:43:12.965074 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqt69\" (UniqueName: \"kubernetes.io/projected/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-kube-api-access-xqt69\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:12.965206 master-0 kubenswrapper[33867]: I0219 03:43:12.965090 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:12.973185 master-0 kubenswrapper[33867]: I0219 03:43:12.965580 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3ac7e83-4fa3-459f-aa01-c4c5950264f0" (UID: "e3ac7e83-4fa3-459f-aa01-c4c5950264f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:12.973185 master-0 kubenswrapper[33867]: I0219 03:43:12.970155 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b1a298-c4b0-4cfd-aa2a-163668bef18f-kube-api-access-ngjjm" (OuterVolumeSpecName: "kube-api-access-ngjjm") pod "50b1a298-c4b0-4cfd-aa2a-163668bef18f" (UID: "50b1a298-c4b0-4cfd-aa2a-163668bef18f"). InnerVolumeSpecName "kube-api-access-ngjjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:13.031368 master-0 kubenswrapper[33867]: I0219 03:43:13.031286 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50b1a298-c4b0-4cfd-aa2a-163668bef18f" (UID: "50b1a298-c4b0-4cfd-aa2a-163668bef18f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:43:13.037271 master-0 kubenswrapper[33867]: I0219 03:43:13.037177 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-config" (OuterVolumeSpecName: "config") pod "50b1a298-c4b0-4cfd-aa2a-163668bef18f" (UID: "50b1a298-c4b0-4cfd-aa2a-163668bef18f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:43:13.051118 master-0 kubenswrapper[33867]: I0219 03:43:13.050964 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "50b1a298-c4b0-4cfd-aa2a-163668bef18f" (UID: "50b1a298-c4b0-4cfd-aa2a-163668bef18f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:43:13.052463 master-0 kubenswrapper[33867]: I0219 03:43:13.052384 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "50b1a298-c4b0-4cfd-aa2a-163668bef18f" (UID: "50b1a298-c4b0-4cfd-aa2a-163668bef18f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:43:13.053864 master-0 kubenswrapper[33867]: I0219 03:43:13.053826 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50b1a298-c4b0-4cfd-aa2a-163668bef18f" (UID: "50b1a298-c4b0-4cfd-aa2a-163668bef18f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:43:13.068584 master-0 kubenswrapper[33867]: I0219 03:43:13.068542 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:13.068584 master-0 kubenswrapper[33867]: I0219 03:43:13.068577 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ac7e83-4fa3-459f-aa01-c4c5950264f0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:13.068584 master-0 kubenswrapper[33867]: I0219 03:43:13.068588 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:13.068840 master-0 kubenswrapper[33867]: I0219 03:43:13.068600 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:13.068840 master-0 kubenswrapper[33867]: I0219 03:43:13.068642 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngjjm\" (UniqueName: \"kubernetes.io/projected/50b1a298-c4b0-4cfd-aa2a-163668bef18f-kube-api-access-ngjjm\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:13.068840 master-0 kubenswrapper[33867]: I0219 03:43:13.068651 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:13.068840 master-0 kubenswrapper[33867]: I0219 03:43:13.068661 33867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b1a298-c4b0-4cfd-aa2a-163668bef18f-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:13.399476 master-0 kubenswrapper[33867]: I0219 03:43:13.399397 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-548gx" event={"ID":"e3ac7e83-4fa3-459f-aa01-c4c5950264f0","Type":"ContainerDied","Data":"4a4bd37410b2157cd4649de1ec921a66feb646a40d5e092b134dcf8f8a532bc9"} Feb 19 03:43:13.399476 master-0 kubenswrapper[33867]: I0219 03:43:13.399466 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a4bd37410b2157cd4649de1ec921a66feb646a40d5e092b134dcf8f8a532bc9" Feb 19 03:43:13.400125 master-0 kubenswrapper[33867]: I0219 03:43:13.399543 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-548gx" Feb 19 03:43:13.403202 master-0 kubenswrapper[33867]: I0219 03:43:13.403140 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"23d36214-70ab-4c0a-837d-5a5585b130ac","Type":"ContainerStarted","Data":"1d5e4a35a11c890b2a943ed4b7035965d82a2537f9b536f046498c898476a7ee"} Feb 19 03:43:13.404451 master-0 kubenswrapper[33867]: I0219 03:43:13.403569 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:43:13.406418 master-0 kubenswrapper[33867]: I0219 03:43:13.406380 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" event={"ID":"50b1a298-c4b0-4cfd-aa2a-163668bef18f","Type":"ContainerDied","Data":"ae2a38f0aa3299d91eabdebf07a7550253cb3be951a4327b025e5db96abb3eed"} Feb 19 03:43:13.406574 master-0 kubenswrapper[33867]: I0219 03:43:13.406556 33867 scope.go:117] "RemoveContainer" containerID="8a0072a9c94ef7f160b1700f6d08782b0b92352747719c7b36acb128c24987d7" Feb 19 03:43:13.406800 master-0 kubenswrapper[33867]: I0219 03:43:13.406489 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766d44d5cc-hz6f7" Feb 19 03:43:13.450777 master-0 kubenswrapper[33867]: I0219 03:43:13.450727 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 19 03:43:13.451021 master-0 kubenswrapper[33867]: I0219 03:43:13.450805 33867 scope.go:117] "RemoveContainer" containerID="3afb1359d6c50a81f3b0ff38176c94627b35b68dc2fc6b78dd9e20ed0faa36b4" Feb 19 03:43:13.488049 master-0 kubenswrapper[33867]: I0219 03:43:13.487847 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=3.436915196 podStartE2EDuration="18.487815407s" podCreationTimestamp="2026-02-19 03:42:55 +0000 UTC" firstStartedPulling="2026-02-19 03:42:57.491710219 +0000 UTC m=+1182.788380830" lastFinishedPulling="2026-02-19 03:43:12.54261043 +0000 UTC m=+1197.839281041" observedRunningTime="2026-02-19 03:43:13.440690982 +0000 UTC m=+1198.737361603" watchObservedRunningTime="2026-02-19 03:43:13.487815407 +0000 UTC m=+1198.784486018" Feb 19 03:43:13.591916 master-0 kubenswrapper[33867]: I0219 03:43:13.588227 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-766d44d5cc-hz6f7"] Feb 19 03:43:13.600445 master-0 kubenswrapper[33867]: I0219 03:43:13.600382 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-766d44d5cc-hz6f7"] Feb 19 03:43:13.964671 master-0 kubenswrapper[33867]: I0219 03:43:13.964609 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:43:13.966033 master-0 kubenswrapper[33867]: I0219 03:43:13.965990 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:43:13.966315 master-0 kubenswrapper[33867]: I0219 03:43:13.966271 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3eee2d2a-42ff-4d2b-b8bd-1b943bc34566" containerName="nova-scheduler-scheduler" containerID="cri-o://6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9" gracePeriod=30 Feb 19 03:43:13.982053 master-0 kubenswrapper[33867]: I0219 03:43:13.981953 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:13.983602 master-0 kubenswrapper[33867]: I0219 03:43:13.982516 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerName="nova-api-log" containerID="cri-o://0c189e70ec793f01884591ae45b72258d58877883b555cf18a0b2c67b43b4a68" gracePeriod=30 Feb 19 03:43:13.983602 master-0 kubenswrapper[33867]: I0219 03:43:13.982701 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerName="nova-api-api" containerID="cri-o://36e04d2f67cdcef7a15e2202ca0fa5543f94c69cb153ee8072059682c6f42f9c" gracePeriod=30 Feb 19 03:43:14.100123 master-0 kubenswrapper[33867]: I0219 03:43:14.099929 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-scripts\") pod \"2a82b2c2-4eab-407e-a67e-07ecc654db86\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " Feb 19 03:43:14.100123 master-0 kubenswrapper[33867]: I0219 03:43:14.100035 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-config-data\") pod \"2a82b2c2-4eab-407e-a67e-07ecc654db86\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " Feb 19 03:43:14.100472 master-0 kubenswrapper[33867]: I0219 03:43:14.100120 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-combined-ca-bundle\") pod \"2a82b2c2-4eab-407e-a67e-07ecc654db86\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " Feb 19 03:43:14.100472 master-0 kubenswrapper[33867]: I0219 03:43:14.100335 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tktrp\" (UniqueName: \"kubernetes.io/projected/2a82b2c2-4eab-407e-a67e-07ecc654db86-kube-api-access-tktrp\") pod \"2a82b2c2-4eab-407e-a67e-07ecc654db86\" (UID: \"2a82b2c2-4eab-407e-a67e-07ecc654db86\") " Feb 19 03:43:14.106279 master-0 kubenswrapper[33867]: I0219 03:43:14.106218 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:14.106657 master-0 kubenswrapper[33867]: I0219 03:43:14.106422 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a82b2c2-4eab-407e-a67e-07ecc654db86-kube-api-access-tktrp" (OuterVolumeSpecName: "kube-api-access-tktrp") pod "2a82b2c2-4eab-407e-a67e-07ecc654db86" (UID: "2a82b2c2-4eab-407e-a67e-07ecc654db86"). InnerVolumeSpecName "kube-api-access-tktrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:14.106657 master-0 kubenswrapper[33867]: I0219 03:43:14.106549 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" containerName="nova-metadata-log" containerID="cri-o://d26c91d08ee01017c58eb8c296bdc6aa5fc839195c2bbac11602f8253e4a7056" gracePeriod=30 Feb 19 03:43:14.107208 master-0 kubenswrapper[33867]: I0219 03:43:14.107167 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" containerName="nova-metadata-metadata" containerID="cri-o://1616e0b96d9b80a0df216460ba47be7879a351119dec6deb06f8dce36d63aca0" gracePeriod=30 Feb 19 03:43:14.108883 master-0 kubenswrapper[33867]: I0219 03:43:14.108834 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-scripts" (OuterVolumeSpecName: "scripts") pod "2a82b2c2-4eab-407e-a67e-07ecc654db86" (UID: "2a82b2c2-4eab-407e-a67e-07ecc654db86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:14.170727 master-0 kubenswrapper[33867]: I0219 03:43:14.170671 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a82b2c2-4eab-407e-a67e-07ecc654db86" (UID: "2a82b2c2-4eab-407e-a67e-07ecc654db86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:14.188784 master-0 kubenswrapper[33867]: I0219 03:43:14.188707 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-config-data" (OuterVolumeSpecName: "config-data") pod "2a82b2c2-4eab-407e-a67e-07ecc654db86" (UID: "2a82b2c2-4eab-407e-a67e-07ecc654db86"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:14.206300 master-0 kubenswrapper[33867]: I0219 03:43:14.205179 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:14.206300 master-0 kubenswrapper[33867]: I0219 03:43:14.205230 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tktrp\" (UniqueName: \"kubernetes.io/projected/2a82b2c2-4eab-407e-a67e-07ecc654db86-kube-api-access-tktrp\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:14.206300 master-0 kubenswrapper[33867]: I0219 03:43:14.205246 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:14.206300 master-0 kubenswrapper[33867]: I0219 03:43:14.205270 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a82b2c2-4eab-407e-a67e-07ecc654db86-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:14.471221 master-0 kubenswrapper[33867]: I0219 03:43:14.469052 33867 generic.go:334] "Generic (PLEG): container finished" podID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerID="0c189e70ec793f01884591ae45b72258d58877883b555cf18a0b2c67b43b4a68" exitCode=143 Feb 19 03:43:14.471221 master-0 kubenswrapper[33867]: I0219 03:43:14.469154 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eeb12d48-8381-49f0-a9c1-7cc46b857a0a","Type":"ContainerDied","Data":"0c189e70ec793f01884591ae45b72258d58877883b555cf18a0b2c67b43b4a68"} Feb 19 03:43:14.476972 master-0 kubenswrapper[33867]: I0219 03:43:14.476921 33867 generic.go:334] "Generic (PLEG): container finished" podID="ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" containerID="1616e0b96d9b80a0df216460ba47be7879a351119dec6deb06f8dce36d63aca0" exitCode=0 Feb 19 03:43:14.476972 master-0 kubenswrapper[33867]: I0219 03:43:14.476966 33867 generic.go:334] "Generic (PLEG): container finished" podID="ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" containerID="d26c91d08ee01017c58eb8c296bdc6aa5fc839195c2bbac11602f8253e4a7056" exitCode=143 Feb 19 03:43:14.477179 master-0 kubenswrapper[33867]: I0219 03:43:14.476998 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c","Type":"ContainerDied","Data":"1616e0b96d9b80a0df216460ba47be7879a351119dec6deb06f8dce36d63aca0"} Feb 19 03:43:14.477179 master-0 kubenswrapper[33867]: I0219 03:43:14.477053 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c","Type":"ContainerDied","Data":"d26c91d08ee01017c58eb8c296bdc6aa5fc839195c2bbac11602f8253e4a7056"} Feb 19 03:43:14.485542 master-0 kubenswrapper[33867]: I0219 03:43:14.485457 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-47sq4" event={"ID":"2a82b2c2-4eab-407e-a67e-07ecc654db86","Type":"ContainerDied","Data":"f90cb241a8a537e676f46e35baa935de92fc403909ef39d4e34d4615831cb804"} Feb 19 03:43:14.485885 master-0 kubenswrapper[33867]: I0219 03:43:14.485561 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f90cb241a8a537e676f46e35baa935de92fc403909ef39d4e34d4615831cb804" Feb 19 03:43:14.485885 master-0 
kubenswrapper[33867]: I0219 03:43:14.485501 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-47sq4" Feb 19 03:43:14.594549 master-0 kubenswrapper[33867]: I0219 03:43:14.594474 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 03:43:14.595135 master-0 kubenswrapper[33867]: E0219 03:43:14.595115 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b1a298-c4b0-4cfd-aa2a-163668bef18f" containerName="init" Feb 19 03:43:14.595135 master-0 kubenswrapper[33867]: I0219 03:43:14.595134 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b1a298-c4b0-4cfd-aa2a-163668bef18f" containerName="init" Feb 19 03:43:14.595223 master-0 kubenswrapper[33867]: E0219 03:43:14.595176 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b1a298-c4b0-4cfd-aa2a-163668bef18f" containerName="dnsmasq-dns" Feb 19 03:43:14.595223 master-0 kubenswrapper[33867]: I0219 03:43:14.595183 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b1a298-c4b0-4cfd-aa2a-163668bef18f" containerName="dnsmasq-dns" Feb 19 03:43:14.595313 master-0 kubenswrapper[33867]: E0219 03:43:14.595226 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a82b2c2-4eab-407e-a67e-07ecc654db86" containerName="nova-cell1-conductor-db-sync" Feb 19 03:43:14.595313 master-0 kubenswrapper[33867]: I0219 03:43:14.595234 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a82b2c2-4eab-407e-a67e-07ecc654db86" containerName="nova-cell1-conductor-db-sync" Feb 19 03:43:14.595313 master-0 kubenswrapper[33867]: E0219 03:43:14.595267 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ac7e83-4fa3-459f-aa01-c4c5950264f0" containerName="nova-manage" Feb 19 03:43:14.595313 master-0 kubenswrapper[33867]: I0219 03:43:14.595274 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ac7e83-4fa3-459f-aa01-c4c5950264f0" containerName="nova-manage" Feb 19 03:43:14.595521 master-0 kubenswrapper[33867]: I0219 03:43:14.595502 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b1a298-c4b0-4cfd-aa2a-163668bef18f" containerName="dnsmasq-dns" Feb 19 03:43:14.595565 master-0 kubenswrapper[33867]: I0219 03:43:14.595545 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ac7e83-4fa3-459f-aa01-c4c5950264f0" containerName="nova-manage" Feb 19 03:43:14.595602 master-0 kubenswrapper[33867]: I0219 03:43:14.595578 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a82b2c2-4eab-407e-a67e-07ecc654db86" containerName="nova-cell1-conductor-db-sync" Feb 19 03:43:14.596638 master-0 kubenswrapper[33867]: I0219 03:43:14.596616 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:14.600387 master-0 kubenswrapper[33867]: I0219 03:43:14.600341 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 03:43:14.611437 master-0 kubenswrapper[33867]: I0219 03:43:14.611245 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 03:43:14.728674 master-0 kubenswrapper[33867]: I0219 03:43:14.728615 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd1b147-452f-48ca-b3cb-5239ffabec00-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bfd1b147-452f-48ca-b3cb-5239ffabec00\") " pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:14.728922 master-0 kubenswrapper[33867]: I0219 03:43:14.728764 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89n8x\" (UniqueName: \"kubernetes.io/projected/bfd1b147-452f-48ca-b3cb-5239ffabec00-kube-api-access-89n8x\") pod \"nova-cell1-conductor-0\" (UID: \"bfd1b147-452f-48ca-b3cb-5239ffabec00\") " pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:14.728991 master-0 kubenswrapper[33867]: I0219 03:43:14.728937 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd1b147-452f-48ca-b3cb-5239ffabec00-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bfd1b147-452f-48ca-b3cb-5239ffabec00\") " pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:14.830998 master-0 kubenswrapper[33867]: I0219 03:43:14.830919 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89n8x\" (UniqueName: \"kubernetes.io/projected/bfd1b147-452f-48ca-b3cb-5239ffabec00-kube-api-access-89n8x\") pod \"nova-cell1-conductor-0\" (UID: \"bfd1b147-452f-48ca-b3cb-5239ffabec00\") " pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:14.831234 master-0 kubenswrapper[33867]: I0219 03:43:14.831089 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd1b147-452f-48ca-b3cb-5239ffabec00-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bfd1b147-452f-48ca-b3cb-5239ffabec00\") " pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:14.831234 master-0 kubenswrapper[33867]: I0219 03:43:14.831197 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd1b147-452f-48ca-b3cb-5239ffabec00-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bfd1b147-452f-48ca-b3cb-5239ffabec00\") " pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:14.834820 master-0 kubenswrapper[33867]: I0219 03:43:14.834780 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd1b147-452f-48ca-b3cb-5239ffabec00-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bfd1b147-452f-48ca-b3cb-5239ffabec00\") " pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:14.836896 master-0 kubenswrapper[33867]: I0219 03:43:14.836846 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd1b147-452f-48ca-b3cb-5239ffabec00-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"bfd1b147-452f-48ca-b3cb-5239ffabec00\") " pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:14.899377 master-0 kubenswrapper[33867]: I0219 03:43:14.899276 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89n8x\" (UniqueName: \"kubernetes.io/projected/bfd1b147-452f-48ca-b3cb-5239ffabec00-kube-api-access-89n8x\") pod \"nova-cell1-conductor-0\" (UID: \"bfd1b147-452f-48ca-b3cb-5239ffabec00\") " pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:14.917975 master-0 kubenswrapper[33867]: I0219 03:43:14.917917 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:14.926119 master-0 kubenswrapper[33867]: I0219 03:43:14.926080 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:15.016964 master-0 kubenswrapper[33867]: I0219 03:43:15.016898 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b1a298-c4b0-4cfd-aa2a-163668bef18f" path="/var/lib/kubelet/pods/50b1a298-c4b0-4cfd-aa2a-163668bef18f/volumes" Feb 19 03:43:15.039366 master-0 kubenswrapper[33867]: I0219 03:43:15.039282 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-combined-ca-bundle\") pod \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " Feb 19 03:43:15.039633 master-0 kubenswrapper[33867]: I0219 03:43:15.039534 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz7cc\" (UniqueName: \"kubernetes.io/projected/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-kube-api-access-tz7cc\") pod \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " Feb 19 03:43:15.039707 master-0 kubenswrapper[33867]: I0219 03:43:15.039667 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-config-data\") pod \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " Feb 19 03:43:15.039841 master-0 kubenswrapper[33867]: I0219 03:43:15.039806 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-nova-metadata-tls-certs\") pod \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " Feb 19 03:43:15.039999 master-0 kubenswrapper[33867]: I0219 03:43:15.039963 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-logs\") pod \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\" (UID: \"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c\") " Feb 19 03:43:15.042232 master-0 kubenswrapper[33867]: I0219 03:43:15.041756 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-logs" (OuterVolumeSpecName: "logs") pod "ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" (UID: "ea1f3123-5d55-4f68-8e7e-08bdbeb5442c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:43:15.044886 master-0 kubenswrapper[33867]: I0219 03:43:15.044811 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-kube-api-access-tz7cc" (OuterVolumeSpecName: "kube-api-access-tz7cc") pod "ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" (UID: "ea1f3123-5d55-4f68-8e7e-08bdbeb5442c"). InnerVolumeSpecName "kube-api-access-tz7cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:15.077423 master-0 kubenswrapper[33867]: I0219 03:43:15.074540 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" (UID: "ea1f3123-5d55-4f68-8e7e-08bdbeb5442c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:15.077423 master-0 kubenswrapper[33867]: I0219 03:43:15.075772 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-config-data" (OuterVolumeSpecName: "config-data") pod "ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" (UID: "ea1f3123-5d55-4f68-8e7e-08bdbeb5442c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:15.127107 master-0 kubenswrapper[33867]: I0219 03:43:15.127027 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" (UID: "ea1f3123-5d55-4f68-8e7e-08bdbeb5442c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:15.143004 master-0 kubenswrapper[33867]: I0219 03:43:15.142715 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:15.143004 master-0 kubenswrapper[33867]: I0219 03:43:15.142751 33867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:15.143004 master-0 kubenswrapper[33867]: I0219 03:43:15.142760 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:15.143004 master-0 kubenswrapper[33867]: I0219 03:43:15.142768 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:15.143004 master-0 kubenswrapper[33867]: I0219 03:43:15.142778 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz7cc\" (UniqueName: \"kubernetes.io/projected/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c-kube-api-access-tz7cc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:15.503265 master-0 kubenswrapper[33867]: I0219 03:43:15.503167 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:15.504922 master-0 kubenswrapper[33867]: I0219 03:43:15.504867 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1f3123-5d55-4f68-8e7e-08bdbeb5442c","Type":"ContainerDied","Data":"9c0d78ce73171fd14c96cfc5164a6b4c3d10b33f49749d9f9364e9bdf60ce934"} Feb 19 03:43:15.505014 master-0 kubenswrapper[33867]: I0219 03:43:15.504948 33867 scope.go:117] "RemoveContainer" containerID="1616e0b96d9b80a0df216460ba47be7879a351119dec6deb06f8dce36d63aca0" Feb 19 03:43:15.559246 master-0 kubenswrapper[33867]: I0219 03:43:15.559215 33867 scope.go:117] "RemoveContainer" containerID="d26c91d08ee01017c58eb8c296bdc6aa5fc839195c2bbac11602f8253e4a7056" Feb 19 03:43:15.564928 master-0 kubenswrapper[33867]: I0219 03:43:15.564817 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 03:43:15.591485 master-0 kubenswrapper[33867]: I0219 03:43:15.591393 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:15.609956 master-0 kubenswrapper[33867]: I0219 03:43:15.609897 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:15.659069 master-0 kubenswrapper[33867]: I0219 03:43:15.656745 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:15.659069 master-0 kubenswrapper[33867]: E0219 03:43:15.657479 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" containerName="nova-metadata-metadata" Feb 19 03:43:15.659069 master-0 kubenswrapper[33867]: I0219 03:43:15.657497 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" containerName="nova-metadata-metadata" Feb 19 03:43:15.659069 master-0 kubenswrapper[33867]: E0219 03:43:15.657528 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" containerName="nova-metadata-log" Feb 19 03:43:15.659069 master-0 kubenswrapper[33867]: I0219 03:43:15.657534 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" containerName="nova-metadata-log" Feb 19 03:43:15.659069 master-0 kubenswrapper[33867]: I0219 03:43:15.657775 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" containerName="nova-metadata-metadata" Feb 19 03:43:15.659069 master-0 kubenswrapper[33867]: I0219 03:43:15.657842 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" containerName="nova-metadata-log" Feb 19 03:43:15.681332 master-0 kubenswrapper[33867]: I0219 03:43:15.681203 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:15.683721 master-0 kubenswrapper[33867]: I0219 03:43:15.683669 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:15.689588 master-0 kubenswrapper[33867]: I0219 03:43:15.689332 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 03:43:15.689899 master-0 kubenswrapper[33867]: I0219 03:43:15.689876 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 03:43:15.793886 master-0 kubenswrapper[33867]: I0219 03:43:15.792461 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b71a14-a345-4919-8c5a-c5bf41644a29-logs\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.793886 master-0 kubenswrapper[33867]: I0219 03:43:15.792614 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-config-data\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.793886 master-0 kubenswrapper[33867]: I0219 03:43:15.792655 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.793886 master-0 kubenswrapper[33867]: I0219 03:43:15.792763 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29njz\" (UniqueName: \"kubernetes.io/projected/32b71a14-a345-4919-8c5a-c5bf41644a29-kube-api-access-29njz\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.793886 master-0 kubenswrapper[33867]: I0219 03:43:15.792878 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.896237 master-0 kubenswrapper[33867]: I0219 03:43:15.896158 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29njz\" (UniqueName: \"kubernetes.io/projected/32b71a14-a345-4919-8c5a-c5bf41644a29-kube-api-access-29njz\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.896532 master-0 kubenswrapper[33867]: I0219 03:43:15.896360 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.896532 master-0 kubenswrapper[33867]: I0219 03:43:15.896472 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b71a14-a345-4919-8c5a-c5bf41644a29-logs\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " 
pod="openstack/nova-metadata-0" Feb 19 03:43:15.896603 master-0 kubenswrapper[33867]: I0219 03:43:15.896566 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-config-data\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.896603 master-0 kubenswrapper[33867]: I0219 03:43:15.896595 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.902384 master-0 kubenswrapper[33867]: I0219 03:43:15.902351 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.902706 master-0 kubenswrapper[33867]: I0219 03:43:15.902682 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b71a14-a345-4919-8c5a-c5bf41644a29-logs\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.906104 master-0 kubenswrapper[33867]: I0219 03:43:15.906072 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.906968 master-0 kubenswrapper[33867]: I0219 03:43:15.906938 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-config-data\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:15.923502 master-0 kubenswrapper[33867]: I0219 03:43:15.923185 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29njz\" (UniqueName: \"kubernetes.io/projected/32b71a14-a345-4919-8c5a-c5bf41644a29-kube-api-access-29njz\") pod \"nova-metadata-0\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " pod="openstack/nova-metadata-0" Feb 19 03:43:16.114204 master-0 kubenswrapper[33867]: I0219 03:43:16.114000 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:16.539105 master-0 kubenswrapper[33867]: I0219 03:43:16.539028 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bfd1b147-452f-48ca-b3cb-5239ffabec00","Type":"ContainerStarted","Data":"9818327c636ab1ebf748ec1d18ad168a4d7ba9438847df43d9cfe64c3fe1a17f"} Feb 19 03:43:16.539105 master-0 kubenswrapper[33867]: I0219 03:43:16.539085 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bfd1b147-452f-48ca-b3cb-5239ffabec00","Type":"ContainerStarted","Data":"5f670b06a44fe02f2a33b9d67c708bef46f8f94882cb83606abef967da2ab9f5"} Feb 19 03:43:16.540682 master-0 kubenswrapper[33867]: I0219 03:43:16.540649 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:16.545395 master-0 kubenswrapper[33867]: I0219 03:43:16.545210 33867 generic.go:334] "Generic (PLEG): container finished" podID="9c830f8b-3d33-4879-91b9-bd374a1e695b" containerID="9378dbf8125e2380afa5b80f8e4f87c3195a20f059f239d40341c53c712b83a0" exitCode=0 Feb 19 03:43:16.545395 master-0 kubenswrapper[33867]: I0219 03:43:16.545250 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"9c830f8b-3d33-4879-91b9-bd374a1e695b","Type":"ContainerDied","Data":"9378dbf8125e2380afa5b80f8e4f87c3195a20f059f239d40341c53c712b83a0"} Feb 19 03:43:16.575538 master-0 kubenswrapper[33867]: I0219 03:43:16.575475 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5754521439999998 podStartE2EDuration="2.575452144s" podCreationTimestamp="2026-02-19 03:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:16.572529682 +0000 UTC m=+1201.869200283" watchObservedRunningTime="2026-02-19 03:43:16.575452144 +0000 UTC m=+1201.872122755" Feb 19 03:43:16.682924 master-0 kubenswrapper[33867]: I0219 03:43:16.682837 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:16.685746 master-0 kubenswrapper[33867]: W0219 03:43:16.685699 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32b71a14_a345_4919_8c5a_c5bf41644a29.slice/crio-8b1afd5e4732eea297b60dc4bcb0608d2cbf9fedaca33db66eef7b71cedd4b97 WatchSource:0}: Error finding container 8b1afd5e4732eea297b60dc4bcb0608d2cbf9fedaca33db66eef7b71cedd4b97: Status 404 returned error can't find the container with id 8b1afd5e4732eea297b60dc4bcb0608d2cbf9fedaca33db66eef7b71cedd4b97 Feb 19 03:43:16.969899 master-0 kubenswrapper[33867]: I0219 03:43:16.969852 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea1f3123-5d55-4f68-8e7e-08bdbeb5442c" path="/var/lib/kubelet/pods/ea1f3123-5d55-4f68-8e7e-08bdbeb5442c/volumes" Feb 19 03:43:17.012316 master-0 kubenswrapper[33867]: E0219 03:43:17.012223 33867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 03:43:17.013953 master-0 kubenswrapper[33867]: E0219 03:43:17.013924 33867 log.go:32] "ExecSync 
cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 03:43:17.024341 master-0 kubenswrapper[33867]: E0219 03:43:17.024290 33867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 03:43:17.024484 master-0 kubenswrapper[33867]: E0219 03:43:17.024463 33867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3eee2d2a-42ff-4d2b-b8bd-1b943bc34566" containerName="nova-scheduler-scheduler" Feb 19 03:43:17.561197 master-0 kubenswrapper[33867]: I0219 03:43:17.561122 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"9c830f8b-3d33-4879-91b9-bd374a1e695b","Type":"ContainerStarted","Data":"35e5cbfe161a080dcef02c52f51ffd3e65fcca71a8df24b529d1e703f6501032"} Feb 19 03:43:17.563680 master-0 kubenswrapper[33867]: I0219 03:43:17.563637 33867 generic.go:334] "Generic (PLEG): container finished" podID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerID="36e04d2f67cdcef7a15e2202ca0fa5543f94c69cb153ee8072059682c6f42f9c" exitCode=0 Feb 19 03:43:17.563759 master-0 kubenswrapper[33867]: I0219 03:43:17.563726 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eeb12d48-8381-49f0-a9c1-7cc46b857a0a","Type":"ContainerDied","Data":"36e04d2f67cdcef7a15e2202ca0fa5543f94c69cb153ee8072059682c6f42f9c"} Feb 19 03:43:17.566441 master-0 kubenswrapper[33867]: I0219 03:43:17.566271 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32b71a14-a345-4919-8c5a-c5bf41644a29","Type":"ContainerStarted","Data":"2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a"} Feb 19 03:43:17.566441 master-0 kubenswrapper[33867]: I0219 03:43:17.566361 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32b71a14-a345-4919-8c5a-c5bf41644a29","Type":"ContainerStarted","Data":"93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0"} Feb 19 03:43:17.566441 master-0 kubenswrapper[33867]: I0219 03:43:17.566378 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32b71a14-a345-4919-8c5a-c5bf41644a29","Type":"ContainerStarted","Data":"8b1afd5e4732eea297b60dc4bcb0608d2cbf9fedaca33db66eef7b71cedd4b97"} Feb 19 03:43:17.618156 master-0 kubenswrapper[33867]: I0219 03:43:17.617568 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.617546795 podStartE2EDuration="2.617546795s" podCreationTimestamp="2026-02-19 03:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:17.608307344 +0000 UTC m=+1202.904977955" watchObservedRunningTime="2026-02-19 03:43:17.617546795 +0000 UTC m=+1202.914217406" Feb 19 03:43:17.768247 master-0 
kubenswrapper[33867]: I0219 03:43:17.768122 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:17.965932 master-0 kubenswrapper[33867]: I0219 03:43:17.965844 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-logs\") pod \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " Feb 19 03:43:17.966194 master-0 kubenswrapper[33867]: I0219 03:43:17.965999 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvcms\" (UniqueName: \"kubernetes.io/projected/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-kube-api-access-xvcms\") pod \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " Feb 19 03:43:17.966194 master-0 kubenswrapper[33867]: I0219 03:43:17.966106 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-combined-ca-bundle\") pod \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " Feb 19 03:43:17.966399 master-0 kubenswrapper[33867]: I0219 03:43:17.966369 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-config-data\") pod \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\" (UID: \"eeb12d48-8381-49f0-a9c1-7cc46b857a0a\") " Feb 19 03:43:17.967033 master-0 kubenswrapper[33867]: I0219 03:43:17.966980 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-logs" (OuterVolumeSpecName: "logs") pod "eeb12d48-8381-49f0-a9c1-7cc46b857a0a" (UID: "eeb12d48-8381-49f0-a9c1-7cc46b857a0a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:43:17.970177 master-0 kubenswrapper[33867]: I0219 03:43:17.970141 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-kube-api-access-xvcms" (OuterVolumeSpecName: "kube-api-access-xvcms") pod "eeb12d48-8381-49f0-a9c1-7cc46b857a0a" (UID: "eeb12d48-8381-49f0-a9c1-7cc46b857a0a"). InnerVolumeSpecName "kube-api-access-xvcms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:18.015597 master-0 kubenswrapper[33867]: I0219 03:43:18.015520 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eeb12d48-8381-49f0-a9c1-7cc46b857a0a" (UID: "eeb12d48-8381-49f0-a9c1-7cc46b857a0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:18.030481 master-0 kubenswrapper[33867]: I0219 03:43:18.030345 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-config-data" (OuterVolumeSpecName: "config-data") pod "eeb12d48-8381-49f0-a9c1-7cc46b857a0a" (UID: "eeb12d48-8381-49f0-a9c1-7cc46b857a0a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:18.070444 master-0 kubenswrapper[33867]: I0219 03:43:18.070369 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvcms\" (UniqueName: \"kubernetes.io/projected/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-kube-api-access-xvcms\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:18.070444 master-0 kubenswrapper[33867]: I0219 03:43:18.070422 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:18.070444 master-0 kubenswrapper[33867]: I0219 03:43:18.070435 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:18.070444 master-0 kubenswrapper[33867]: I0219 03:43:18.070447 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb12d48-8381-49f0-a9c1-7cc46b857a0a-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:18.586171 master-0 kubenswrapper[33867]: I0219 03:43:18.581341 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eeb12d48-8381-49f0-a9c1-7cc46b857a0a","Type":"ContainerDied","Data":"55ded175ca188bacd7ef5670ac4a90b61a1221935660e6b7416447806b5c8475"} Feb 19 03:43:18.586171 master-0 kubenswrapper[33867]: I0219 03:43:18.581389 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:18.586171 master-0 kubenswrapper[33867]: I0219 03:43:18.581438 33867 scope.go:117] "RemoveContainer" containerID="36e04d2f67cdcef7a15e2202ca0fa5543f94c69cb153ee8072059682c6f42f9c" Feb 19 03:43:18.590381 master-0 kubenswrapper[33867]: I0219 03:43:18.590226 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"9c830f8b-3d33-4879-91b9-bd374a1e695b","Type":"ContainerStarted","Data":"308f5d2bffb113fedbe84b20f5fa52f50a6a700d8260ad0979212834d64dca27"} Feb 19 03:43:18.590381 master-0 kubenswrapper[33867]: I0219 03:43:18.590293 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"9c830f8b-3d33-4879-91b9-bd374a1e695b","Type":"ContainerStarted","Data":"39724eda098b7dc7c04b481fb97209b6b353dd6dd5f9972a744f41dcc8c48b67"} Feb 19 03:43:18.591776 master-0 kubenswrapper[33867]: I0219 03:43:18.590909 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 19 03:43:18.591776 master-0 kubenswrapper[33867]: I0219 03:43:18.590940 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 19 03:43:18.649801 master-0 kubenswrapper[33867]: I0219 03:43:18.649673 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=62.106462682 podStartE2EDuration="2m1.649642652s" podCreationTimestamp="2026-02-19 03:41:17 +0000 UTC" firstStartedPulling="2026-02-19 03:41:29.356507082 +0000 UTC m=+1094.653177693" lastFinishedPulling="2026-02-19 03:42:28.899687062 +0000 UTC m=+1154.196357663" observedRunningTime="2026-02-19 03:43:18.642972153 +0000 UTC m=+1203.939642774" watchObservedRunningTime="2026-02-19 03:43:18.649642652 +0000 UTC m=+1203.946313263" Feb 19 03:43:18.791041 master-0 
kubenswrapper[33867]: I0219 03:43:18.787854 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:18.799339 master-0 kubenswrapper[33867]: I0219 03:43:18.799287 33867 scope.go:117] "RemoveContainer" containerID="0c189e70ec793f01884591ae45b72258d58877883b555cf18a0b2c67b43b4a68" Feb 19 03:43:18.812406 master-0 kubenswrapper[33867]: I0219 03:43:18.810855 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:18.822898 master-0 kubenswrapper[33867]: I0219 03:43:18.822818 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:18.823546 master-0 kubenswrapper[33867]: E0219 03:43:18.823520 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerName="nova-api-api" Feb 19 03:43:18.823546 master-0 kubenswrapper[33867]: I0219 03:43:18.823543 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerName="nova-api-api" Feb 19 03:43:18.823717 master-0 kubenswrapper[33867]: E0219 03:43:18.823590 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerName="nova-api-log" Feb 19 03:43:18.823717 master-0 kubenswrapper[33867]: I0219 03:43:18.823597 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerName="nova-api-log" Feb 19 03:43:18.823857 master-0 kubenswrapper[33867]: I0219 03:43:18.823824 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerName="nova-api-api" Feb 19 03:43:18.823857 master-0 kubenswrapper[33867]: I0219 03:43:18.823845 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" containerName="nova-api-log" Feb 19 03:43:18.825624 master-0 kubenswrapper[33867]: I0219 03:43:18.825601 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:18.829593 master-0 kubenswrapper[33867]: I0219 03:43:18.828350 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 03:43:18.837661 master-0 kubenswrapper[33867]: I0219 03:43:18.837619 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:18.969483 master-0 kubenswrapper[33867]: I0219 03:43:18.969432 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb12d48-8381-49f0-a9c1-7cc46b857a0a" path="/var/lib/kubelet/pods/eeb12d48-8381-49f0-a9c1-7cc46b857a0a/volumes" Feb 19 03:43:18.996585 master-0 kubenswrapper[33867]: I0219 03:43:18.996515 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8rg7\" (UniqueName: \"kubernetes.io/projected/cd9875f6-a014-415a-b136-4a87ca41c168-kube-api-access-w8rg7\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:18.996911 master-0 kubenswrapper[33867]: I0219 03:43:18.996893 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-config-data\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:18.997449 master-0 kubenswrapper[33867]: I0219 03:43:18.997397 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:18.998007 master-0 kubenswrapper[33867]: I0219 03:43:18.997950 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd9875f6-a014-415a-b136-4a87ca41c168-logs\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:19.055538 master-0 kubenswrapper[33867]: E0219 03:43:19.055192 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeb12d48_8381_49f0_a9c1_7cc46b857a0a.slice/crio-55ded175ca188bacd7ef5670ac4a90b61a1221935660e6b7416447806b5c8475\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeb12d48_8381_49f0_a9c1_7cc46b857a0a.slice\": RecentStats: unable to find data in memory cache]" Feb 19 03:43:19.100695 master-0 kubenswrapper[33867]: I0219 03:43:19.100606 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd9875f6-a014-415a-b136-4a87ca41c168-logs\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:19.102561 master-0 kubenswrapper[33867]: I0219 03:43:19.101061 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8rg7\" (UniqueName: \"kubernetes.io/projected/cd9875f6-a014-415a-b136-4a87ca41c168-kube-api-access-w8rg7\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:19.102561 master-0 
kubenswrapper[33867]: I0219 03:43:19.101109 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-config-data\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:19.102561 master-0 kubenswrapper[33867]: I0219 03:43:19.101512 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:19.102750 master-0 kubenswrapper[33867]: I0219 03:43:19.102576 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd9875f6-a014-415a-b136-4a87ca41c168-logs\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:19.106452 master-0 kubenswrapper[33867]: I0219 03:43:19.106336 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-config-data\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:19.107267 master-0 kubenswrapper[33867]: I0219 03:43:19.107160 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:19.118830 master-0 kubenswrapper[33867]: I0219 03:43:19.118760 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8rg7\" (UniqueName: \"kubernetes.io/projected/cd9875f6-a014-415a-b136-4a87ca41c168-kube-api-access-w8rg7\") pod \"nova-api-0\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " pod="openstack/nova-api-0" Feb 19 03:43:19.162978 master-0 kubenswrapper[33867]: I0219 03:43:19.162875 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:19.207868 master-0 kubenswrapper[33867]: I0219 03:43:19.207832 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 03:43:19.407553 master-0 kubenswrapper[33867]: I0219 03:43:19.407485 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-config-data\") pod \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " Feb 19 03:43:19.407811 master-0 kubenswrapper[33867]: I0219 03:43:19.407787 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-combined-ca-bundle\") pod \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " Feb 19 03:43:19.407867 master-0 kubenswrapper[33867]: I0219 03:43:19.407850 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zv9s\" (UniqueName: \"kubernetes.io/projected/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-kube-api-access-2zv9s\") pod \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\" (UID: \"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566\") " Feb 19 03:43:19.415789 master-0 kubenswrapper[33867]: I0219 03:43:19.415702 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-kube-api-access-2zv9s" (OuterVolumeSpecName: "kube-api-access-2zv9s") pod "3eee2d2a-42ff-4d2b-b8bd-1b943bc34566" (UID: "3eee2d2a-42ff-4d2b-b8bd-1b943bc34566"). InnerVolumeSpecName "kube-api-access-2zv9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:19.440560 master-0 kubenswrapper[33867]: I0219 03:43:19.440487 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3eee2d2a-42ff-4d2b-b8bd-1b943bc34566" (UID: "3eee2d2a-42ff-4d2b-b8bd-1b943bc34566"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:19.446507 master-0 kubenswrapper[33867]: I0219 03:43:19.446418 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-config-data" (OuterVolumeSpecName: "config-data") pod "3eee2d2a-42ff-4d2b-b8bd-1b943bc34566" (UID: "3eee2d2a-42ff-4d2b-b8bd-1b943bc34566"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:19.515004 master-0 kubenswrapper[33867]: I0219 03:43:19.514054 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:19.515004 master-0 kubenswrapper[33867]: I0219 03:43:19.514105 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zv9s\" (UniqueName: \"kubernetes.io/projected/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-kube-api-access-2zv9s\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:19.515004 master-0 kubenswrapper[33867]: I0219 03:43:19.514118 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:19.614457 master-0 kubenswrapper[33867]: I0219 03:43:19.614248 33867 generic.go:334] "Generic (PLEG): container finished" podID="3eee2d2a-42ff-4d2b-b8bd-1b943bc34566" containerID="6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9" exitCode=0 Feb 19 03:43:19.615801 master-0 kubenswrapper[33867]: I0219 03:43:19.615762 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 03:43:19.619294 master-0 kubenswrapper[33867]: I0219 03:43:19.619226 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566","Type":"ContainerDied","Data":"6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9"} Feb 19 03:43:19.619398 master-0 kubenswrapper[33867]: I0219 03:43:19.619302 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3eee2d2a-42ff-4d2b-b8bd-1b943bc34566","Type":"ContainerDied","Data":"089c9218424172930a2368694742883f34cf82a2f07f8dd7a69ca346c884fb57"} Feb 19 03:43:19.619398 master-0 kubenswrapper[33867]: I0219 03:43:19.619334 33867 scope.go:117] "RemoveContainer" containerID="6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9" Feb 19 03:43:19.669798 master-0 kubenswrapper[33867]: I0219 03:43:19.669009 33867 scope.go:117] "RemoveContainer" containerID="6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9" Feb 19 03:43:19.686677 master-0 kubenswrapper[33867]: I0219 03:43:19.684857 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:43:19.686677 master-0 kubenswrapper[33867]: E0219 03:43:19.685021 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9\": container with ID starting with 6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9 not found: ID does not exist" containerID="6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9" Feb 19 03:43:19.686677 master-0 kubenswrapper[33867]: I0219 03:43:19.685102 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9"} err="failed to get container status \"6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9\": rpc error: code = NotFound desc = could not find container \"6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9\": container with ID starting 
with 6ff5162c7122ad45ce485dd9dfde12fb1fe169fa0cc9a90835afa21f206837c9 not found: ID does not exist" Feb 19 03:43:19.702745 master-0 kubenswrapper[33867]: I0219 03:43:19.702388 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:43:19.716410 master-0 kubenswrapper[33867]: I0219 03:43:19.716342 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:19.736184 master-0 kubenswrapper[33867]: I0219 03:43:19.736058 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:43:19.736594 master-0 kubenswrapper[33867]: E0219 03:43:19.736563 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eee2d2a-42ff-4d2b-b8bd-1b943bc34566" containerName="nova-scheduler-scheduler" Feb 19 03:43:19.736594 master-0 kubenswrapper[33867]: I0219 03:43:19.736579 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eee2d2a-42ff-4d2b-b8bd-1b943bc34566" containerName="nova-scheduler-scheduler" Feb 19 03:43:19.736951 master-0 kubenswrapper[33867]: I0219 03:43:19.736922 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eee2d2a-42ff-4d2b-b8bd-1b943bc34566" containerName="nova-scheduler-scheduler" Feb 19 03:43:19.737813 master-0 kubenswrapper[33867]: I0219 03:43:19.737776 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 03:43:19.740524 master-0 kubenswrapper[33867]: I0219 03:43:19.740475 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 03:43:19.754361 master-0 kubenswrapper[33867]: I0219 03:43:19.750598 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:43:19.894280 master-0 kubenswrapper[33867]: I0219 03:43:19.893573 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Feb 19 03:43:19.940284 master-0 kubenswrapper[33867]: I0219 03:43:19.936042 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:19.940284 master-0 kubenswrapper[33867]: I0219 03:43:19.936159 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-config-data\") pod \"nova-scheduler-0\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:19.940284 master-0 kubenswrapper[33867]: I0219 03:43:19.936210 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcc25\" (UniqueName: \"kubernetes.io/projected/6319ca32-f7b0-458a-8fe3-137c7aa4254a-kube-api-access-jcc25\") pod \"nova-scheduler-0\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:20.043282 master-0 kubenswrapper[33867]: I0219 03:43:20.040524 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " 
pod="openstack/nova-scheduler-0" Feb 19 03:43:20.043282 master-0 kubenswrapper[33867]: I0219 03:43:20.040602 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-config-data\") pod \"nova-scheduler-0\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:20.043282 master-0 kubenswrapper[33867]: I0219 03:43:20.040631 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcc25\" (UniqueName: \"kubernetes.io/projected/6319ca32-f7b0-458a-8fe3-137c7aa4254a-kube-api-access-jcc25\") pod \"nova-scheduler-0\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:20.057274 master-0 kubenswrapper[33867]: I0219 03:43:20.052909 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-config-data\") pod \"nova-scheduler-0\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:20.066283 master-0 kubenswrapper[33867]: I0219 03:43:20.059832 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:20.079031 master-0 kubenswrapper[33867]: I0219 03:43:20.078992 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcc25\" (UniqueName: \"kubernetes.io/projected/6319ca32-f7b0-458a-8fe3-137c7aa4254a-kube-api-access-jcc25\") pod \"nova-scheduler-0\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:20.085168 master-0 kubenswrapper[33867]: I0219 03:43:20.085106 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 03:43:20.596888 master-0 kubenswrapper[33867]: I0219 03:43:20.596820 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:43:20.636511 master-0 kubenswrapper[33867]: I0219 03:43:20.636404 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6319ca32-f7b0-458a-8fe3-137c7aa4254a","Type":"ContainerStarted","Data":"a5f13fa64e53eb49f77ead02e52fa4811da0bdf008204b3a83595054258ffc25"} Feb 19 03:43:20.640536 master-0 kubenswrapper[33867]: I0219 03:43:20.639452 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd9875f6-a014-415a-b136-4a87ca41c168","Type":"ContainerStarted","Data":"edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967"} Feb 19 03:43:20.640536 master-0 kubenswrapper[33867]: I0219 03:43:20.639514 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd9875f6-a014-415a-b136-4a87ca41c168","Type":"ContainerStarted","Data":"4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f"} Feb 19 03:43:20.640536 master-0 kubenswrapper[33867]: I0219 03:43:20.639527 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd9875f6-a014-415a-b136-4a87ca41c168","Type":"ContainerStarted","Data":"892340c1b93b5151382850d10c2353ab6aaab9890d2c1be36f8b979d97c84787"} Feb 19 03:43:20.683862 master-0 kubenswrapper[33867]: I0219 03:43:20.683693 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.683667093 podStartE2EDuration="2.683667093s" podCreationTimestamp="2026-02-19 03:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:20.682241403 +0000 UTC m=+1205.978912014" watchObservedRunningTime="2026-02-19 03:43:20.683667093 +0000 UTC m=+1205.980337694" Feb 19 03:43:20.991322 master-0 kubenswrapper[33867]: I0219 03:43:20.991249 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eee2d2a-42ff-4d2b-b8bd-1b943bc34566" path="/var/lib/kubelet/pods/3eee2d2a-42ff-4d2b-b8bd-1b943bc34566/volumes" Feb 19 03:43:21.114309 master-0 kubenswrapper[33867]: I0219 03:43:21.114242 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 03:43:21.125690 master-0 kubenswrapper[33867]: I0219 03:43:21.125596 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 03:43:21.532174 master-0 kubenswrapper[33867]: I0219 03:43:21.532023 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Feb 19 03:43:21.653816 master-0 kubenswrapper[33867]: I0219 03:43:21.653741 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6319ca32-f7b0-458a-8fe3-137c7aa4254a","Type":"ContainerStarted","Data":"76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085"} Feb 19 03:43:21.711283 master-0 kubenswrapper[33867]: I0219 03:43:21.710695 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 19 03:43:21.742847 master-0 kubenswrapper[33867]: I0219 03:43:21.742479 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" 
podStartSLOduration=2.742448626 podStartE2EDuration="2.742448626s" podCreationTimestamp="2026-02-19 03:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:21.725371042 +0000 UTC m=+1207.022041653" watchObservedRunningTime="2026-02-19 03:43:21.742448626 +0000 UTC m=+1207.039119247" Feb 19 03:43:22.670679 master-0 kubenswrapper[33867]: I0219 03:43:22.670575 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 19 03:43:24.978840 master-0 kubenswrapper[33867]: I0219 03:43:24.978732 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 03:43:25.086110 master-0 kubenswrapper[33867]: I0219 03:43:25.086024 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 03:43:26.115015 master-0 kubenswrapper[33867]: I0219 03:43:26.114928 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 03:43:26.115015 master-0 kubenswrapper[33867]: I0219 03:43:26.115008 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 03:43:27.133884 master-0 kubenswrapper[33867]: I0219 03:43:27.133615 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.16:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:43:27.135016 master-0 kubenswrapper[33867]: I0219 03:43:27.133792 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.16:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:43:29.173350 master-0 kubenswrapper[33867]: I0219 03:43:29.173272 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 03:43:29.173350 master-0 kubenswrapper[33867]: I0219 03:43:29.173352 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 03:43:30.086059 master-0 kubenswrapper[33867]: I0219 03:43:30.085985 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 03:43:30.117492 master-0 kubenswrapper[33867]: I0219 03:43:30.117440 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 03:43:30.255513 master-0 kubenswrapper[33867]: I0219 03:43:30.255426 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cd9875f6-a014-415a-b136-4a87ca41c168" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.17:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 03:43:30.256508 master-0 kubenswrapper[33867]: I0219 03:43:30.255426 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cd9875f6-a014-415a-b136-4a87ca41c168" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.17:8774/\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Feb 19 03:43:30.807004 master-0 kubenswrapper[33867]: I0219 03:43:30.806942 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 03:43:32.629621 master-0 kubenswrapper[33867]: I0219 03:43:32.629552 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:32.718834 master-0 kubenswrapper[33867]: I0219 03:43:32.718751 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-config-data\") pod \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " Feb 19 03:43:32.719169 master-0 kubenswrapper[33867]: I0219 03:43:32.718902 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-combined-ca-bundle\") pod \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " Feb 19 03:43:32.719169 master-0 kubenswrapper[33867]: I0219 03:43:32.719041 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5zhf\" (UniqueName: \"kubernetes.io/projected/4fe3361f-a6c5-4180-b26f-03763a4c8db6-kube-api-access-j5zhf\") pod \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\" (UID: \"4fe3361f-a6c5-4180-b26f-03763a4c8db6\") " Feb 19 03:43:32.726211 master-0 kubenswrapper[33867]: I0219 03:43:32.726120 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe3361f-a6c5-4180-b26f-03763a4c8db6-kube-api-access-j5zhf" (OuterVolumeSpecName: "kube-api-access-j5zhf") pod "4fe3361f-a6c5-4180-b26f-03763a4c8db6" (UID: "4fe3361f-a6c5-4180-b26f-03763a4c8db6"). InnerVolumeSpecName "kube-api-access-j5zhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:32.762360 master-0 kubenswrapper[33867]: I0219 03:43:32.762129 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-config-data" (OuterVolumeSpecName: "config-data") pod "4fe3361f-a6c5-4180-b26f-03763a4c8db6" (UID: "4fe3361f-a6c5-4180-b26f-03763a4c8db6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:32.763781 master-0 kubenswrapper[33867]: I0219 03:43:32.763709 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fe3361f-a6c5-4180-b26f-03763a4c8db6" (UID: "4fe3361f-a6c5-4180-b26f-03763a4c8db6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:32.801457 master-0 kubenswrapper[33867]: I0219 03:43:32.801349 33867 generic.go:334] "Generic (PLEG): container finished" podID="4fe3361f-a6c5-4180-b26f-03763a4c8db6" containerID="01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8" exitCode=137 Feb 19 03:43:32.801457 master-0 kubenswrapper[33867]: I0219 03:43:32.801436 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:32.801966 master-0 kubenswrapper[33867]: I0219 03:43:32.801438 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4fe3361f-a6c5-4180-b26f-03763a4c8db6","Type":"ContainerDied","Data":"01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8"} Feb 19 03:43:32.801966 master-0 kubenswrapper[33867]: I0219 03:43:32.801548 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4fe3361f-a6c5-4180-b26f-03763a4c8db6","Type":"ContainerDied","Data":"f27dc9b15b5203b75b4dbebca8152c85cb4ede867b0640b52fb50d9dc55cc724"} Feb 19 03:43:32.801966 master-0 kubenswrapper[33867]: I0219 03:43:32.801586 33867 scope.go:117] "RemoveContainer" containerID="01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8" Feb 19 03:43:32.826681 master-0 kubenswrapper[33867]: I0219 03:43:32.826596 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5zhf\" (UniqueName: \"kubernetes.io/projected/4fe3361f-a6c5-4180-b26f-03763a4c8db6-kube-api-access-j5zhf\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:32.826681 master-0 kubenswrapper[33867]: I0219 03:43:32.826652 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:32.826681 master-0 kubenswrapper[33867]: I0219 03:43:32.826666 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe3361f-a6c5-4180-b26f-03763a4c8db6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:32.866062 master-0 kubenswrapper[33867]: I0219 03:43:32.865937 33867 scope.go:117] "RemoveContainer" containerID="01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8" Feb 19 03:43:32.866716 master-0 kubenswrapper[33867]: E0219 03:43:32.866669 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8\": container with ID starting with 01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8 not found: ID does not exist" containerID="01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8" Feb 19 03:43:32.866832 master-0 kubenswrapper[33867]: I0219 03:43:32.866734 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8"} err="failed to get container status \"01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8\": rpc error: code = NotFound desc = could not find container \"01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8\": container with ID starting with 01e35f1d07d270f66c6c48a2118a19041329a5a11b7bc6bc5d73505e06028be8 not found: ID does not exist" Feb 19 03:43:32.870280 master-0 kubenswrapper[33867]: I0219 03:43:32.869396 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 03:43:32.891372 master-0 kubenswrapper[33867]: I0219 03:43:32.891293 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 03:43:32.906513 master-0 kubenswrapper[33867]: I0219 03:43:32.906444 33867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 03:43:32.907401 master-0 kubenswrapper[33867]: E0219 03:43:32.907125 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe3361f-a6c5-4180-b26f-03763a4c8db6" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 03:43:32.907401 master-0 kubenswrapper[33867]: I0219 03:43:32.907152 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe3361f-a6c5-4180-b26f-03763a4c8db6" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 03:43:32.907589 master-0 kubenswrapper[33867]: I0219 03:43:32.907556 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe3361f-a6c5-4180-b26f-03763a4c8db6" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 03:43:32.909376 master-0 kubenswrapper[33867]: I0219 03:43:32.909332 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:32.911877 master-0 kubenswrapper[33867]: I0219 03:43:32.911830 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 03:43:32.912273 master-0 kubenswrapper[33867]: I0219 03:43:32.912211 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 03:43:32.914225 master-0 kubenswrapper[33867]: I0219 03:43:32.914183 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 03:43:32.923495 master-0 kubenswrapper[33867]: I0219 03:43:32.923425 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 03:43:32.972737 master-0 kubenswrapper[33867]: I0219 03:43:32.972630 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe3361f-a6c5-4180-b26f-03763a4c8db6" path="/var/lib/kubelet/pods/4fe3361f-a6c5-4180-b26f-03763a4c8db6/volumes" Feb 19 03:43:33.031905 master-0 kubenswrapper[33867]: I0219 03:43:33.031748 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.031905 master-0 kubenswrapper[33867]: I0219 03:43:33.031834 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.032832 master-0 kubenswrapper[33867]: I0219 03:43:33.031964 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.032832 master-0 kubenswrapper[33867]: I0219 03:43:33.032005 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fq9m\" (UniqueName: \"kubernetes.io/projected/7e06e99e-0862-48e2-b640-8fd02ed338dd-kube-api-access-9fq9m\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.032832 master-0 kubenswrapper[33867]: I0219 03:43:33.032122 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.134912 master-0 kubenswrapper[33867]: I0219 03:43:33.134838 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.134912 master-0 kubenswrapper[33867]: I0219 03:43:33.134910 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fq9m\" (UniqueName: \"kubernetes.io/projected/7e06e99e-0862-48e2-b640-8fd02ed338dd-kube-api-access-9fq9m\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.135219 master-0 kubenswrapper[33867]: I0219 03:43:33.135180 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.135717 master-0 kubenswrapper[33867]: I0219 03:43:33.135661 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.135832 master-0 kubenswrapper[33867]: I0219 03:43:33.135744 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.138525 master-0 kubenswrapper[33867]: I0219 03:43:33.138481 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.138731 master-0 kubenswrapper[33867]: I0219 03:43:33.138700 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.139547 master-0 kubenswrapper[33867]: I0219 03:43:33.139517 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.139809 master-0 kubenswrapper[33867]: I0219 03:43:33.139781 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e06e99e-0862-48e2-b640-8fd02ed338dd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.152918 master-0 kubenswrapper[33867]: I0219 03:43:33.152877 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fq9m\" (UniqueName: \"kubernetes.io/projected/7e06e99e-0862-48e2-b640-8fd02ed338dd-kube-api-access-9fq9m\") pod \"nova-cell1-novncproxy-0\" (UID: \"7e06e99e-0862-48e2-b640-8fd02ed338dd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.249753 master-0 kubenswrapper[33867]: I0219 03:43:33.249687 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:33.766012 master-0 kubenswrapper[33867]: I0219 03:43:33.765795 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 03:43:33.815375 master-0 kubenswrapper[33867]: I0219 03:43:33.815310 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7e06e99e-0862-48e2-b640-8fd02ed338dd","Type":"ContainerStarted","Data":"ddf80262e435b3f42f869d9a1aff1cb4db66c585a363da213736624acf1b8817"} Feb 19 03:43:34.839980 master-0 kubenswrapper[33867]: I0219 03:43:34.839914 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7e06e99e-0862-48e2-b640-8fd02ed338dd","Type":"ContainerStarted","Data":"6bc3bb32789af9bb5b4b4bda879dc8b70d27c7907fcca69b00d0d7faf5089bdc"} Feb 19 03:43:34.886268 master-0 kubenswrapper[33867]: I0219 03:43:34.886159 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.886130318 podStartE2EDuration="2.886130318s" podCreationTimestamp="2026-02-19 03:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:34.861906832 +0000 UTC m=+1220.158577463" watchObservedRunningTime="2026-02-19 03:43:34.886130318 +0000 UTC m=+1220.182800929" Feb 19 03:43:36.122017 master-0 kubenswrapper[33867]: I0219 03:43:36.121919 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 03:43:36.129213 master-0 kubenswrapper[33867]: I0219 03:43:36.129098 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 03:43:36.133958 master-0 kubenswrapper[33867]: I0219 03:43:36.133889 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 03:43:36.880401 master-0 kubenswrapper[33867]: I0219 03:43:36.880344 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 03:43:38.250808 master-0 kubenswrapper[33867]: I0219 03:43:38.250698 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:39.172551 master-0 kubenswrapper[33867]: I0219 03:43:39.172467 33867 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 03:43:39.173228 master-0 kubenswrapper[33867]: I0219 03:43:39.173180 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 03:43:39.173403 master-0 kubenswrapper[33867]: I0219 03:43:39.173374 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 03:43:39.177788 master-0 kubenswrapper[33867]: I0219 03:43:39.177759 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 03:43:39.907485 master-0 kubenswrapper[33867]: I0219 03:43:39.907401 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 03:43:39.911752 master-0 kubenswrapper[33867]: I0219 03:43:39.911705 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 03:43:40.173564 master-0 kubenswrapper[33867]: I0219 03:43:40.173338 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7587d49f7f-lcx7j"] Feb 19 03:43:40.187915 master-0 kubenswrapper[33867]: I0219 03:43:40.187825 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.205505 master-0 kubenswrapper[33867]: I0219 03:43:40.205230 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7587d49f7f-lcx7j"] Feb 19 03:43:40.277247 master-0 kubenswrapper[33867]: I0219 03:43:40.277150 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-ovsdbserver-nb\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.277593 master-0 kubenswrapper[33867]: I0219 03:43:40.277495 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q8gh\" (UniqueName: \"kubernetes.io/projected/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-kube-api-access-2q8gh\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.277670 master-0 kubenswrapper[33867]: I0219 03:43:40.277628 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-dns-swift-storage-0\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.277726 master-0 kubenswrapper[33867]: I0219 03:43:40.277707 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-dns-svc\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.278089 master-0 kubenswrapper[33867]: I0219 03:43:40.278049 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-ovsdbserver-sb\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: 
\"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.278515 master-0 kubenswrapper[33867]: I0219 03:43:40.278112 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-config\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.380062 master-0 kubenswrapper[33867]: I0219 03:43:40.379991 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-dns-svc\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.380346 master-0 kubenswrapper[33867]: I0219 03:43:40.380131 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-ovsdbserver-sb\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.380346 master-0 kubenswrapper[33867]: I0219 03:43:40.380175 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-config\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.380475 master-0 kubenswrapper[33867]: I0219 03:43:40.380361 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-ovsdbserver-nb\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.380475 master-0 kubenswrapper[33867]: I0219 03:43:40.380405 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q8gh\" (UniqueName: \"kubernetes.io/projected/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-kube-api-access-2q8gh\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.380584 master-0 kubenswrapper[33867]: I0219 03:43:40.380514 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-dns-swift-storage-0\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.382006 master-0 kubenswrapper[33867]: I0219 03:43:40.381959 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-dns-swift-storage-0\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.382400 master-0 kubenswrapper[33867]: I0219 03:43:40.382356 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-config\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.383163 master-0 kubenswrapper[33867]: I0219 03:43:40.383109 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-dns-svc\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.383926 master-0 kubenswrapper[33867]: I0219 03:43:40.383866 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-ovsdbserver-nb\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.383992 master-0 kubenswrapper[33867]: I0219 03:43:40.383890 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-ovsdbserver-sb\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.412952 master-0 kubenswrapper[33867]: I0219 03:43:40.412881 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q8gh\" (UniqueName: \"kubernetes.io/projected/2d51ba3f-9ce6-49b9-a314-7d212c55ff8e-kube-api-access-2q8gh\") pod \"dnsmasq-dns-7587d49f7f-lcx7j\" (UID: \"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e\") " pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:40.529919 master-0 kubenswrapper[33867]: I0219 03:43:40.529754 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:41.264683 master-0 kubenswrapper[33867]: I0219 03:43:41.264600 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7587d49f7f-lcx7j"] Feb 19 03:43:41.952096 master-0 kubenswrapper[33867]: I0219 03:43:41.949335 33867 generic.go:334] "Generic (PLEG): container finished" podID="2d51ba3f-9ce6-49b9-a314-7d212c55ff8e" containerID="b1fe1d03c98ba0d60f08e1a7c4a29514e68ca979005252a260dfd115700e0909" exitCode=0 Feb 19 03:43:41.952096 master-0 kubenswrapper[33867]: I0219 03:43:41.951626 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" event={"ID":"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e","Type":"ContainerDied","Data":"b1fe1d03c98ba0d60f08e1a7c4a29514e68ca979005252a260dfd115700e0909"} Feb 19 03:43:41.952096 master-0 kubenswrapper[33867]: I0219 03:43:41.951664 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" event={"ID":"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e","Type":"ContainerStarted","Data":"daf369b50f9dc8a2917ab455224fba97e17a65b6dc17fe8788059678e7e4c083"} Feb 19 03:43:42.969599 master-0 kubenswrapper[33867]: I0219 03:43:42.969522 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:42.969599 master-0 kubenswrapper[33867]: I0219 03:43:42.969593 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" event={"ID":"2d51ba3f-9ce6-49b9-a314-7d212c55ff8e","Type":"ContainerStarted","Data":"25118b1b01a91f10e0d4c8964e2ac5ef8cfd4e7ff5b86ae8eb34c0a0052bf027"} Feb 19 03:43:42.992427 master-0 kubenswrapper[33867]: I0219 03:43:42.992335 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" podStartSLOduration=2.9923169339999998 podStartE2EDuration="2.992316934s" podCreationTimestamp="2026-02-19 03:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:42.989328269 +0000 UTC m=+1228.285998900" watchObservedRunningTime="2026-02-19 03:43:42.992316934 +0000 UTC m=+1228.288987535" Feb 19 03:43:43.212943 master-0 kubenswrapper[33867]: I0219 03:43:43.212874 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:43.213696 master-0 kubenswrapper[33867]: I0219 03:43:43.213636 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cd9875f6-a014-415a-b136-4a87ca41c168" containerName="nova-api-log" containerID="cri-o://4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f" gracePeriod=30 Feb 19 03:43:43.216120 master-0 kubenswrapper[33867]: I0219 03:43:43.213778 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cd9875f6-a014-415a-b136-4a87ca41c168" containerName="nova-api-api" containerID="cri-o://edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967" gracePeriod=30 Feb 19 03:43:43.250079 master-0 kubenswrapper[33867]: I0219 03:43:43.250012 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:43.272991 master-0 kubenswrapper[33867]: I0219 03:43:43.272930 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 
03:43:43.985486 master-0 kubenswrapper[33867]: I0219 03:43:43.985421 33867 generic.go:334] "Generic (PLEG): container finished" podID="cd9875f6-a014-415a-b136-4a87ca41c168" containerID="4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f" exitCode=143 Feb 19 03:43:43.987607 master-0 kubenswrapper[33867]: I0219 03:43:43.987553 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd9875f6-a014-415a-b136-4a87ca41c168","Type":"ContainerDied","Data":"4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f"} Feb 19 03:43:44.006608 master-0 kubenswrapper[33867]: I0219 03:43:44.006516 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 03:43:44.270513 master-0 kubenswrapper[33867]: I0219 03:43:44.270340 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bhrf8"] Feb 19 03:43:44.273729 master-0 kubenswrapper[33867]: I0219 03:43:44.273107 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.275698 master-0 kubenswrapper[33867]: I0219 03:43:44.275537 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 03:43:44.277269 master-0 kubenswrapper[33867]: I0219 03:43:44.277219 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 03:43:44.316503 master-0 kubenswrapper[33867]: I0219 03:43:44.315629 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-x6cl9"] Feb 19 03:43:44.318038 master-0 kubenswrapper[33867]: I0219 03:43:44.318010 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.330457 master-0 kubenswrapper[33867]: I0219 03:43:44.330317 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bhrf8"] Feb 19 03:43:44.348114 master-0 kubenswrapper[33867]: I0219 03:43:44.348038 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-x6cl9"] Feb 19 03:43:44.404579 master-0 kubenswrapper[33867]: I0219 03:43:44.404507 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-config-data\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.404579 master-0 kubenswrapper[33867]: I0219 03:43:44.404572 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-scripts\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.404921 master-0 kubenswrapper[33867]: I0219 03:43:44.404855 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvb98\" (UniqueName: \"kubernetes.io/projected/f2540117-66a4-4bde-80ce-e1c15c51b076-kube-api-access-tvb98\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.405228 master-0 kubenswrapper[33867]: I0219 03:43:44.405201 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-865n9\" (UniqueName: \"kubernetes.io/projected/d72d9962-afbf-436d-9250-37b6ae7f252d-kube-api-access-865n9\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.405396 master-0 kubenswrapper[33867]: I0219 03:43:44.405356 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-config-data\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.405547 master-0 kubenswrapper[33867]: I0219 03:43:44.405526 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-combined-ca-bundle\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.405635 master-0 kubenswrapper[33867]: I0219 03:43:44.405616 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.405835 master-0 kubenswrapper[33867]: I0219 03:43:44.405800 33867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-scripts\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.508883 master-0 kubenswrapper[33867]: I0219 03:43:44.508797 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-865n9\" (UniqueName: \"kubernetes.io/projected/d72d9962-afbf-436d-9250-37b6ae7f252d-kube-api-access-865n9\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.509117 master-0 kubenswrapper[33867]: I0219 03:43:44.508897 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-config-data\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.509117 master-0 kubenswrapper[33867]: I0219 03:43:44.508959 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-combined-ca-bundle\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.509368 master-0 kubenswrapper[33867]: I0219 03:43:44.509298 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.509623 master-0 kubenswrapper[33867]: I0219 03:43:44.509585 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-scripts\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.509850 master-0 kubenswrapper[33867]: I0219 03:43:44.509804 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-config-data\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.510013 master-0 kubenswrapper[33867]: I0219 03:43:44.509980 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-scripts\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.510139 master-0 kubenswrapper[33867]: I0219 03:43:44.510115 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvb98\" (UniqueName: \"kubernetes.io/projected/f2540117-66a4-4bde-80ce-e1c15c51b076-kube-api-access-tvb98\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 
03:43:44.513965 master-0 kubenswrapper[33867]: I0219 03:43:44.513749 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-config-data\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.513965 master-0 kubenswrapper[33867]: I0219 03:43:44.513761 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-config-data\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.514321 master-0 kubenswrapper[33867]: I0219 03:43:44.514185 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-scripts\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.517378 master-0 kubenswrapper[33867]: I0219 03:43:44.516490 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-scripts\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.517378 master-0 kubenswrapper[33867]: I0219 03:43:44.516742 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-combined-ca-bundle\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.522791 master-0 kubenswrapper[33867]: I0219 03:43:44.522683 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.529921 master-0 kubenswrapper[33867]: I0219 03:43:44.529727 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-865n9\" (UniqueName: \"kubernetes.io/projected/d72d9962-afbf-436d-9250-37b6ae7f252d-kube-api-access-865n9\") pod \"nova-cell1-cell-mapping-bhrf8\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.542011 master-0 kubenswrapper[33867]: I0219 03:43:44.541951 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvb98\" (UniqueName: \"kubernetes.io/projected/f2540117-66a4-4bde-80ce-e1c15c51b076-kube-api-access-tvb98\") pod \"nova-cell1-host-discover-x6cl9\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:44.609895 master-0 kubenswrapper[33867]: I0219 03:43:44.609803 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:44.646383 master-0 kubenswrapper[33867]: I0219 03:43:44.646315 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:45.133283 master-0 kubenswrapper[33867]: I0219 03:43:45.133201 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bhrf8"] Feb 19 03:43:45.309180 master-0 kubenswrapper[33867]: I0219 03:43:45.309103 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-x6cl9"] Feb 19 03:43:46.061585 master-0 kubenswrapper[33867]: I0219 03:43:46.052650 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-x6cl9" event={"ID":"f2540117-66a4-4bde-80ce-e1c15c51b076","Type":"ContainerStarted","Data":"972c1b384cd84e7e036a42b112d4ec39ba13e359aa04f55f923d9cc8f8e22ad6"} Feb 19 03:43:46.061585 master-0 kubenswrapper[33867]: I0219 03:43:46.052721 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-x6cl9" event={"ID":"f2540117-66a4-4bde-80ce-e1c15c51b076","Type":"ContainerStarted","Data":"4816b44b1d020d4444721d7231508b174aefe41761a0ebb5f629de1ec43fbb62"} Feb 19 03:43:46.061585 master-0 kubenswrapper[33867]: I0219 03:43:46.057813 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bhrf8" event={"ID":"d72d9962-afbf-436d-9250-37b6ae7f252d","Type":"ContainerStarted","Data":"d45ee3fba32f135b55d03d03520c7c53e77d331357ff2f4091088182cc20afee"} Feb 19 03:43:46.061585 master-0 kubenswrapper[33867]: I0219 03:43:46.057879 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bhrf8" event={"ID":"d72d9962-afbf-436d-9250-37b6ae7f252d","Type":"ContainerStarted","Data":"c1d0d745278071699bbd9cf3835cc5cdf4ce3855757b45500baf71510d3bc1ab"} Feb 19 03:43:46.118037 master-0 kubenswrapper[33867]: I0219 03:43:46.117937 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-x6cl9" podStartSLOduration=2.117916857 podStartE2EDuration="2.117916857s" podCreationTimestamp="2026-02-19 03:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:46.097082867 +0000 UTC m=+1231.393753498" watchObservedRunningTime="2026-02-19 03:43:46.117916857 +0000 UTC m=+1231.414587468" Feb 19 03:43:46.122572 master-0 kubenswrapper[33867]: I0219 03:43:46.122316 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bhrf8" podStartSLOduration=2.122291621 podStartE2EDuration="2.122291621s" podCreationTimestamp="2026-02-19 03:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:46.115000305 +0000 UTC m=+1231.411670916" watchObservedRunningTime="2026-02-19 03:43:46.122291621 +0000 UTC m=+1231.418962252" Feb 19 03:43:46.940853 master-0 kubenswrapper[33867]: I0219 03:43:46.940808 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:46.999829 master-0 kubenswrapper[33867]: I0219 03:43:46.999680 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-config-data\") pod \"cd9875f6-a014-415a-b136-4a87ca41c168\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " Feb 19 03:43:46.999829 master-0 kubenswrapper[33867]: I0219 03:43:46.999822 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-combined-ca-bundle\") pod \"cd9875f6-a014-415a-b136-4a87ca41c168\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " Feb 19 03:43:47.000100 master-0 kubenswrapper[33867]: I0219 03:43:47.000059 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd9875f6-a014-415a-b136-4a87ca41c168-logs\") pod \"cd9875f6-a014-415a-b136-4a87ca41c168\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " Feb 19 03:43:47.000190 master-0 kubenswrapper[33867]: I0219 03:43:47.000167 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8rg7\" (UniqueName: \"kubernetes.io/projected/cd9875f6-a014-415a-b136-4a87ca41c168-kube-api-access-w8rg7\") pod \"cd9875f6-a014-415a-b136-4a87ca41c168\" (UID: \"cd9875f6-a014-415a-b136-4a87ca41c168\") " Feb 19 03:43:47.005756 master-0 kubenswrapper[33867]: I0219 03:43:47.005463 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9875f6-a014-415a-b136-4a87ca41c168-kube-api-access-w8rg7" (OuterVolumeSpecName: "kube-api-access-w8rg7") pod "cd9875f6-a014-415a-b136-4a87ca41c168" (UID: "cd9875f6-a014-415a-b136-4a87ca41c168"). InnerVolumeSpecName "kube-api-access-w8rg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:47.005955 master-0 kubenswrapper[33867]: I0219 03:43:47.005853 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd9875f6-a014-415a-b136-4a87ca41c168-logs" (OuterVolumeSpecName: "logs") pod "cd9875f6-a014-415a-b136-4a87ca41c168" (UID: "cd9875f6-a014-415a-b136-4a87ca41c168"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:43:47.039579 master-0 kubenswrapper[33867]: I0219 03:43:47.039509 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd9875f6-a014-415a-b136-4a87ca41c168" (UID: "cd9875f6-a014-415a-b136-4a87ca41c168"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:47.069834 master-0 kubenswrapper[33867]: I0219 03:43:47.069676 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-config-data" (OuterVolumeSpecName: "config-data") pod "cd9875f6-a014-415a-b136-4a87ca41c168" (UID: "cd9875f6-a014-415a-b136-4a87ca41c168"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:47.075167 master-0 kubenswrapper[33867]: I0219 03:43:47.075032 33867 generic.go:334] "Generic (PLEG): container finished" podID="cd9875f6-a014-415a-b136-4a87ca41c168" containerID="edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967" exitCode=0 Feb 19 03:43:47.077533 master-0 kubenswrapper[33867]: I0219 03:43:47.077489 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:47.077989 master-0 kubenswrapper[33867]: I0219 03:43:47.077888 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd9875f6-a014-415a-b136-4a87ca41c168","Type":"ContainerDied","Data":"edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967"} Feb 19 03:43:47.078823 master-0 kubenswrapper[33867]: I0219 03:43:47.078792 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd9875f6-a014-415a-b136-4a87ca41c168","Type":"ContainerDied","Data":"892340c1b93b5151382850d10c2353ab6aaab9890d2c1be36f8b979d97c84787"} Feb 19 03:43:47.078899 master-0 kubenswrapper[33867]: I0219 03:43:47.078831 33867 scope.go:117] "RemoveContainer" containerID="edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967" Feb 19 03:43:47.109589 master-0 kubenswrapper[33867]: I0219 03:43:47.109509 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:47.109589 master-0 kubenswrapper[33867]: I0219 03:43:47.109580 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9875f6-a014-415a-b136-4a87ca41c168-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:47.109589 master-0 kubenswrapper[33867]: I0219 03:43:47.109601 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd9875f6-a014-415a-b136-4a87ca41c168-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:47.109975 master-0 kubenswrapper[33867]: I0219 03:43:47.109622 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8rg7\" (UniqueName: \"kubernetes.io/projected/cd9875f6-a014-415a-b136-4a87ca41c168-kube-api-access-w8rg7\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:47.193274 master-0 kubenswrapper[33867]: I0219 03:43:47.193219 33867 scope.go:117] "RemoveContainer" containerID="4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f" Feb 19 03:43:47.196036 master-0 kubenswrapper[33867]: I0219 03:43:47.195979 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:47.230559 master-0 kubenswrapper[33867]: I0219 03:43:47.230494 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:47.254711 master-0 kubenswrapper[33867]: I0219 03:43:47.254655 33867 scope.go:117] "RemoveContainer" containerID="edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967" Feb 19 03:43:47.255159 master-0 kubenswrapper[33867]: E0219 03:43:47.255123 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967\": container with ID starting with edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967 not found: ID does not 
exist" containerID="edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967" Feb 19 03:43:47.255242 master-0 kubenswrapper[33867]: I0219 03:43:47.255170 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967"} err="failed to get container status \"edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967\": rpc error: code = NotFound desc = could not find container \"edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967\": container with ID starting with edf6f899966c1a6a93b4547e8d9bae59b8696bb1f9f8e3535448e112d4463967 not found: ID does not exist" Feb 19 03:43:47.255242 master-0 kubenswrapper[33867]: I0219 03:43:47.255201 33867 scope.go:117] "RemoveContainer" containerID="4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f" Feb 19 03:43:47.255702 master-0 kubenswrapper[33867]: E0219 03:43:47.255680 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f\": container with ID starting with 4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f not found: ID does not exist" containerID="4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f" Feb 19 03:43:47.255777 master-0 kubenswrapper[33867]: I0219 03:43:47.255707 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f"} err="failed to get container status \"4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f\": rpc error: code = NotFound desc = could not find container \"4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f\": container with ID starting with 4810af21c7c6e588faa7ce672711c654cdbd7178b2fad17001d3e0b151bdd24f not found: ID does not exist" Feb 19 03:43:47.267954 master-0 kubenswrapper[33867]: I0219 03:43:47.267898 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:47.268670 master-0 kubenswrapper[33867]: E0219 03:43:47.268614 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9875f6-a014-415a-b136-4a87ca41c168" containerName="nova-api-log" Feb 19 03:43:47.268670 master-0 kubenswrapper[33867]: I0219 03:43:47.268643 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9875f6-a014-415a-b136-4a87ca41c168" containerName="nova-api-log" Feb 19 03:43:47.268824 master-0 kubenswrapper[33867]: E0219 03:43:47.268679 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9875f6-a014-415a-b136-4a87ca41c168" containerName="nova-api-api" Feb 19 03:43:47.268824 master-0 kubenswrapper[33867]: I0219 03:43:47.268689 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9875f6-a014-415a-b136-4a87ca41c168" containerName="nova-api-api" Feb 19 03:43:47.269054 master-0 kubenswrapper[33867]: I0219 03:43:47.269025 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9875f6-a014-415a-b136-4a87ca41c168" containerName="nova-api-api" Feb 19 03:43:47.269117 master-0 kubenswrapper[33867]: I0219 03:43:47.269062 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9875f6-a014-415a-b136-4a87ca41c168" containerName="nova-api-log" Feb 19 03:43:47.271100 master-0 kubenswrapper[33867]: I0219 03:43:47.271054 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:47.281834 master-0 kubenswrapper[33867]: I0219 03:43:47.279046 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 03:43:47.282729 master-0 kubenswrapper[33867]: I0219 03:43:47.280173 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 03:43:47.282830 master-0 kubenswrapper[33867]: I0219 03:43:47.280392 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 03:43:47.283760 master-0 kubenswrapper[33867]: I0219 03:43:47.283712 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:47.420128 master-0 kubenswrapper[33867]: I0219 03:43:47.420058 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.420128 master-0 kubenswrapper[33867]: I0219 03:43:47.420116 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-public-tls-certs\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.420128 master-0 kubenswrapper[33867]: I0219 03:43:47.420138 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-config-data\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.420493 master-0 kubenswrapper[33867]: I0219 03:43:47.420186 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfztd\" (UniqueName: \"kubernetes.io/projected/fc6852d2-c313-4b94-a81f-45d2a3f5921d-kube-api-access-rfztd\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.420557 master-0 kubenswrapper[33867]: I0219 03:43:47.420487 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.420704 master-0 kubenswrapper[33867]: I0219 03:43:47.420680 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc6852d2-c313-4b94-a81f-45d2a3f5921d-logs\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.522956 master-0 kubenswrapper[33867]: I0219 03:43:47.522852 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfztd\" (UniqueName: \"kubernetes.io/projected/fc6852d2-c313-4b94-a81f-45d2a3f5921d-kube-api-access-rfztd\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.522956 master-0 kubenswrapper[33867]: I0219 03:43:47.522932 33867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.523473 master-0 kubenswrapper[33867]: I0219 03:43:47.523421 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc6852d2-c313-4b94-a81f-45d2a3f5921d-logs\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.524267 master-0 kubenswrapper[33867]: I0219 03:43:47.524225 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.524415 master-0 kubenswrapper[33867]: I0219 03:43:47.524396 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-public-tls-certs\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.524537 master-0 kubenswrapper[33867]: I0219 03:43:47.524519 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-config-data\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.524854 master-0 kubenswrapper[33867]: I0219 03:43:47.524800 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc6852d2-c313-4b94-a81f-45d2a3f5921d-logs\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.529499 master-0 kubenswrapper[33867]: I0219 03:43:47.529093 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.529631 master-0 kubenswrapper[33867]: I0219 03:43:47.529512 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-public-tls-certs\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.530082 master-0 kubenswrapper[33867]: I0219 03:43:47.530023 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.530379 master-0 kubenswrapper[33867]: I0219 03:43:47.530360 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-config-data\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.538532 master-0 kubenswrapper[33867]: I0219 
03:43:47.538497 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfztd\" (UniqueName: \"kubernetes.io/projected/fc6852d2-c313-4b94-a81f-45d2a3f5921d-kube-api-access-rfztd\") pod \"nova-api-0\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " pod="openstack/nova-api-0" Feb 19 03:43:47.602110 master-0 kubenswrapper[33867]: I0219 03:43:47.602053 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:48.102709 master-0 kubenswrapper[33867]: I0219 03:43:48.102630 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:48.975648 master-0 kubenswrapper[33867]: I0219 03:43:48.975514 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9875f6-a014-415a-b136-4a87ca41c168" path="/var/lib/kubelet/pods/cd9875f6-a014-415a-b136-4a87ca41c168/volumes" Feb 19 03:43:49.131501 master-0 kubenswrapper[33867]: I0219 03:43:49.131419 33867 generic.go:334] "Generic (PLEG): container finished" podID="f2540117-66a4-4bde-80ce-e1c15c51b076" containerID="972c1b384cd84e7e036a42b112d4ec39ba13e359aa04f55f923d9cc8f8e22ad6" exitCode=0 Feb 19 03:43:49.132172 master-0 kubenswrapper[33867]: I0219 03:43:49.131525 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-x6cl9" event={"ID":"f2540117-66a4-4bde-80ce-e1c15c51b076","Type":"ContainerDied","Data":"972c1b384cd84e7e036a42b112d4ec39ba13e359aa04f55f923d9cc8f8e22ad6"} Feb 19 03:43:49.135324 master-0 kubenswrapper[33867]: I0219 03:43:49.135135 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc6852d2-c313-4b94-a81f-45d2a3f5921d","Type":"ContainerStarted","Data":"d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3"} Feb 19 03:43:49.135324 master-0 kubenswrapper[33867]: I0219 03:43:49.135195 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc6852d2-c313-4b94-a81f-45d2a3f5921d","Type":"ContainerStarted","Data":"2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a"} Feb 19 03:43:49.135324 master-0 kubenswrapper[33867]: I0219 03:43:49.135218 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc6852d2-c313-4b94-a81f-45d2a3f5921d","Type":"ContainerStarted","Data":"3eb4a90b33991ee6f44c3721c7c29ecc6aa41ff7df9c25d8e356b3dc28b7d6d6"} Feb 19 03:43:49.194879 master-0 kubenswrapper[33867]: I0219 03:43:49.194746 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.194718978 podStartE2EDuration="2.194718978s" podCreationTimestamp="2026-02-19 03:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:49.178643913 +0000 UTC m=+1234.475314544" watchObservedRunningTime="2026-02-19 03:43:49.194718978 +0000 UTC m=+1234.491389589" Feb 19 03:43:50.532211 master-0 kubenswrapper[33867]: I0219 03:43:50.532138 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7587d49f7f-lcx7j" Feb 19 03:43:50.676442 master-0 kubenswrapper[33867]: I0219 03:43:50.675785 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:50.709972 master-0 kubenswrapper[33867]: I0219 03:43:50.700483 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9c88576cf-mrwrb"] Feb 19 03:43:50.709972 master-0 kubenswrapper[33867]: I0219 03:43:50.700749 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" podUID="9ce13545-41e9-40c6-9719-6aff7d61041d" containerName="dnsmasq-dns" containerID="cri-o://3af8fae0acb961ada9ace29d2211091de753e4a86e10f0ea515b0f365b204645" gracePeriod=10 Feb 19 03:43:50.773566 master-0 kubenswrapper[33867]: I0219 03:43:50.773504 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-config-data\") pod \"f2540117-66a4-4bde-80ce-e1c15c51b076\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " Feb 19 03:43:50.774000 master-0 kubenswrapper[33867]: I0219 03:43:50.773952 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-combined-ca-bundle\") pod \"f2540117-66a4-4bde-80ce-e1c15c51b076\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " Feb 19 03:43:50.774131 master-0 kubenswrapper[33867]: I0219 03:43:50.774100 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-scripts\") pod \"f2540117-66a4-4bde-80ce-e1c15c51b076\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " Feb 19 03:43:50.774767 master-0 kubenswrapper[33867]: I0219 03:43:50.774730 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvb98\" (UniqueName: \"kubernetes.io/projected/f2540117-66a4-4bde-80ce-e1c15c51b076-kube-api-access-tvb98\") pod \"f2540117-66a4-4bde-80ce-e1c15c51b076\" (UID: \"f2540117-66a4-4bde-80ce-e1c15c51b076\") " Feb 19 03:43:50.780188 master-0 kubenswrapper[33867]: I0219 03:43:50.780127 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-scripts" (OuterVolumeSpecName: "scripts") pod "f2540117-66a4-4bde-80ce-e1c15c51b076" (UID: "f2540117-66a4-4bde-80ce-e1c15c51b076"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:50.784035 master-0 kubenswrapper[33867]: I0219 03:43:50.783967 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2540117-66a4-4bde-80ce-e1c15c51b076-kube-api-access-tvb98" (OuterVolumeSpecName: "kube-api-access-tvb98") pod "f2540117-66a4-4bde-80ce-e1c15c51b076" (UID: "f2540117-66a4-4bde-80ce-e1c15c51b076"). InnerVolumeSpecName "kube-api-access-tvb98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:50.816786 master-0 kubenswrapper[33867]: I0219 03:43:50.816354 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2540117-66a4-4bde-80ce-e1c15c51b076" (UID: "f2540117-66a4-4bde-80ce-e1c15c51b076"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:50.826466 master-0 kubenswrapper[33867]: I0219 03:43:50.826385 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-config-data" (OuterVolumeSpecName: "config-data") pod "f2540117-66a4-4bde-80ce-e1c15c51b076" (UID: "f2540117-66a4-4bde-80ce-e1c15c51b076"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:50.882962 master-0 kubenswrapper[33867]: I0219 03:43:50.882907 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:50.882962 master-0 kubenswrapper[33867]: I0219 03:43:50.882958 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvb98\" (UniqueName: \"kubernetes.io/projected/f2540117-66a4-4bde-80ce-e1c15c51b076-kube-api-access-tvb98\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:50.883397 master-0 kubenswrapper[33867]: I0219 03:43:50.882978 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:50.883397 master-0 kubenswrapper[33867]: I0219 03:43:50.882993 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2540117-66a4-4bde-80ce-e1c15c51b076-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:51.175037 master-0 kubenswrapper[33867]: I0219 03:43:51.174968 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-x6cl9" event={"ID":"f2540117-66a4-4bde-80ce-e1c15c51b076","Type":"ContainerDied","Data":"4816b44b1d020d4444721d7231508b174aefe41761a0ebb5f629de1ec43fbb62"} Feb 19 03:43:51.175037 master-0 kubenswrapper[33867]: I0219 03:43:51.175031 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4816b44b1d020d4444721d7231508b174aefe41761a0ebb5f629de1ec43fbb62" Feb 19 03:43:51.175037 master-0 kubenswrapper[33867]: I0219 03:43:51.174996 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-x6cl9" Feb 19 03:43:51.186366 master-0 kubenswrapper[33867]: I0219 03:43:51.186120 33867 generic.go:334] "Generic (PLEG): container finished" podID="9ce13545-41e9-40c6-9719-6aff7d61041d" containerID="3af8fae0acb961ada9ace29d2211091de753e4a86e10f0ea515b0f365b204645" exitCode=0 Feb 19 03:43:51.186366 master-0 kubenswrapper[33867]: I0219 03:43:51.186316 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" event={"ID":"9ce13545-41e9-40c6-9719-6aff7d61041d","Type":"ContainerDied","Data":"3af8fae0acb961ada9ace29d2211091de753e4a86e10f0ea515b0f365b204645"} Feb 19 03:43:51.198060 master-0 kubenswrapper[33867]: I0219 03:43:51.191989 33867 generic.go:334] "Generic (PLEG): container finished" podID="d72d9962-afbf-436d-9250-37b6ae7f252d" containerID="d45ee3fba32f135b55d03d03520c7c53e77d331357ff2f4091088182cc20afee" exitCode=0 Feb 19 03:43:51.198060 master-0 kubenswrapper[33867]: I0219 03:43:51.192053 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bhrf8" event={"ID":"d72d9962-afbf-436d-9250-37b6ae7f252d","Type":"ContainerDied","Data":"d45ee3fba32f135b55d03d03520c7c53e77d331357ff2f4091088182cc20afee"} Feb 19 03:43:51.344443 master-0 kubenswrapper[33867]: I0219 03:43:51.344396 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:43:51.504086 master-0 kubenswrapper[33867]: I0219 03:43:51.503915 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-sb\") pod \"9ce13545-41e9-40c6-9719-6aff7d61041d\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " Feb 19 03:43:51.504504 master-0 kubenswrapper[33867]: I0219 03:43:51.504128 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-config\") pod \"9ce13545-41e9-40c6-9719-6aff7d61041d\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " Feb 19 03:43:51.504504 master-0 kubenswrapper[33867]: I0219 03:43:51.504376 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl4fw\" (UniqueName: \"kubernetes.io/projected/9ce13545-41e9-40c6-9719-6aff7d61041d-kube-api-access-fl4fw\") pod \"9ce13545-41e9-40c6-9719-6aff7d61041d\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " Feb 19 03:43:51.504504 master-0 kubenswrapper[33867]: I0219 03:43:51.504451 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-svc\") pod \"9ce13545-41e9-40c6-9719-6aff7d61041d\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " Feb 19 03:43:51.504743 master-0 kubenswrapper[33867]: I0219 03:43:51.504716 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-nb\") pod \"9ce13545-41e9-40c6-9719-6aff7d61041d\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " Feb 19 03:43:51.504859 master-0 kubenswrapper[33867]: I0219 03:43:51.504835 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-swift-storage-0\") pod \"9ce13545-41e9-40c6-9719-6aff7d61041d\" (UID: \"9ce13545-41e9-40c6-9719-6aff7d61041d\") " Feb 19 03:43:51.508491 master-0 kubenswrapper[33867]: I0219 03:43:51.508425 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce13545-41e9-40c6-9719-6aff7d61041d-kube-api-access-fl4fw" (OuterVolumeSpecName: "kube-api-access-fl4fw") pod "9ce13545-41e9-40c6-9719-6aff7d61041d" (UID: "9ce13545-41e9-40c6-9719-6aff7d61041d"). InnerVolumeSpecName "kube-api-access-fl4fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:51.579781 master-0 kubenswrapper[33867]: I0219 03:43:51.577959 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ce13545-41e9-40c6-9719-6aff7d61041d" (UID: "9ce13545-41e9-40c6-9719-6aff7d61041d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:43:51.580738 master-0 kubenswrapper[33867]: I0219 03:43:51.579847 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-config" (OuterVolumeSpecName: "config") pod "9ce13545-41e9-40c6-9719-6aff7d61041d" (UID: "9ce13545-41e9-40c6-9719-6aff7d61041d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:43:51.582551 master-0 kubenswrapper[33867]: I0219 03:43:51.581224 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ce13545-41e9-40c6-9719-6aff7d61041d" (UID: "9ce13545-41e9-40c6-9719-6aff7d61041d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:43:51.583330 master-0 kubenswrapper[33867]: I0219 03:43:51.583185 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ce13545-41e9-40c6-9719-6aff7d61041d" (UID: "9ce13545-41e9-40c6-9719-6aff7d61041d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:43:51.590150 master-0 kubenswrapper[33867]: I0219 03:43:51.590083 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ce13545-41e9-40c6-9719-6aff7d61041d" (UID: "9ce13545-41e9-40c6-9719-6aff7d61041d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:43:51.608247 master-0 kubenswrapper[33867]: I0219 03:43:51.608174 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:51.608247 master-0 kubenswrapper[33867]: I0219 03:43:51.608228 33867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:51.608247 master-0 kubenswrapper[33867]: I0219 03:43:51.608241 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl4fw\" (UniqueName: \"kubernetes.io/projected/9ce13545-41e9-40c6-9719-6aff7d61041d-kube-api-access-fl4fw\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:51.608247 master-0 kubenswrapper[33867]: I0219 03:43:51.608267 33867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:51.608560 master-0 kubenswrapper[33867]: I0219 03:43:51.608283 33867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:51.608560 master-0 kubenswrapper[33867]: I0219 03:43:51.608291 33867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ce13545-41e9-40c6-9719-6aff7d61041d-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:52.207112 master-0 kubenswrapper[33867]: I0219 03:43:52.207041 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" event={"ID":"9ce13545-41e9-40c6-9719-6aff7d61041d","Type":"ContainerDied","Data":"99305ba67e3599b68509f524af9e15bd1ad362d89e7606900cabb6805bf7b793"} Feb 19 03:43:52.207383 master-0 kubenswrapper[33867]: I0219 03:43:52.207042 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9c88576cf-mrwrb" Feb 19 03:43:52.207383 master-0 kubenswrapper[33867]: I0219 03:43:52.207158 33867 scope.go:117] "RemoveContainer" containerID="3af8fae0acb961ada9ace29d2211091de753e4a86e10f0ea515b0f365b204645" Feb 19 03:43:52.238543 master-0 kubenswrapper[33867]: I0219 03:43:52.238492 33867 scope.go:117] "RemoveContainer" containerID="9d52ff981af2e54870ce5b3af090f415adfd0234976aa108dfb36b235caa1567" Feb 19 03:43:52.258305 master-0 kubenswrapper[33867]: I0219 03:43:52.258231 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9c88576cf-mrwrb"] Feb 19 03:43:52.271321 master-0 kubenswrapper[33867]: I0219 03:43:52.271277 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9c88576cf-mrwrb"] Feb 19 03:43:52.634811 master-0 kubenswrapper[33867]: I0219 03:43:52.634759 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:52.741017 master-0 kubenswrapper[33867]: I0219 03:43:52.740920 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-865n9\" (UniqueName: \"kubernetes.io/projected/d72d9962-afbf-436d-9250-37b6ae7f252d-kube-api-access-865n9\") pod \"d72d9962-afbf-436d-9250-37b6ae7f252d\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " Feb 19 03:43:52.741301 master-0 kubenswrapper[33867]: I0219 03:43:52.741205 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-config-data\") pod \"d72d9962-afbf-436d-9250-37b6ae7f252d\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " Feb 19 03:43:52.741368 master-0 kubenswrapper[33867]: I0219 03:43:52.741333 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-scripts\") pod \"d72d9962-afbf-436d-9250-37b6ae7f252d\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " Feb 19 03:43:52.741517 master-0 kubenswrapper[33867]: I0219 03:43:52.741476 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-combined-ca-bundle\") pod \"d72d9962-afbf-436d-9250-37b6ae7f252d\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " Feb 19 03:43:52.745109 master-0 kubenswrapper[33867]: I0219 03:43:52.744828 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72d9962-afbf-436d-9250-37b6ae7f252d-kube-api-access-865n9" (OuterVolumeSpecName: "kube-api-access-865n9") pod "d72d9962-afbf-436d-9250-37b6ae7f252d" (UID: "d72d9962-afbf-436d-9250-37b6ae7f252d"). InnerVolumeSpecName "kube-api-access-865n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:52.751229 master-0 kubenswrapper[33867]: I0219 03:43:52.751184 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-scripts" (OuterVolumeSpecName: "scripts") pod "d72d9962-afbf-436d-9250-37b6ae7f252d" (UID: "d72d9962-afbf-436d-9250-37b6ae7f252d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:52.770978 master-0 kubenswrapper[33867]: E0219 03:43:52.770900 33867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-combined-ca-bundle podName:d72d9962-afbf-436d-9250-37b6ae7f252d nodeName:}" failed. No retries permitted until 2026-02-19 03:43:53.270858259 +0000 UTC m=+1238.567528880 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-combined-ca-bundle") pod "d72d9962-afbf-436d-9250-37b6ae7f252d" (UID: "d72d9962-afbf-436d-9250-37b6ae7f252d") : error deleting /var/lib/kubelet/pods/d72d9962-afbf-436d-9250-37b6ae7f252d/volume-subpaths: remove /var/lib/kubelet/pods/d72d9962-afbf-436d-9250-37b6ae7f252d/volume-subpaths: no such file or directory Feb 19 03:43:52.773213 master-0 kubenswrapper[33867]: I0219 03:43:52.773173 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-config-data" (OuterVolumeSpecName: "config-data") pod "d72d9962-afbf-436d-9250-37b6ae7f252d" (UID: "d72d9962-afbf-436d-9250-37b6ae7f252d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:52.846955 master-0 kubenswrapper[33867]: I0219 03:43:52.846882 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-865n9\" (UniqueName: \"kubernetes.io/projected/d72d9962-afbf-436d-9250-37b6ae7f252d-kube-api-access-865n9\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:52.846955 master-0 kubenswrapper[33867]: I0219 03:43:52.846935 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:52.846955 master-0 kubenswrapper[33867]: I0219 03:43:52.846951 33867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:52.973420 master-0 kubenswrapper[33867]: I0219 03:43:52.973313 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce13545-41e9-40c6-9719-6aff7d61041d" path="/var/lib/kubelet/pods/9ce13545-41e9-40c6-9719-6aff7d61041d/volumes" Feb 19 03:43:53.226949 master-0 kubenswrapper[33867]: I0219 03:43:53.226855 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bhrf8" event={"ID":"d72d9962-afbf-436d-9250-37b6ae7f252d","Type":"ContainerDied","Data":"c1d0d745278071699bbd9cf3835cc5cdf4ce3855757b45500baf71510d3bc1ab"} Feb 19 03:43:53.226949 master-0 kubenswrapper[33867]: I0219 03:43:53.226908 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1d0d745278071699bbd9cf3835cc5cdf4ce3855757b45500baf71510d3bc1ab" Feb 19 03:43:53.227375 master-0 kubenswrapper[33867]: I0219 03:43:53.226978 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bhrf8" Feb 19 03:43:53.357552 master-0 kubenswrapper[33867]: I0219 03:43:53.357469 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-combined-ca-bundle\") pod \"d72d9962-afbf-436d-9250-37b6ae7f252d\" (UID: \"d72d9962-afbf-436d-9250-37b6ae7f252d\") " Feb 19 03:43:53.360439 master-0 kubenswrapper[33867]: I0219 03:43:53.360387 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d72d9962-afbf-436d-9250-37b6ae7f252d" (UID: "d72d9962-afbf-436d-9250-37b6ae7f252d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:53.441289 master-0 kubenswrapper[33867]: I0219 03:43:53.441194 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:43:53.441552 master-0 kubenswrapper[33867]: I0219 03:43:53.441509 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6319ca32-f7b0-458a-8fe3-137c7aa4254a" containerName="nova-scheduler-scheduler" containerID="cri-o://76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085" gracePeriod=30 Feb 19 03:43:53.458905 master-0 kubenswrapper[33867]: I0219 03:43:53.458822 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:53.459170 master-0 kubenswrapper[33867]: I0219 03:43:53.459140 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc6852d2-c313-4b94-a81f-45d2a3f5921d" containerName="nova-api-log" containerID="cri-o://2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a" gracePeriod=30 Feb 19 03:43:53.459239 master-0 kubenswrapper[33867]: I0219 03:43:53.459188 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc6852d2-c313-4b94-a81f-45d2a3f5921d" containerName="nova-api-api" containerID="cri-o://d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3" gracePeriod=30 Feb 19 03:43:53.461186 master-0 kubenswrapper[33867]: I0219 03:43:53.460907 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72d9962-afbf-436d-9250-37b6ae7f252d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:53.473827 master-0 kubenswrapper[33867]: I0219 03:43:53.473768 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:53.474088 master-0 kubenswrapper[33867]: I0219 03:43:53.474034 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-log" containerID="cri-o://93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0" gracePeriod=30 Feb 19 03:43:53.474163 master-0 kubenswrapper[33867]: I0219 03:43:53.474075 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-metadata" containerID="cri-o://2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a" gracePeriod=30 Feb 19 03:43:54.180192 master-0 kubenswrapper[33867]: I0219 03:43:54.180134 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:54.276070 master-0 kubenswrapper[33867]: I0219 03:43:54.267020 33867 generic.go:334] "Generic (PLEG): container finished" podID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerID="93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0" exitCode=143 Feb 19 03:43:54.276070 master-0 kubenswrapper[33867]: I0219 03:43:54.267112 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32b71a14-a345-4919-8c5a-c5bf41644a29","Type":"ContainerDied","Data":"93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0"} Feb 19 03:43:54.276070 master-0 kubenswrapper[33867]: I0219 03:43:54.271060 33867 generic.go:334] "Generic (PLEG): container finished" podID="fc6852d2-c313-4b94-a81f-45d2a3f5921d" containerID="d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3" exitCode=0 Feb 19 03:43:54.276070 master-0 kubenswrapper[33867]: I0219 03:43:54.271117 33867 generic.go:334] "Generic (PLEG): container finished" podID="fc6852d2-c313-4b94-a81f-45d2a3f5921d" containerID="2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a" exitCode=143 Feb 19 03:43:54.276070 master-0 kubenswrapper[33867]: I0219 03:43:54.271142 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc6852d2-c313-4b94-a81f-45d2a3f5921d","Type":"ContainerDied","Data":"d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3"} Feb 19 03:43:54.276070 master-0 kubenswrapper[33867]: I0219 03:43:54.271210 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc6852d2-c313-4b94-a81f-45d2a3f5921d","Type":"ContainerDied","Data":"2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a"} Feb 19 03:43:54.276070 master-0 kubenswrapper[33867]: I0219 03:43:54.271222 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:54.276070 master-0 kubenswrapper[33867]: I0219 03:43:54.271277 33867 scope.go:117] "RemoveContainer" containerID="d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3" Feb 19 03:43:54.276070 master-0 kubenswrapper[33867]: I0219 03:43:54.271227 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc6852d2-c313-4b94-a81f-45d2a3f5921d","Type":"ContainerDied","Data":"3eb4a90b33991ee6f44c3721c7c29ecc6aa41ff7df9c25d8e356b3dc28b7d6d6"} Feb 19 03:43:54.285185 master-0 kubenswrapper[33867]: I0219 03:43:54.284295 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-combined-ca-bundle\") pod \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " Feb 19 03:43:54.285185 master-0 kubenswrapper[33867]: I0219 03:43:54.284441 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc6852d2-c313-4b94-a81f-45d2a3f5921d-logs\") pod \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " Feb 19 03:43:54.285185 master-0 kubenswrapper[33867]: I0219 03:43:54.284511 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-public-tls-certs\") pod \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " Feb 19 03:43:54.285185 master-0 kubenswrapper[33867]: I0219 03:43:54.284566 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-internal-tls-certs\") pod \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " Feb 19 03:43:54.285185 master-0 kubenswrapper[33867]: I0219 03:43:54.284601 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfztd\" (UniqueName: \"kubernetes.io/projected/fc6852d2-c313-4b94-a81f-45d2a3f5921d-kube-api-access-rfztd\") pod \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " Feb 19 03:43:54.285185 master-0 kubenswrapper[33867]: I0219 03:43:54.284723 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-config-data\") pod \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\" (UID: \"fc6852d2-c313-4b94-a81f-45d2a3f5921d\") " Feb 19 03:43:54.287394 master-0 kubenswrapper[33867]: I0219 03:43:54.285714 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc6852d2-c313-4b94-a81f-45d2a3f5921d-logs" (OuterVolumeSpecName: "logs") pod "fc6852d2-c313-4b94-a81f-45d2a3f5921d" (UID: "fc6852d2-c313-4b94-a81f-45d2a3f5921d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:43:54.288589 master-0 kubenswrapper[33867]: I0219 03:43:54.288543 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc6852d2-c313-4b94-a81f-45d2a3f5921d-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:54.289637 master-0 kubenswrapper[33867]: I0219 03:43:54.289570 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc6852d2-c313-4b94-a81f-45d2a3f5921d-kube-api-access-rfztd" (OuterVolumeSpecName: "kube-api-access-rfztd") pod "fc6852d2-c313-4b94-a81f-45d2a3f5921d" (UID: "fc6852d2-c313-4b94-a81f-45d2a3f5921d"). InnerVolumeSpecName "kube-api-access-rfztd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:54.315190 master-0 kubenswrapper[33867]: I0219 03:43:54.315141 33867 scope.go:117] "RemoveContainer" containerID="2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a" Feb 19 03:43:54.315894 master-0 kubenswrapper[33867]: I0219 03:43:54.315787 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc6852d2-c313-4b94-a81f-45d2a3f5921d" (UID: "fc6852d2-c313-4b94-a81f-45d2a3f5921d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:54.323932 master-0 kubenswrapper[33867]: I0219 03:43:54.323883 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-config-data" (OuterVolumeSpecName: "config-data") pod "fc6852d2-c313-4b94-a81f-45d2a3f5921d" (UID: "fc6852d2-c313-4b94-a81f-45d2a3f5921d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:54.349699 master-0 kubenswrapper[33867]: I0219 03:43:54.349369 33867 scope.go:117] "RemoveContainer" containerID="d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3" Feb 19 03:43:54.349897 master-0 kubenswrapper[33867]: E0219 03:43:54.349847 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3\": container with ID starting with d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3 not found: ID does not exist" containerID="d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3" Feb 19 03:43:54.349976 master-0 kubenswrapper[33867]: I0219 03:43:54.349904 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3"} err="failed to get container status \"d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3\": rpc error: code = NotFound desc = could not find container \"d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3\": container with ID starting with d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3 not found: ID does not exist" Feb 19 03:43:54.349976 master-0 kubenswrapper[33867]: I0219 03:43:54.349940 33867 scope.go:117] "RemoveContainer" containerID="2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a" Feb 19 03:43:54.350415 master-0 kubenswrapper[33867]: E0219 03:43:54.350380 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a\": container with ID starting with 2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a not found: ID does not exist" containerID="2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a" Feb 19 03:43:54.350529 master-0 kubenswrapper[33867]: I0219 03:43:54.350411 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a"} err="failed to get container status \"2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a\": rpc error: code = NotFound desc = could not find container \"2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a\": container with ID starting with 2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a not found: ID does not exist" Feb 19 03:43:54.350529 master-0 kubenswrapper[33867]: I0219 03:43:54.350437 33867 scope.go:117] "RemoveContainer" containerID="d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3" Feb 19 03:43:54.350877 master-0 kubenswrapper[33867]: I0219 03:43:54.350835 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3"} err="failed to get container status \"d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3\": rpc error: code = NotFound desc = could not find container \"d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3\": container with ID starting with d8b5894cbc3057fa2ccca6e5f10b038a92db75a366e7136657df31f1a47d01d3 not found: ID does not exist" Feb 19 03:43:54.350877 master-0 kubenswrapper[33867]: I0219 03:43:54.350862 33867 scope.go:117] "RemoveContainer" 
containerID="2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a" Feb 19 03:43:54.351102 master-0 kubenswrapper[33867]: I0219 03:43:54.351073 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a"} err="failed to get container status \"2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a\": rpc error: code = NotFound desc = could not find container \"2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a\": container with ID starting with 2a2d5459853ceeb89da8dadcea412cfe9cbebfee60c1700ee5d4594cc47cd40a not found: ID does not exist" Feb 19 03:43:54.373618 master-0 kubenswrapper[33867]: I0219 03:43:54.373553 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fc6852d2-c313-4b94-a81f-45d2a3f5921d" (UID: "fc6852d2-c313-4b94-a81f-45d2a3f5921d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:54.384166 master-0 kubenswrapper[33867]: I0219 03:43:54.384010 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fc6852d2-c313-4b94-a81f-45d2a3f5921d" (UID: "fc6852d2-c313-4b94-a81f-45d2a3f5921d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:54.391081 master-0 kubenswrapper[33867]: I0219 03:43:54.391003 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:54.391081 master-0 kubenswrapper[33867]: I0219 03:43:54.391070 33867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:54.391081 master-0 kubenswrapper[33867]: I0219 03:43:54.391081 33867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:54.391369 master-0 kubenswrapper[33867]: I0219 03:43:54.391097 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfztd\" (UniqueName: \"kubernetes.io/projected/fc6852d2-c313-4b94-a81f-45d2a3f5921d-kube-api-access-rfztd\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:54.391369 master-0 kubenswrapper[33867]: I0219 03:43:54.391113 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc6852d2-c313-4b94-a81f-45d2a3f5921d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:54.684090 master-0 kubenswrapper[33867]: I0219 03:43:54.684013 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:54.702790 master-0 kubenswrapper[33867]: I0219 03:43:54.702718 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:54.737012 master-0 kubenswrapper[33867]: I0219 03:43:54.736932 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:54.737679 
master-0 kubenswrapper[33867]: E0219 03:43:54.737645 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce13545-41e9-40c6-9719-6aff7d61041d" containerName="init" Feb 19 03:43:54.737679 master-0 kubenswrapper[33867]: I0219 03:43:54.737671 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce13545-41e9-40c6-9719-6aff7d61041d" containerName="init" Feb 19 03:43:54.737852 master-0 kubenswrapper[33867]: E0219 03:43:54.737689 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2540117-66a4-4bde-80ce-e1c15c51b076" containerName="nova-manage" Feb 19 03:43:54.737852 master-0 kubenswrapper[33867]: I0219 03:43:54.737697 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2540117-66a4-4bde-80ce-e1c15c51b076" containerName="nova-manage" Feb 19 03:43:54.737852 master-0 kubenswrapper[33867]: E0219 03:43:54.737733 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72d9962-afbf-436d-9250-37b6ae7f252d" containerName="nova-manage" Feb 19 03:43:54.737852 master-0 kubenswrapper[33867]: I0219 03:43:54.737740 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72d9962-afbf-436d-9250-37b6ae7f252d" containerName="nova-manage" Feb 19 03:43:54.737852 master-0 kubenswrapper[33867]: E0219 03:43:54.737753 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce13545-41e9-40c6-9719-6aff7d61041d" containerName="dnsmasq-dns" Feb 19 03:43:54.737852 master-0 kubenswrapper[33867]: I0219 03:43:54.737760 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce13545-41e9-40c6-9719-6aff7d61041d" containerName="dnsmasq-dns" Feb 19 03:43:54.737852 master-0 kubenswrapper[33867]: E0219 03:43:54.737783 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc6852d2-c313-4b94-a81f-45d2a3f5921d" containerName="nova-api-log" Feb 19 03:43:54.737852 master-0 kubenswrapper[33867]: I0219 03:43:54.737788 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc6852d2-c313-4b94-a81f-45d2a3f5921d" containerName="nova-api-log" Feb 19 03:43:54.737852 master-0 kubenswrapper[33867]: E0219 03:43:54.737811 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc6852d2-c313-4b94-a81f-45d2a3f5921d" containerName="nova-api-api" Feb 19 03:43:54.737852 master-0 kubenswrapper[33867]: I0219 03:43:54.737816 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc6852d2-c313-4b94-a81f-45d2a3f5921d" containerName="nova-api-api" Feb 19 03:43:54.738544 master-0 kubenswrapper[33867]: I0219 03:43:54.738085 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce13545-41e9-40c6-9719-6aff7d61041d" containerName="dnsmasq-dns" Feb 19 03:43:54.738544 master-0 kubenswrapper[33867]: I0219 03:43:54.738114 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc6852d2-c313-4b94-a81f-45d2a3f5921d" containerName="nova-api-api" Feb 19 03:43:54.738544 master-0 kubenswrapper[33867]: I0219 03:43:54.738159 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc6852d2-c313-4b94-a81f-45d2a3f5921d" containerName="nova-api-log" Feb 19 03:43:54.738544 master-0 kubenswrapper[33867]: I0219 03:43:54.738183 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72d9962-afbf-436d-9250-37b6ae7f252d" containerName="nova-manage" Feb 19 03:43:54.738544 master-0 kubenswrapper[33867]: I0219 03:43:54.738199 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2540117-66a4-4bde-80ce-e1c15c51b076" containerName="nova-manage" Feb 19 
03:43:54.739802 master-0 kubenswrapper[33867]: I0219 03:43:54.739767 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:54.744858 master-0 kubenswrapper[33867]: I0219 03:43:54.744803 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 03:43:54.745142 master-0 kubenswrapper[33867]: I0219 03:43:54.744957 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 03:43:54.746421 master-0 kubenswrapper[33867]: I0219 03:43:54.745235 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 03:43:54.754285 master-0 kubenswrapper[33867]: I0219 03:43:54.754198 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:54.813449 master-0 kubenswrapper[33867]: I0219 03:43:54.813316 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5c9b\" (UniqueName: \"kubernetes.io/projected/40213efd-1773-4c03-a61c-869bd88ccd6f-kube-api-access-n5c9b\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.813449 master-0 kubenswrapper[33867]: I0219 03:43:54.813440 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.813723 master-0 kubenswrapper[33867]: I0219 03:43:54.813469 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-config-data\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.813723 master-0 kubenswrapper[33867]: I0219 03:43:54.813534 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-public-tls-certs\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.813723 master-0 kubenswrapper[33867]: I0219 03:43:54.813560 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40213efd-1773-4c03-a61c-869bd88ccd6f-logs\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.813723 master-0 kubenswrapper[33867]: I0219 03:43:54.813687 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.916280 master-0 kubenswrapper[33867]: I0219 03:43:54.916194 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " 
pod="openstack/nova-api-0" Feb 19 03:43:54.916280 master-0 kubenswrapper[33867]: I0219 03:43:54.916270 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-config-data\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.916607 master-0 kubenswrapper[33867]: I0219 03:43:54.916457 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-public-tls-certs\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.916607 master-0 kubenswrapper[33867]: I0219 03:43:54.916491 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40213efd-1773-4c03-a61c-869bd88ccd6f-logs\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.916607 master-0 kubenswrapper[33867]: I0219 03:43:54.916563 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.916714 master-0 kubenswrapper[33867]: I0219 03:43:54.916692 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5c9b\" (UniqueName: \"kubernetes.io/projected/40213efd-1773-4c03-a61c-869bd88ccd6f-kube-api-access-n5c9b\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.917521 master-0 kubenswrapper[33867]: I0219 03:43:54.917478 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40213efd-1773-4c03-a61c-869bd88ccd6f-logs\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.920325 master-0 kubenswrapper[33867]: I0219 03:43:54.919985 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-config-data\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.920408 master-0 kubenswrapper[33867]: I0219 03:43:54.920335 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.921896 master-0 kubenswrapper[33867]: I0219 03:43:54.921656 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-public-tls-certs\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.931281 master-0 kubenswrapper[33867]: I0219 03:43:54.923282 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40213efd-1773-4c03-a61c-869bd88ccd6f-internal-tls-certs\") pod \"nova-api-0\" 
(UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.936278 master-0 kubenswrapper[33867]: I0219 03:43:54.936112 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5c9b\" (UniqueName: \"kubernetes.io/projected/40213efd-1773-4c03-a61c-869bd88ccd6f-kube-api-access-n5c9b\") pod \"nova-api-0\" (UID: \"40213efd-1773-4c03-a61c-869bd88ccd6f\") " pod="openstack/nova-api-0" Feb 19 03:43:54.998542 master-0 kubenswrapper[33867]: I0219 03:43:54.998486 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc6852d2-c313-4b94-a81f-45d2a3f5921d" path="/var/lib/kubelet/pods/fc6852d2-c313-4b94-a81f-45d2a3f5921d/volumes" Feb 19 03:43:55.088628 master-0 kubenswrapper[33867]: E0219 03:43:55.088545 33867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 03:43:55.091316 master-0 kubenswrapper[33867]: E0219 03:43:55.091286 33867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 03:43:55.093152 master-0 kubenswrapper[33867]: E0219 03:43:55.093083 33867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 03:43:55.093229 master-0 kubenswrapper[33867]: E0219 03:43:55.093174 33867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6319ca32-f7b0-458a-8fe3-137c7aa4254a" containerName="nova-scheduler-scheduler" Feb 19 03:43:55.101795 master-0 kubenswrapper[33867]: I0219 03:43:55.101752 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 03:43:55.584667 master-0 kubenswrapper[33867]: I0219 03:43:55.584575 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 03:43:55.593593 master-0 kubenswrapper[33867]: W0219 03:43:55.593524 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40213efd_1773_4c03_a61c_869bd88ccd6f.slice/crio-8792e8f62fdb260d37f178809ab57c499a0cf07e3ad15d59ebaf7b65635d6554 WatchSource:0}: Error finding container 8792e8f62fdb260d37f178809ab57c499a0cf07e3ad15d59ebaf7b65635d6554: Status 404 returned error can't find the container with id 8792e8f62fdb260d37f178809ab57c499a0cf07e3ad15d59ebaf7b65635d6554 Feb 19 03:43:56.306192 master-0 kubenswrapper[33867]: I0219 03:43:56.306114 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"40213efd-1773-4c03-a61c-869bd88ccd6f","Type":"ContainerStarted","Data":"8ea4fd9c5171f68dcfc49dfa243ae7695b98d7ca3dfe4a29e074b8b912a42858"} Feb 19 03:43:56.306192 master-0 kubenswrapper[33867]: I0219 03:43:56.306186 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"40213efd-1773-4c03-a61c-869bd88ccd6f","Type":"ContainerStarted","Data":"6c95ca25c82f497a734a4319157dae5c0801323203ba5a3e51decfa645d8fe06"} Feb 19 03:43:56.306192 master-0 kubenswrapper[33867]: I0219 03:43:56.306197 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"40213efd-1773-4c03-a61c-869bd88ccd6f","Type":"ContainerStarted","Data":"8792e8f62fdb260d37f178809ab57c499a0cf07e3ad15d59ebaf7b65635d6554"} Feb 19 03:43:56.347013 master-0 kubenswrapper[33867]: I0219 03:43:56.346818 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.346786305 podStartE2EDuration="2.346786305s" podCreationTimestamp="2026-02-19 03:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:56.328497767 +0000 UTC m=+1241.625168458" watchObservedRunningTime="2026-02-19 03:43:56.346786305 +0000 UTC m=+1241.643456926" Feb 19 03:43:56.599450 master-0 kubenswrapper[33867]: I0219 03:43:56.599277 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.16:8775/\": read tcp 10.128.0.2:56592->10.128.1.16:8775: read: connection reset by peer" Feb 19 03:43:56.599450 master-0 kubenswrapper[33867]: I0219 03:43:56.599369 33867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.16:8775/\": read tcp 10.128.0.2:56590->10.128.1.16:8775: read: connection reset by peer" Feb 19 03:43:57.153848 master-0 kubenswrapper[33867]: I0219 03:43:57.153782 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:57.175567 master-0 kubenswrapper[33867]: I0219 03:43:57.175465 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29njz\" (UniqueName: \"kubernetes.io/projected/32b71a14-a345-4919-8c5a-c5bf41644a29-kube-api-access-29njz\") pod \"32b71a14-a345-4919-8c5a-c5bf41644a29\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " Feb 19 03:43:57.175567 master-0 kubenswrapper[33867]: I0219 03:43:57.175533 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-config-data\") pod \"32b71a14-a345-4919-8c5a-c5bf41644a29\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " Feb 19 03:43:57.175897 master-0 kubenswrapper[33867]: I0219 03:43:57.175762 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b71a14-a345-4919-8c5a-c5bf41644a29-logs\") pod \"32b71a14-a345-4919-8c5a-c5bf41644a29\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " Feb 19 03:43:57.175897 master-0 kubenswrapper[33867]: I0219 03:43:57.175848 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-combined-ca-bundle\") pod \"32b71a14-a345-4919-8c5a-c5bf41644a29\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " Feb 19 03:43:57.175990 master-0 kubenswrapper[33867]: I0219 03:43:57.175976 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-nova-metadata-tls-certs\") pod \"32b71a14-a345-4919-8c5a-c5bf41644a29\" (UID: \"32b71a14-a345-4919-8c5a-c5bf41644a29\") " Feb 19 03:43:57.176295 master-0 kubenswrapper[33867]: I0219 03:43:57.176249 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b71a14-a345-4919-8c5a-c5bf41644a29-logs" (OuterVolumeSpecName: "logs") pod "32b71a14-a345-4919-8c5a-c5bf41644a29" (UID: "32b71a14-a345-4919-8c5a-c5bf41644a29"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 03:43:57.176700 master-0 kubenswrapper[33867]: I0219 03:43:57.176633 33867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b71a14-a345-4919-8c5a-c5bf41644a29-logs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:57.193404 master-0 kubenswrapper[33867]: I0219 03:43:57.193010 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b71a14-a345-4919-8c5a-c5bf41644a29-kube-api-access-29njz" (OuterVolumeSpecName: "kube-api-access-29njz") pod "32b71a14-a345-4919-8c5a-c5bf41644a29" (UID: "32b71a14-a345-4919-8c5a-c5bf41644a29"). InnerVolumeSpecName "kube-api-access-29njz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:57.229326 master-0 kubenswrapper[33867]: I0219 03:43:57.228412 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-config-data" (OuterVolumeSpecName: "config-data") pod "32b71a14-a345-4919-8c5a-c5bf41644a29" (UID: "32b71a14-a345-4919-8c5a-c5bf41644a29"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:57.259824 master-0 kubenswrapper[33867]: I0219 03:43:57.259731 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32b71a14-a345-4919-8c5a-c5bf41644a29" (UID: "32b71a14-a345-4919-8c5a-c5bf41644a29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:57.283408 master-0 kubenswrapper[33867]: I0219 03:43:57.279388 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29njz\" (UniqueName: \"kubernetes.io/projected/32b71a14-a345-4919-8c5a-c5bf41644a29-kube-api-access-29njz\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:57.283408 master-0 kubenswrapper[33867]: I0219 03:43:57.279434 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:57.283408 master-0 kubenswrapper[33867]: I0219 03:43:57.279447 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:57.313591 master-0 kubenswrapper[33867]: I0219 03:43:57.307111 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "32b71a14-a345-4919-8c5a-c5bf41644a29" (UID: "32b71a14-a345-4919-8c5a-c5bf41644a29"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:57.409419 master-0 kubenswrapper[33867]: I0219 03:43:57.405728 33867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/32b71a14-a345-4919-8c5a-c5bf41644a29-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:57.471420 master-0 kubenswrapper[33867]: I0219 03:43:57.468573 33867 generic.go:334] "Generic (PLEG): container finished" podID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerID="2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a" exitCode=0 Feb 19 03:43:57.471420 master-0 kubenswrapper[33867]: I0219 03:43:57.469895 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:57.471752 master-0 kubenswrapper[33867]: I0219 03:43:57.471494 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32b71a14-a345-4919-8c5a-c5bf41644a29","Type":"ContainerDied","Data":"2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a"} Feb 19 03:43:57.471752 master-0 kubenswrapper[33867]: I0219 03:43:57.471565 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32b71a14-a345-4919-8c5a-c5bf41644a29","Type":"ContainerDied","Data":"8b1afd5e4732eea297b60dc4bcb0608d2cbf9fedaca33db66eef7b71cedd4b97"} Feb 19 03:43:57.471752 master-0 kubenswrapper[33867]: I0219 03:43:57.471588 33867 scope.go:117] "RemoveContainer" containerID="2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a" Feb 19 03:43:57.549232 master-0 kubenswrapper[33867]: I0219 03:43:57.549171 33867 scope.go:117] "RemoveContainer" containerID="93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0" Feb 19 03:43:57.579433 master-0 kubenswrapper[33867]: I0219 03:43:57.579367 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:57.585709 master-0 kubenswrapper[33867]: I0219 03:43:57.585655 33867 scope.go:117] "RemoveContainer" containerID="2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a" Feb 19 03:43:57.586236 master-0 kubenswrapper[33867]: E0219 03:43:57.586198 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a\": container with ID starting with 2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a not found: ID does not exist" containerID="2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a" Feb 19 03:43:57.586378 master-0 kubenswrapper[33867]: I0219 03:43:57.586240 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a"} err="failed to get container status \"2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a\": rpc error: code = NotFound desc = could not find container \"2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a\": container with ID starting with 2efa8a04b6dc5c311efe835421e2ebf67b62c255ba45a978850b53a67d6f161a not found: ID does not exist" Feb 19 03:43:57.586378 master-0 kubenswrapper[33867]: I0219 03:43:57.586283 33867 scope.go:117] "RemoveContainer" containerID="93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0" Feb 19 03:43:57.586698 master-0 kubenswrapper[33867]: E0219 03:43:57.586660 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0\": container with ID starting with 93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0 not found: ID does not exist" containerID="93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0" Feb 19 03:43:57.586744 master-0 kubenswrapper[33867]: I0219 03:43:57.586690 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0"} err="failed to get container status \"93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0\": rpc error: code = 
NotFound desc = could not find container \"93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0\": container with ID starting with 93dce419128c4690c5034acca67c2ed6dfbe5f7220688d48c45ef4cc87beecb0 not found: ID does not exist" Feb 19 03:43:57.594705 master-0 kubenswrapper[33867]: I0219 03:43:57.594336 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:57.617372 master-0 kubenswrapper[33867]: I0219 03:43:57.617294 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:57.618010 master-0 kubenswrapper[33867]: E0219 03:43:57.617877 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-log" Feb 19 03:43:57.618010 master-0 kubenswrapper[33867]: I0219 03:43:57.617899 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-log" Feb 19 03:43:57.618010 master-0 kubenswrapper[33867]: E0219 03:43:57.617982 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-metadata" Feb 19 03:43:57.618010 master-0 kubenswrapper[33867]: I0219 03:43:57.617993 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-metadata" Feb 19 03:43:57.618959 master-0 kubenswrapper[33867]: I0219 03:43:57.618927 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-metadata" Feb 19 03:43:57.619235 master-0 kubenswrapper[33867]: I0219 03:43:57.618965 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" containerName="nova-metadata-log" Feb 19 03:43:57.620735 master-0 kubenswrapper[33867]: I0219 03:43:57.620698 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:57.632373 master-0 kubenswrapper[33867]: I0219 03:43:57.631820 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 03:43:57.632373 master-0 kubenswrapper[33867]: I0219 03:43:57.632093 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 03:43:57.665511 master-0 kubenswrapper[33867]: I0219 03:43:57.665337 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:57.716382 master-0 kubenswrapper[33867]: I0219 03:43:57.716301 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd8c008b-b321-46e8-9c93-6793dd4e084c-logs\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.716382 master-0 kubenswrapper[33867]: I0219 03:43:57.716376 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd8c008b-b321-46e8-9c93-6793dd4e084c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.716687 master-0 kubenswrapper[33867]: I0219 03:43:57.716419 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8c008b-b321-46e8-9c93-6793dd4e084c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.716687 master-0 kubenswrapper[33867]: I0219 03:43:57.716542 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8c008b-b321-46e8-9c93-6793dd4e084c-config-data\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.716940 master-0 kubenswrapper[33867]: I0219 03:43:57.716904 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5msdv\" (UniqueName: \"kubernetes.io/projected/fd8c008b-b321-46e8-9c93-6793dd4e084c-kube-api-access-5msdv\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.819420 master-0 kubenswrapper[33867]: I0219 03:43:57.819334 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5msdv\" (UniqueName: \"kubernetes.io/projected/fd8c008b-b321-46e8-9c93-6793dd4e084c-kube-api-access-5msdv\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.819710 master-0 kubenswrapper[33867]: I0219 03:43:57.819587 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd8c008b-b321-46e8-9c93-6793dd4e084c-logs\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.819710 master-0 kubenswrapper[33867]: I0219 03:43:57.819639 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fd8c008b-b321-46e8-9c93-6793dd4e084c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.819710 master-0 kubenswrapper[33867]: I0219 03:43:57.819683 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8c008b-b321-46e8-9c93-6793dd4e084c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.820075 master-0 kubenswrapper[33867]: I0219 03:43:57.819971 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8c008b-b321-46e8-9c93-6793dd4e084c-config-data\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.820431 master-0 kubenswrapper[33867]: I0219 03:43:57.820389 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd8c008b-b321-46e8-9c93-6793dd4e084c-logs\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.824883 master-0 kubenswrapper[33867]: I0219 03:43:57.824191 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8c008b-b321-46e8-9c93-6793dd4e084c-config-data\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.824883 master-0 kubenswrapper[33867]: I0219 03:43:57.824283 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8c008b-b321-46e8-9c93-6793dd4e084c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.824883 master-0 kubenswrapper[33867]: I0219 03:43:57.824763 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd8c008b-b321-46e8-9c93-6793dd4e084c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.837596 master-0 kubenswrapper[33867]: I0219 03:43:57.837537 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5msdv\" (UniqueName: \"kubernetes.io/projected/fd8c008b-b321-46e8-9c93-6793dd4e084c-kube-api-access-5msdv\") pod \"nova-metadata-0\" (UID: \"fd8c008b-b321-46e8-9c93-6793dd4e084c\") " pod="openstack/nova-metadata-0" Feb 19 03:43:57.961888 master-0 kubenswrapper[33867]: I0219 03:43:57.961750 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 03:43:58.499081 master-0 kubenswrapper[33867]: I0219 03:43:58.499011 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 03:43:58.974844 master-0 kubenswrapper[33867]: I0219 03:43:58.974761 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b71a14-a345-4919-8c5a-c5bf41644a29" path="/var/lib/kubelet/pods/32b71a14-a345-4919-8c5a-c5bf41644a29/volumes" Feb 19 03:43:59.299377 master-0 kubenswrapper[33867]: I0219 03:43:59.299318 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 03:43:59.386292 master-0 kubenswrapper[33867]: I0219 03:43:59.384029 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcc25\" (UniqueName: \"kubernetes.io/projected/6319ca32-f7b0-458a-8fe3-137c7aa4254a-kube-api-access-jcc25\") pod \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " Feb 19 03:43:59.386292 master-0 kubenswrapper[33867]: I0219 03:43:59.384151 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-config-data\") pod \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " Feb 19 03:43:59.386292 master-0 kubenswrapper[33867]: I0219 03:43:59.384236 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-combined-ca-bundle\") pod \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\" (UID: \"6319ca32-f7b0-458a-8fe3-137c7aa4254a\") " Feb 19 03:43:59.400289 master-0 kubenswrapper[33867]: I0219 03:43:59.392567 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6319ca32-f7b0-458a-8fe3-137c7aa4254a-kube-api-access-jcc25" (OuterVolumeSpecName: "kube-api-access-jcc25") pod "6319ca32-f7b0-458a-8fe3-137c7aa4254a" (UID: "6319ca32-f7b0-458a-8fe3-137c7aa4254a"). InnerVolumeSpecName "kube-api-access-jcc25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:43:59.417551 master-0 kubenswrapper[33867]: I0219 03:43:59.417489 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6319ca32-f7b0-458a-8fe3-137c7aa4254a" (UID: "6319ca32-f7b0-458a-8fe3-137c7aa4254a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:59.420874 master-0 kubenswrapper[33867]: I0219 03:43:59.420817 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-config-data" (OuterVolumeSpecName: "config-data") pod "6319ca32-f7b0-458a-8fe3-137c7aa4254a" (UID: "6319ca32-f7b0-458a-8fe3-137c7aa4254a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:43:59.489698 master-0 kubenswrapper[33867]: I0219 03:43:59.489625 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:59.489698 master-0 kubenswrapper[33867]: I0219 03:43:59.489710 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcc25\" (UniqueName: \"kubernetes.io/projected/6319ca32-f7b0-458a-8fe3-137c7aa4254a-kube-api-access-jcc25\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:59.490003 master-0 kubenswrapper[33867]: I0219 03:43:59.489732 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6319ca32-f7b0-458a-8fe3-137c7aa4254a-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 03:43:59.515428 master-0 kubenswrapper[33867]: I0219 03:43:59.515370 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd8c008b-b321-46e8-9c93-6793dd4e084c","Type":"ContainerStarted","Data":"cb76e02ae5c495e7a988a86ffc85a25391087b808b36601ece6df8600dac0a55"} Feb 19 03:43:59.515428 master-0 kubenswrapper[33867]: I0219 03:43:59.515441 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd8c008b-b321-46e8-9c93-6793dd4e084c","Type":"ContainerStarted","Data":"a17fe6f76904bafb7261fa99dc289c83866ea70fc93a0e772509527977c5792b"} Feb 19 03:43:59.515708 master-0 kubenswrapper[33867]: I0219 03:43:59.515453 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd8c008b-b321-46e8-9c93-6793dd4e084c","Type":"ContainerStarted","Data":"5e64d158d0b6bc54f4a41eddce9f59e18d449be624e49178d1463e850694319b"} Feb 19 03:43:59.519312 master-0 kubenswrapper[33867]: I0219 03:43:59.519239 33867 generic.go:334] "Generic (PLEG): container finished" podID="6319ca32-f7b0-458a-8fe3-137c7aa4254a" containerID="76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085" exitCode=0 Feb 19 03:43:59.519312 master-0 kubenswrapper[33867]: I0219 03:43:59.519289 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 03:43:59.519458 master-0 kubenswrapper[33867]: I0219 03:43:59.519310 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6319ca32-f7b0-458a-8fe3-137c7aa4254a","Type":"ContainerDied","Data":"76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085"} Feb 19 03:43:59.519458 master-0 kubenswrapper[33867]: I0219 03:43:59.519390 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6319ca32-f7b0-458a-8fe3-137c7aa4254a","Type":"ContainerDied","Data":"a5f13fa64e53eb49f77ead02e52fa4811da0bdf008204b3a83595054258ffc25"} Feb 19 03:43:59.519458 master-0 kubenswrapper[33867]: I0219 03:43:59.519414 33867 scope.go:117] "RemoveContainer" containerID="76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085" Feb 19 03:43:59.544806 master-0 kubenswrapper[33867]: I0219 03:43:59.544700 33867 scope.go:117] "RemoveContainer" containerID="76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085" Feb 19 03:43:59.545879 master-0 kubenswrapper[33867]: E0219 03:43:59.545828 33867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085\": container with ID starting with 76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085 not found: ID does not exist" containerID="76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085" Feb 19 03:43:59.545969 master-0 kubenswrapper[33867]: I0219 03:43:59.545868 33867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085"} err="failed to get container status \"76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085\": rpc error: code = NotFound desc = could not find container \"76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085\": container with ID starting with 76ac45f3abec6c65a83e2c0e13a3158196ba8bd25ad024ba6d9e13cc2b64b085 not found: ID does not exist" Feb 19 03:43:59.554116 master-0 kubenswrapper[33867]: I0219 03:43:59.553990 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.553967308 podStartE2EDuration="2.553967308s" podCreationTimestamp="2026-02-19 03:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:43:59.540495696 +0000 UTC m=+1244.837166307" watchObservedRunningTime="2026-02-19 03:43:59.553967308 +0000 UTC m=+1244.850637919" Feb 19 03:43:59.590061 master-0 kubenswrapper[33867]: I0219 03:43:59.589992 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:43:59.607657 master-0 kubenswrapper[33867]: I0219 03:43:59.607590 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:43:59.630426 master-0 kubenswrapper[33867]: I0219 03:43:59.630336 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:43:59.631244 master-0 kubenswrapper[33867]: E0219 03:43:59.631063 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6319ca32-f7b0-458a-8fe3-137c7aa4254a" containerName="nova-scheduler-scheduler" Feb 19 03:43:59.631244 master-0 kubenswrapper[33867]: I0219 03:43:59.631088 33867 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6319ca32-f7b0-458a-8fe3-137c7aa4254a" containerName="nova-scheduler-scheduler" Feb 19 03:43:59.631524 master-0 kubenswrapper[33867]: I0219 03:43:59.631499 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6319ca32-f7b0-458a-8fe3-137c7aa4254a" containerName="nova-scheduler-scheduler" Feb 19 03:43:59.632465 master-0 kubenswrapper[33867]: I0219 03:43:59.632425 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 03:43:59.635232 master-0 kubenswrapper[33867]: I0219 03:43:59.635194 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 03:43:59.644507 master-0 kubenswrapper[33867]: I0219 03:43:59.644358 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:43:59.694278 master-0 kubenswrapper[33867]: I0219 03:43:59.694198 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd-config-data\") pod \"nova-scheduler-0\" (UID: \"c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:59.694582 master-0 kubenswrapper[33867]: I0219 03:43:59.694558 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:59.694794 master-0 kubenswrapper[33867]: I0219 03:43:59.694770 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fccnd\" (UniqueName: \"kubernetes.io/projected/c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd-kube-api-access-fccnd\") pod \"nova-scheduler-0\" (UID: \"c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:59.797039 master-0 kubenswrapper[33867]: I0219 03:43:59.796955 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fccnd\" (UniqueName: \"kubernetes.io/projected/c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd-kube-api-access-fccnd\") pod \"nova-scheduler-0\" (UID: \"c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:59.797374 master-0 kubenswrapper[33867]: I0219 03:43:59.797199 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:59.797374 master-0 kubenswrapper[33867]: I0219 03:43:59.797235 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd-config-data\") pod \"nova-scheduler-0\" (UID: \"c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:59.800859 master-0 kubenswrapper[33867]: I0219 03:43:59.800798 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd-config-data\") pod \"nova-scheduler-0\" (UID: 
\"c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:59.801097 master-0 kubenswrapper[33867]: I0219 03:43:59.801047 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:59.812614 master-0 kubenswrapper[33867]: I0219 03:43:59.812560 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fccnd\" (UniqueName: \"kubernetes.io/projected/c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd-kube-api-access-fccnd\") pod \"nova-scheduler-0\" (UID: \"c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd\") " pod="openstack/nova-scheduler-0" Feb 19 03:43:59.955683 master-0 kubenswrapper[33867]: I0219 03:43:59.955545 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 03:44:00.417402 master-0 kubenswrapper[33867]: I0219 03:44:00.417344 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 03:44:00.419541 master-0 kubenswrapper[33867]: W0219 03:44:00.419498 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c61cb6_e0c9_4f3a_96e4_b220c4998ddd.slice/crio-587769c893af301c5f21c8cedf1ea4aa393893611c00023e03e8a3f29c676879 WatchSource:0}: Error finding container 587769c893af301c5f21c8cedf1ea4aa393893611c00023e03e8a3f29c676879: Status 404 returned error can't find the container with id 587769c893af301c5f21c8cedf1ea4aa393893611c00023e03e8a3f29c676879 Feb 19 03:44:00.534649 master-0 kubenswrapper[33867]: I0219 03:44:00.534573 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd","Type":"ContainerStarted","Data":"587769c893af301c5f21c8cedf1ea4aa393893611c00023e03e8a3f29c676879"} Feb 19 03:44:00.982893 master-0 kubenswrapper[33867]: I0219 03:44:00.982751 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6319ca32-f7b0-458a-8fe3-137c7aa4254a" path="/var/lib/kubelet/pods/6319ca32-f7b0-458a-8fe3-137c7aa4254a/volumes" Feb 19 03:44:01.551920 master-0 kubenswrapper[33867]: I0219 03:44:01.551851 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd","Type":"ContainerStarted","Data":"e1dd059ca2902dfefc9870b47222ace7e5691f283ab1e3f96e9a7fd30a7efdf2"} Feb 19 03:44:01.591761 master-0 kubenswrapper[33867]: I0219 03:44:01.591621 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.59158683 podStartE2EDuration="2.59158683s" podCreationTimestamp="2026-02-19 03:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:44:01.573895679 +0000 UTC m=+1246.870566290" watchObservedRunningTime="2026-02-19 03:44:01.59158683 +0000 UTC m=+1246.888257451" Feb 19 03:44:02.972450 master-0 kubenswrapper[33867]: I0219 03:44:02.972296 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 03:44:02.972450 master-0 kubenswrapper[33867]: I0219 03:44:02.972354 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Feb 19 03:44:04.971505 master-0 kubenswrapper[33867]: I0219 03:44:04.971409 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 03:44:05.103409 master-0 kubenswrapper[33867]: I0219 03:44:05.103325 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 03:44:05.103409 master-0 kubenswrapper[33867]: I0219 03:44:05.103397 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 03:44:06.120642 master-0 kubenswrapper[33867]: I0219 03:44:06.120568 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="40213efd-1773-4c03-a61c-869bd88ccd6f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.24:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:44:06.121217 master-0 kubenswrapper[33867]: I0219 03:44:06.120583 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="40213efd-1773-4c03-a61c-869bd88ccd6f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.24:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:44:07.961979 master-0 kubenswrapper[33867]: I0219 03:44:07.961906 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 03:44:07.961979 master-0 kubenswrapper[33867]: I0219 03:44:07.961965 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 03:44:08.974547 master-0 kubenswrapper[33867]: I0219 03:44:08.974457 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fd8c008b-b321-46e8-9c93-6793dd4e084c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.25:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:44:08.975145 master-0 kubenswrapper[33867]: I0219 03:44:08.974472 33867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fd8c008b-b321-46e8-9c93-6793dd4e084c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.25:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 03:44:09.956071 master-0 kubenswrapper[33867]: I0219 03:44:09.956014 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 03:44:09.991793 master-0 kubenswrapper[33867]: I0219 03:44:09.991719 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 03:44:10.725993 master-0 kubenswrapper[33867]: I0219 03:44:10.725929 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 03:44:15.110572 master-0 kubenswrapper[33867]: I0219 03:44:15.110479 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 03:44:15.110572 master-0 kubenswrapper[33867]: I0219 03:44:15.110586 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 03:44:15.111636 master-0 kubenswrapper[33867]: I0219 03:44:15.111087 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" 
Feb 19 03:44:15.111636 master-0 kubenswrapper[33867]: I0219 03:44:15.111146 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 03:44:15.116807 master-0 kubenswrapper[33867]: I0219 03:44:15.116753 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 03:44:15.117568 master-0 kubenswrapper[33867]: I0219 03:44:15.117540 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 03:44:17.971430 master-0 kubenswrapper[33867]: I0219 03:44:17.971364 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 03:44:17.978735 master-0 kubenswrapper[33867]: I0219 03:44:17.978645 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 03:44:17.978995 master-0 kubenswrapper[33867]: I0219 03:44:17.978833 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 03:44:18.792868 master-0 kubenswrapper[33867]: I0219 03:44:18.792789 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 03:44:45.722206 master-0 kubenswrapper[33867]: I0219 03:44:45.722118 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-vvmrg"] Feb 19 03:44:45.722964 master-0 kubenswrapper[33867]: I0219 03:44:45.722497 33867 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" podUID="bb9123e1-da52-4f76-96e7-d5a2712ed958" containerName="sushy-emulator" containerID="cri-o://c9cc25a0d7ddccc531061c4d90bb3f93027fe259f1703cefb032858b876e74ff" gracePeriod=30 Feb 19 03:44:46.181786 master-0 kubenswrapper[33867]: I0219 03:44:46.181714 33867 generic.go:334] "Generic (PLEG): container finished" podID="bb9123e1-da52-4f76-96e7-d5a2712ed958" containerID="c9cc25a0d7ddccc531061c4d90bb3f93027fe259f1703cefb032858b876e74ff" exitCode=0 Feb 19 03:44:46.181786 master-0 kubenswrapper[33867]: I0219 03:44:46.181768 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" event={"ID":"bb9123e1-da52-4f76-96e7-d5a2712ed958","Type":"ContainerDied","Data":"c9cc25a0d7ddccc531061c4d90bb3f93027fe259f1703cefb032858b876e74ff"} Feb 19 03:44:46.451447 master-0 kubenswrapper[33867]: I0219 03:44:46.451386 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:44:46.586731 master-0 kubenswrapper[33867]: I0219 03:44:46.586384 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-vdnxc"] Feb 19 03:44:46.587285 master-0 kubenswrapper[33867]: E0219 03:44:46.587242 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9123e1-da52-4f76-96e7-d5a2712ed958" containerName="sushy-emulator" Feb 19 03:44:46.587285 master-0 kubenswrapper[33867]: I0219 03:44:46.587285 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9123e1-da52-4f76-96e7-d5a2712ed958" containerName="sushy-emulator" Feb 19 03:44:46.588184 master-0 kubenswrapper[33867]: I0219 03:44:46.587611 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9123e1-da52-4f76-96e7-d5a2712ed958" containerName="sushy-emulator" Feb 19 03:44:46.588627 master-0 kubenswrapper[33867]: I0219 03:44:46.588600 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:46.601229 master-0 kubenswrapper[33867]: I0219 03:44:46.601035 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-vdnxc"] Feb 19 03:44:46.604458 master-0 kubenswrapper[33867]: I0219 03:44:46.604375 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/bb9123e1-da52-4f76-96e7-d5a2712ed958-sushy-emulator-config\") pod \"bb9123e1-da52-4f76-96e7-d5a2712ed958\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " Feb 19 03:44:46.604669 master-0 kubenswrapper[33867]: I0219 03:44:46.604493 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk4f6\" (UniqueName: \"kubernetes.io/projected/bb9123e1-da52-4f76-96e7-d5a2712ed958-kube-api-access-pk4f6\") pod \"bb9123e1-da52-4f76-96e7-d5a2712ed958\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " Feb 19 03:44:46.605239 master-0 kubenswrapper[33867]: I0219 03:44:46.605167 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/bb9123e1-da52-4f76-96e7-d5a2712ed958-os-client-config\") pod \"bb9123e1-da52-4f76-96e7-d5a2712ed958\" (UID: \"bb9123e1-da52-4f76-96e7-d5a2712ed958\") " Feb 19 03:44:46.610940 master-0 kubenswrapper[33867]: I0219 03:44:46.610898 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9123e1-da52-4f76-96e7-d5a2712ed958-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "bb9123e1-da52-4f76-96e7-d5a2712ed958" (UID: "bb9123e1-da52-4f76-96e7-d5a2712ed958"). InnerVolumeSpecName "os-client-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:44:46.612493 master-0 kubenswrapper[33867]: I0219 03:44:46.611431 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9123e1-da52-4f76-96e7-d5a2712ed958-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "bb9123e1-da52-4f76-96e7-d5a2712ed958" (UID: "bb9123e1-da52-4f76-96e7-d5a2712ed958"). InnerVolumeSpecName "sushy-emulator-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:44:46.614535 master-0 kubenswrapper[33867]: I0219 03:44:46.614488 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9123e1-da52-4f76-96e7-d5a2712ed958-kube-api-access-pk4f6" (OuterVolumeSpecName: "kube-api-access-pk4f6") pod "bb9123e1-da52-4f76-96e7-d5a2712ed958" (UID: "bb9123e1-da52-4f76-96e7-d5a2712ed958"). InnerVolumeSpecName "kube-api-access-pk4f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:44:46.709648 master-0 kubenswrapper[33867]: I0219 03:44:46.709563 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d7de7805-d8e7-4d7e-a17b-36dca355839b-os-client-config\") pod \"sushy-emulator-64488c485f-vdnxc\" (UID: \"d7de7805-d8e7-4d7e-a17b-36dca355839b\") " pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:46.709901 master-0 kubenswrapper[33867]: I0219 03:44:46.709746 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d7de7805-d8e7-4d7e-a17b-36dca355839b-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-vdnxc\" (UID: \"d7de7805-d8e7-4d7e-a17b-36dca355839b\") " pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:46.710083 master-0 kubenswrapper[33867]: I0219 03:44:46.709962 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l57p\" (UniqueName: \"kubernetes.io/projected/d7de7805-d8e7-4d7e-a17b-36dca355839b-kube-api-access-6l57p\") pod \"sushy-emulator-64488c485f-vdnxc\" (UID: \"d7de7805-d8e7-4d7e-a17b-36dca355839b\") " pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:46.710370 master-0 kubenswrapper[33867]: I0219 03:44:46.710306 33867 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/bb9123e1-da52-4f76-96e7-d5a2712ed958-os-client-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:44:46.710370 master-0 kubenswrapper[33867]: I0219 03:44:46.710342 33867 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/bb9123e1-da52-4f76-96e7-d5a2712ed958-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Feb 19 03:44:46.710370 master-0 kubenswrapper[33867]: I0219 03:44:46.710365 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk4f6\" (UniqueName: \"kubernetes.io/projected/bb9123e1-da52-4f76-96e7-d5a2712ed958-kube-api-access-pk4f6\") on node \"master-0\" DevicePath \"\"" Feb 19 03:44:46.812489 master-0 kubenswrapper[33867]: I0219 03:44:46.812415 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d7de7805-d8e7-4d7e-a17b-36dca355839b-os-client-config\") pod \"sushy-emulator-64488c485f-vdnxc\" (UID: \"d7de7805-d8e7-4d7e-a17b-36dca355839b\") " pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:46.813169 master-0 kubenswrapper[33867]: I0219 03:44:46.812511 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d7de7805-d8e7-4d7e-a17b-36dca355839b-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-vdnxc\" (UID: \"d7de7805-d8e7-4d7e-a17b-36dca355839b\") " 
pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:46.813169 master-0 kubenswrapper[33867]: I0219 03:44:46.812614 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l57p\" (UniqueName: \"kubernetes.io/projected/d7de7805-d8e7-4d7e-a17b-36dca355839b-kube-api-access-6l57p\") pod \"sushy-emulator-64488c485f-vdnxc\" (UID: \"d7de7805-d8e7-4d7e-a17b-36dca355839b\") " pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:46.813977 master-0 kubenswrapper[33867]: I0219 03:44:46.813947 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d7de7805-d8e7-4d7e-a17b-36dca355839b-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-vdnxc\" (UID: \"d7de7805-d8e7-4d7e-a17b-36dca355839b\") " pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:46.815819 master-0 kubenswrapper[33867]: I0219 03:44:46.815778 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d7de7805-d8e7-4d7e-a17b-36dca355839b-os-client-config\") pod \"sushy-emulator-64488c485f-vdnxc\" (UID: \"d7de7805-d8e7-4d7e-a17b-36dca355839b\") " pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:46.835597 master-0 kubenswrapper[33867]: I0219 03:44:46.835536 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l57p\" (UniqueName: \"kubernetes.io/projected/d7de7805-d8e7-4d7e-a17b-36dca355839b-kube-api-access-6l57p\") pod \"sushy-emulator-64488c485f-vdnxc\" (UID: \"d7de7805-d8e7-4d7e-a17b-36dca355839b\") " pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:47.018595 master-0 kubenswrapper[33867]: I0219 03:44:47.018492 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:47.195997 master-0 kubenswrapper[33867]: I0219 03:44:47.195924 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" event={"ID":"bb9123e1-da52-4f76-96e7-d5a2712ed958","Type":"ContainerDied","Data":"ec75c7e08b062744e342832e73a5450840f792f54f57db5874eb0e8882851b28"} Feb 19 03:44:47.196225 master-0 kubenswrapper[33867]: I0219 03:44:47.196013 33867 scope.go:117] "RemoveContainer" containerID="c9cc25a0d7ddccc531061c4d90bb3f93027fe259f1703cefb032858b876e74ff" Feb 19 03:44:47.196225 master-0 kubenswrapper[33867]: I0219 03:44:47.195957 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-vvmrg" Feb 19 03:44:47.238306 master-0 kubenswrapper[33867]: I0219 03:44:47.236407 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-vvmrg"] Feb 19 03:44:47.253142 master-0 kubenswrapper[33867]: I0219 03:44:47.252415 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-vvmrg"] Feb 19 03:44:47.658276 master-0 kubenswrapper[33867]: W0219 03:44:47.658199 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7de7805_d8e7_4d7e_a17b_36dca355839b.slice/crio-e9d743f621b3a8188d4917f9fcfbac32bbda0fd386da091d878e97a7e8a17f96 WatchSource:0}: Error finding container e9d743f621b3a8188d4917f9fcfbac32bbda0fd386da091d878e97a7e8a17f96: Status 404 returned error can't find the container with id e9d743f621b3a8188d4917f9fcfbac32bbda0fd386da091d878e97a7e8a17f96 Feb 19 03:44:47.658893 master-0 kubenswrapper[33867]: I0219 03:44:47.658854 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-vdnxc"] Feb 19 03:44:48.211443 master-0 kubenswrapper[33867]: I0219 03:44:48.211349 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" event={"ID":"d7de7805-d8e7-4d7e-a17b-36dca355839b","Type":"ContainerStarted","Data":"f2f710448744737759041a80d68ad9c7163a555835a82ca4f5cfbf18009833ba"} Feb 19 03:44:48.211443 master-0 kubenswrapper[33867]: I0219 03:44:48.211438 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" event={"ID":"d7de7805-d8e7-4d7e-a17b-36dca355839b","Type":"ContainerStarted","Data":"e9d743f621b3a8188d4917f9fcfbac32bbda0fd386da091d878e97a7e8a17f96"} Feb 19 03:44:48.227807 master-0 kubenswrapper[33867]: I0219 03:44:48.227712 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" podStartSLOduration=2.22769282 podStartE2EDuration="2.22769282s" podCreationTimestamp="2026-02-19 03:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:44:48.225419985 +0000 UTC m=+1293.522090636" watchObservedRunningTime="2026-02-19 03:44:48.22769282 +0000 UTC m=+1293.524363431" Feb 19 03:44:48.969369 master-0 kubenswrapper[33867]: I0219 03:44:48.969284 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9123e1-da52-4f76-96e7-d5a2712ed958" path="/var/lib/kubelet/pods/bb9123e1-da52-4f76-96e7-d5a2712ed958/volumes" Feb 19 03:44:57.020759 master-0 kubenswrapper[33867]: I0219 03:44:57.020186 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:57.020759 master-0 kubenswrapper[33867]: I0219 03:44:57.020260 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:57.031876 master-0 kubenswrapper[33867]: I0219 03:44:57.031782 33867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:44:57.328437 master-0 kubenswrapper[33867]: I0219 03:44:57.328360 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-64488c485f-vdnxc" Feb 19 03:45:00.199025 
master-0 kubenswrapper[33867]: I0219 03:45:00.198870 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85"] Feb 19 03:45:00.201532 master-0 kubenswrapper[33867]: I0219 03:45:00.201499 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:00.203487 master-0 kubenswrapper[33867]: I0219 03:45:00.203428 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 03:45:00.203609 master-0 kubenswrapper[33867]: I0219 03:45:00.203447 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-7hhvr" Feb 19 03:45:00.219376 master-0 kubenswrapper[33867]: I0219 03:45:00.215422 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85"] Feb 19 03:45:00.287836 master-0 kubenswrapper[33867]: I0219 03:45:00.287667 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c006fe0-10a4-4b5c-add1-75218d551574-config-volume\") pod \"collect-profiles-29524545-gdm85\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:00.288117 master-0 kubenswrapper[33867]: I0219 03:45:00.287914 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl95g\" (UniqueName: \"kubernetes.io/projected/9c006fe0-10a4-4b5c-add1-75218d551574-kube-api-access-pl95g\") pod \"collect-profiles-29524545-gdm85\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:00.288366 master-0 kubenswrapper[33867]: I0219 03:45:00.288202 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c006fe0-10a4-4b5c-add1-75218d551574-secret-volume\") pod \"collect-profiles-29524545-gdm85\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:00.391290 master-0 kubenswrapper[33867]: I0219 03:45:00.391191 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c006fe0-10a4-4b5c-add1-75218d551574-secret-volume\") pod \"collect-profiles-29524545-gdm85\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:00.391430 master-0 kubenswrapper[33867]: I0219 03:45:00.391403 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c006fe0-10a4-4b5c-add1-75218d551574-config-volume\") pod \"collect-profiles-29524545-gdm85\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:00.391758 master-0 kubenswrapper[33867]: I0219 03:45:00.391709 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl95g\" (UniqueName: 
\"kubernetes.io/projected/9c006fe0-10a4-4b5c-add1-75218d551574-kube-api-access-pl95g\") pod \"collect-profiles-29524545-gdm85\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:00.392857 master-0 kubenswrapper[33867]: I0219 03:45:00.392806 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c006fe0-10a4-4b5c-add1-75218d551574-config-volume\") pod \"collect-profiles-29524545-gdm85\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:00.396785 master-0 kubenswrapper[33867]: I0219 03:45:00.396714 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c006fe0-10a4-4b5c-add1-75218d551574-secret-volume\") pod \"collect-profiles-29524545-gdm85\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:00.409245 master-0 kubenswrapper[33867]: I0219 03:45:00.409139 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl95g\" (UniqueName: \"kubernetes.io/projected/9c006fe0-10a4-4b5c-add1-75218d551574-kube-api-access-pl95g\") pod \"collect-profiles-29524545-gdm85\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:00.547719 master-0 kubenswrapper[33867]: I0219 03:45:00.547509 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:01.030968 master-0 kubenswrapper[33867]: I0219 03:45:01.030859 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85"] Feb 19 03:45:01.033227 master-0 kubenswrapper[33867]: W0219 03:45:01.033159 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c006fe0_10a4_4b5c_add1_75218d551574.slice/crio-d368327c4a51c53656869968b6bdb7e2416ebed895e7c1f899f0e6c6cec74a07 WatchSource:0}: Error finding container d368327c4a51c53656869968b6bdb7e2416ebed895e7c1f899f0e6c6cec74a07: Status 404 returned error can't find the container with id d368327c4a51c53656869968b6bdb7e2416ebed895e7c1f899f0e6c6cec74a07 Feb 19 03:45:01.389226 master-0 kubenswrapper[33867]: I0219 03:45:01.389151 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" event={"ID":"9c006fe0-10a4-4b5c-add1-75218d551574","Type":"ContainerStarted","Data":"d66e814ed81e62693bc918e0fdd32ea21323ce1b2fd1f13f9f147b13f12b474a"} Feb 19 03:45:01.389226 master-0 kubenswrapper[33867]: I0219 03:45:01.389214 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" event={"ID":"9c006fe0-10a4-4b5c-add1-75218d551574","Type":"ContainerStarted","Data":"d368327c4a51c53656869968b6bdb7e2416ebed895e7c1f899f0e6c6cec74a07"} Feb 19 03:45:01.423106 master-0 kubenswrapper[33867]: I0219 03:45:01.423002 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" podStartSLOduration=1.422979449 podStartE2EDuration="1.422979449s" 
podCreationTimestamp="2026-02-19 03:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 03:45:01.414981032 +0000 UTC m=+1306.711651643" watchObservedRunningTime="2026-02-19 03:45:01.422979449 +0000 UTC m=+1306.719650070" Feb 19 03:45:01.800792 master-0 kubenswrapper[33867]: E0219 03:45:01.800601 33867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c006fe0_10a4_4b5c_add1_75218d551574.slice/crio-conmon-d66e814ed81e62693bc918e0fdd32ea21323ce1b2fd1f13f9f147b13f12b474a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c006fe0_10a4_4b5c_add1_75218d551574.slice/crio-d66e814ed81e62693bc918e0fdd32ea21323ce1b2fd1f13f9f147b13f12b474a.scope\": RecentStats: unable to find data in memory cache]" Feb 19 03:45:02.406598 master-0 kubenswrapper[33867]: I0219 03:45:02.406529 33867 generic.go:334] "Generic (PLEG): container finished" podID="9c006fe0-10a4-4b5c-add1-75218d551574" containerID="d66e814ed81e62693bc918e0fdd32ea21323ce1b2fd1f13f9f147b13f12b474a" exitCode=0 Feb 19 03:45:02.407275 master-0 kubenswrapper[33867]: I0219 03:45:02.406695 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" event={"ID":"9c006fe0-10a4-4b5c-add1-75218d551574","Type":"ContainerDied","Data":"d66e814ed81e62693bc918e0fdd32ea21323ce1b2fd1f13f9f147b13f12b474a"} Feb 19 03:45:03.886190 master-0 kubenswrapper[33867]: I0219 03:45:03.886122 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:45:03.983943 master-0 kubenswrapper[33867]: I0219 03:45:03.983317 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl95g\" (UniqueName: \"kubernetes.io/projected/9c006fe0-10a4-4b5c-add1-75218d551574-kube-api-access-pl95g\") pod \"9c006fe0-10a4-4b5c-add1-75218d551574\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " Feb 19 03:45:03.983943 master-0 kubenswrapper[33867]: I0219 03:45:03.983554 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c006fe0-10a4-4b5c-add1-75218d551574-secret-volume\") pod \"9c006fe0-10a4-4b5c-add1-75218d551574\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " Feb 19 03:45:03.983943 master-0 kubenswrapper[33867]: I0219 03:45:03.983658 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c006fe0-10a4-4b5c-add1-75218d551574-config-volume\") pod \"9c006fe0-10a4-4b5c-add1-75218d551574\" (UID: \"9c006fe0-10a4-4b5c-add1-75218d551574\") " Feb 19 03:45:03.986064 master-0 kubenswrapper[33867]: I0219 03:45:03.984739 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c006fe0-10a4-4b5c-add1-75218d551574-config-volume" (OuterVolumeSpecName: "config-volume") pod "9c006fe0-10a4-4b5c-add1-75218d551574" (UID: "9c006fe0-10a4-4b5c-add1-75218d551574"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 03:45:03.987879 master-0 kubenswrapper[33867]: I0219 03:45:03.986397 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c006fe0-10a4-4b5c-add1-75218d551574-kube-api-access-pl95g" (OuterVolumeSpecName: "kube-api-access-pl95g") pod "9c006fe0-10a4-4b5c-add1-75218d551574" (UID: "9c006fe0-10a4-4b5c-add1-75218d551574"). InnerVolumeSpecName "kube-api-access-pl95g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 03:45:03.990814 master-0 kubenswrapper[33867]: I0219 03:45:03.990180 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c006fe0-10a4-4b5c-add1-75218d551574-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9c006fe0-10a4-4b5c-add1-75218d551574" (UID: "9c006fe0-10a4-4b5c-add1-75218d551574"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 03:45:04.087004 master-0 kubenswrapper[33867]: I0219 03:45:04.086946 33867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c006fe0-10a4-4b5c-add1-75218d551574-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 19 03:45:04.087004 master-0 kubenswrapper[33867]: I0219 03:45:04.086993 33867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c006fe0-10a4-4b5c-add1-75218d551574-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 19 03:45:04.087004 master-0 kubenswrapper[33867]: I0219 03:45:04.087005 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl95g\" (UniqueName: \"kubernetes.io/projected/9c006fe0-10a4-4b5c-add1-75218d551574-kube-api-access-pl95g\") on node \"master-0\" DevicePath \"\"" Feb 19 03:45:04.441388 master-0 kubenswrapper[33867]: I0219 03:45:04.441308 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" event={"ID":"9c006fe0-10a4-4b5c-add1-75218d551574","Type":"ContainerDied","Data":"d368327c4a51c53656869968b6bdb7e2416ebed895e7c1f899f0e6c6cec74a07"} Feb 19 03:45:04.441661 master-0 kubenswrapper[33867]: I0219 03:45:04.441407 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d368327c4a51c53656869968b6bdb7e2416ebed895e7c1f899f0e6c6cec74a07" Feb 19 03:45:04.441661 master-0 kubenswrapper[33867]: I0219 03:45:04.441505 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85" Feb 19 03:46:22.548729 master-0 kubenswrapper[33867]: I0219 03:46:22.548640 33867 scope.go:117] "RemoveContainer" containerID="206d5c31243b738552d8316eef6e6a53d8450a39441aac33e7dbd8d8724fc3ff" Feb 19 03:46:22.581341 master-0 kubenswrapper[33867]: I0219 03:46:22.581240 33867 scope.go:117] "RemoveContainer" containerID="880c1f22fc7be92cdd44ae4a3742c7896ff0d350c063a16e68f6697282b2e85f" Feb 19 03:46:22.663361 master-0 kubenswrapper[33867]: I0219 03:46:22.663282 33867 scope.go:117] "RemoveContainer" containerID="a3befb830c3ff3540a81d0b4338b0976abb156b99b500bafa91a08e94f701314" Feb 19 03:46:22.722477 master-0 kubenswrapper[33867]: I0219 03:46:22.722395 33867 scope.go:117] "RemoveContainer" containerID="30be8ded34fe08ac229762a1d55e716fcd25b02275e2331e3f6a9f4e5494377c" Feb 19 03:47:22.837092 master-0 kubenswrapper[33867]: I0219 03:47:22.836965 33867 scope.go:117] "RemoveContainer" containerID="96e63fb6a3a0517f7dc81e5e72756aa7a3d4b35a30f9008e95d266c5d42bc56f" Feb 19 03:49:38.061143 master-0 kubenswrapper[33867]: I0219 03:49:38.061065 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fdbk4"] Feb 19 03:49:38.080746 master-0 kubenswrapper[33867]: I0219 03:49:38.080682 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fdbk4"] Feb 19 03:49:38.973468 master-0 kubenswrapper[33867]: I0219 03:49:38.973390 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28d69938-9e32-4f94-afcd-db24ad9fde34" path="/var/lib/kubelet/pods/28d69938-9e32-4f94-afcd-db24ad9fde34/volumes" Feb 19 03:49:39.043852 master-0 kubenswrapper[33867]: I0219 03:49:39.043799 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-j9b2d"] Feb 19 03:49:39.062435 master-0 kubenswrapper[33867]: I0219 03:49:39.062306 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3d8b-account-create-update-h4wh9"] Feb 19 03:49:39.075315 master-0 kubenswrapper[33867]: I0219 03:49:39.075225 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-j9b2d"] Feb 19 03:49:39.086991 master-0 kubenswrapper[33867]: I0219 03:49:39.086919 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8a6d-account-create-update-2gsvr"] Feb 19 03:49:39.099316 master-0 kubenswrapper[33867]: I0219 03:49:39.099224 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3d8b-account-create-update-h4wh9"] Feb 19 03:49:39.111669 master-0 kubenswrapper[33867]: I0219 03:49:39.111540 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8a6d-account-create-update-2gsvr"] Feb 19 03:49:40.975164 master-0 kubenswrapper[33867]: I0219 03:49:40.975099 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe3535d-e926-4941-ac29-a9af927e1fd9" path="/var/lib/kubelet/pods/4fe3535d-e926-4941-ac29-a9af927e1fd9/volumes" Feb 19 03:49:40.975880 master-0 kubenswrapper[33867]: I0219 03:49:40.975838 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e3386c-4280-492d-b87c-f6d9ae925f35" path="/var/lib/kubelet/pods/68e3386c-4280-492d-b87c-f6d9ae925f35/volumes" Feb 19 03:49:40.976473 master-0 kubenswrapper[33867]: I0219 03:49:40.976441 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b938784f-b544-4020-a421-1d886966170c" 
path="/var/lib/kubelet/pods/b938784f-b544-4020-a421-1d886966170c/volumes" Feb 19 03:49:42.051142 master-0 kubenswrapper[33867]: I0219 03:49:42.051047 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-nzmld"] Feb 19 03:49:42.063614 master-0 kubenswrapper[33867]: I0219 03:49:42.063544 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8e36-account-create-update-kvwtv"] Feb 19 03:49:42.075232 master-0 kubenswrapper[33867]: I0219 03:49:42.075177 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-nzmld"] Feb 19 03:49:42.086093 master-0 kubenswrapper[33867]: I0219 03:49:42.086018 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8e36-account-create-update-kvwtv"] Feb 19 03:49:42.969854 master-0 kubenswrapper[33867]: I0219 03:49:42.969783 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8758db66-f063-425c-b8a4-3c6b519d7775" path="/var/lib/kubelet/pods/8758db66-f063-425c-b8a4-3c6b519d7775/volumes" Feb 19 03:49:42.970398 master-0 kubenswrapper[33867]: I0219 03:49:42.970369 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5752474-93c5-40bc-b4c5-ac1fb797a211" path="/var/lib/kubelet/pods/e5752474-93c5-40bc-b4c5-ac1fb797a211/volumes" Feb 19 03:50:00.072292 master-0 kubenswrapper[33867]: I0219 03:50:00.072057 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j8t8n"] Feb 19 03:50:00.089982 master-0 kubenswrapper[33867]: I0219 03:50:00.089880 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j8t8n"] Feb 19 03:50:00.971518 master-0 kubenswrapper[33867]: I0219 03:50:00.971460 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32f19ad3-7091-420d-8d57-8ee226e6930a" path="/var/lib/kubelet/pods/32f19ad3-7091-420d-8d57-8ee226e6930a/volumes" Feb 19 03:50:09.050279 master-0 kubenswrapper[33867]: I0219 03:50:09.050177 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ggcz5"] Feb 19 03:50:09.073288 master-0 kubenswrapper[33867]: I0219 03:50:09.065438 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-ggcz5"] Feb 19 03:50:10.972493 master-0 kubenswrapper[33867]: I0219 03:50:10.972410 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb02f35-95af-4c12-b5c6-d936cddcbf51" path="/var/lib/kubelet/pods/fdb02f35-95af-4c12-b5c6-d936cddcbf51/volumes" Feb 19 03:50:18.057629 master-0 kubenswrapper[33867]: I0219 03:50:18.057534 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-f8sf9"] Feb 19 03:50:18.071242 master-0 kubenswrapper[33867]: I0219 03:50:18.071172 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-f8sf9"] Feb 19 03:50:18.976715 master-0 kubenswrapper[33867]: I0219 03:50:18.976650 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4535337-2a9c-4883-b1c4-f3b066d521e6" path="/var/lib/kubelet/pods/c4535337-2a9c-4883-b1c4-f3b066d521e6/volumes" Feb 19 03:50:22.086289 master-0 kubenswrapper[33867]: I0219 03:50:22.083351 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dcdf-account-create-update-5j6ts"] Feb 19 03:50:22.105963 master-0 kubenswrapper[33867]: I0219 03:50:22.105888 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f7f8-account-create-update-r5x64"] Feb 19 
03:50:22.120344 master-0 kubenswrapper[33867]: I0219 03:50:22.120288 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-dcdf-account-create-update-5j6ts"] Feb 19 03:50:22.137872 master-0 kubenswrapper[33867]: I0219 03:50:22.137810 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-scqnr"] Feb 19 03:50:22.157286 master-0 kubenswrapper[33867]: I0219 03:50:22.157217 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-scqnr"] Feb 19 03:50:22.175275 master-0 kubenswrapper[33867]: I0219 03:50:22.172515 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f7f8-account-create-update-r5x64"] Feb 19 03:50:22.972596 master-0 kubenswrapper[33867]: I0219 03:50:22.972370 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815c75b3-ac10-40a0-8467-8a168c2ff550" path="/var/lib/kubelet/pods/815c75b3-ac10-40a0-8467-8a168c2ff550/volumes" Feb 19 03:50:22.973862 master-0 kubenswrapper[33867]: I0219 03:50:22.972986 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5af8566-34bb-49b8-821d-8b3c4d1aeb21" path="/var/lib/kubelet/pods/a5af8566-34bb-49b8-821d-8b3c4d1aeb21/volumes" Feb 19 03:50:22.973862 master-0 kubenswrapper[33867]: I0219 03:50:22.973762 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f772a6-d473-476d-bd72-4af600e017bf" path="/var/lib/kubelet/pods/a5f772a6-d473-476d-bd72-4af600e017bf/volumes" Feb 19 03:50:23.005607 master-0 kubenswrapper[33867]: I0219 03:50:23.005536 33867 scope.go:117] "RemoveContainer" containerID="d1072ee730f646ef2d1e47eafdf25e50e3f661e876948f9f506020eae9fa8722" Feb 19 03:50:23.036437 master-0 kubenswrapper[33867]: I0219 03:50:23.036098 33867 scope.go:117] "RemoveContainer" containerID="347e5e08227fc9feb7ad5a2dcaa40fc017776f6646f40cdbcd278fcf9d499e5c" Feb 19 03:50:23.099491 master-0 kubenswrapper[33867]: I0219 03:50:23.099431 33867 scope.go:117] "RemoveContainer" containerID="d5921387d77b1b1d4d721e164a9c0b87d2bba12285b5a9f8a9815015d047386b" Feb 19 03:50:23.148197 master-0 kubenswrapper[33867]: I0219 03:50:23.148121 33867 scope.go:117] "RemoveContainer" containerID="8a7125323eb472c003a3405cbff1282d27339bd6440c527e9ed93cff3a27a964" Feb 19 03:50:23.237404 master-0 kubenswrapper[33867]: I0219 03:50:23.237369 33867 scope.go:117] "RemoveContainer" containerID="dccf318d2ba35240729b2ebea5a3fe06c080e75ca80ff6c38921fb581d6a2b20" Feb 19 03:50:23.285663 master-0 kubenswrapper[33867]: I0219 03:50:23.285603 33867 scope.go:117] "RemoveContainer" containerID="6ddfd6bd4e3bee2a03f0cb0b73eb42597059996dadbd7af16d50234aaf8d3e9c" Feb 19 03:50:23.341951 master-0 kubenswrapper[33867]: I0219 03:50:23.341790 33867 scope.go:117] "RemoveContainer" containerID="a92202a91b03132c07cbb5e8bb6ff218814869f12b2b03f84bc9a7348fdb4e71" Feb 19 03:50:23.370444 master-0 kubenswrapper[33867]: I0219 03:50:23.370390 33867 scope.go:117] "RemoveContainer" containerID="445eef0f01e829b411d52965f5d442419faf3eb0b6d103d75d5df3bde27ef6d3" Feb 19 03:50:23.397924 master-0 kubenswrapper[33867]: I0219 03:50:23.397856 33867 scope.go:117] "RemoveContainer" containerID="bcea34297c5df201a2ed94d6ba62e5fcaf5246b202f6e5fe5609379505454580" Feb 19 03:50:23.428883 master-0 kubenswrapper[33867]: I0219 03:50:23.428838 33867 scope.go:117] "RemoveContainer" containerID="69ded7c6d4e31baa8649c379a1704c0f8d302f46777238996c09c9500fb0c94f" Feb 19 03:50:23.458269 master-0 kubenswrapper[33867]: I0219 03:50:23.458087 33867 scope.go:117] "RemoveContainer" 
containerID="603c2ca1b1f7567e4c614f3f791f9fac4b5b6b3ae5745160f6c0a3f7fc2fb736" Feb 19 03:50:23.482141 master-0 kubenswrapper[33867]: I0219 03:50:23.482084 33867 scope.go:117] "RemoveContainer" containerID="eb9f0acfaeaed9258806140fb6ca98f4342cb34f558dcb95cd790bccb2aa1683" Feb 19 03:50:28.053101 master-0 kubenswrapper[33867]: I0219 03:50:28.053009 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ctljd"] Feb 19 03:50:28.074968 master-0 kubenswrapper[33867]: I0219 03:50:28.074871 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ctljd"] Feb 19 03:50:28.967908 master-0 kubenswrapper[33867]: I0219 03:50:28.967835 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a336761-686b-44e6-b441-b76aebf36dba" path="/var/lib/kubelet/pods/5a336761-686b-44e6-b441-b76aebf36dba/volumes" Feb 19 03:50:34.052668 master-0 kubenswrapper[33867]: I0219 03:50:34.052374 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-b7dmh"] Feb 19 03:50:34.073441 master-0 kubenswrapper[33867]: I0219 03:50:34.072808 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-b7dmh"] Feb 19 03:50:34.973574 master-0 kubenswrapper[33867]: I0219 03:50:34.973516 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62650dfe-cc8e-4ee2-8926-d9a80610d90c" path="/var/lib/kubelet/pods/62650dfe-cc8e-4ee2-8926-d9a80610d90c/volumes" Feb 19 03:50:37.311606 master-0 kubenswrapper[33867]: I0219 03:50:37.311542 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-12f5-account-create-update-ch74c"] Feb 19 03:50:37.325753 master-0 kubenswrapper[33867]: I0219 03:50:37.325681 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-12f5-account-create-update-ch74c"] Feb 19 03:50:38.971104 master-0 kubenswrapper[33867]: I0219 03:50:38.971041 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d74122-a24a-4d79-acd2-6071763c2d3e" path="/var/lib/kubelet/pods/98d74122-a24a-4d79-acd2-6071763c2d3e/volumes" Feb 19 03:50:51.133842 master-0 kubenswrapper[33867]: I0219 03:50:51.133770 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2fmpd"] Feb 19 03:50:51.150601 master-0 kubenswrapper[33867]: I0219 03:50:51.150320 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2fmpd"] Feb 19 03:50:52.971757 master-0 kubenswrapper[33867]: I0219 03:50:52.971694 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8737f70a-6ee7-4124-a049-aefd62a7b446" path="/var/lib/kubelet/pods/8737f70a-6ee7-4124-a049-aefd62a7b446/volumes" Feb 19 03:50:58.049609 master-0 kubenswrapper[33867]: I0219 03:50:58.049515 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-79nl9"] Feb 19 03:50:58.064167 master-0 kubenswrapper[33867]: I0219 03:50:58.064081 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-79nl9"] Feb 19 03:50:58.982547 master-0 kubenswrapper[33867]: I0219 03:50:58.981484 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cb720f5-9fcb-4763-b481-5feb7cc0d395" path="/var/lib/kubelet/pods/5cb720f5-9fcb-4763-b481-5feb7cc0d395/volumes" Feb 19 03:51:02.058710 master-0 kubenswrapper[33867]: I0219 03:51:02.058610 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-054a4-db-sync-hjrc5"] Feb 19 03:51:02.073458 
master-0 kubenswrapper[33867]: I0219 03:51:02.073383 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-054a4-db-sync-hjrc5"] Feb 19 03:51:02.970328 master-0 kubenswrapper[33867]: I0219 03:51:02.970240 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c64d242-8a65-449e-b014-dc5fc42878e2" path="/var/lib/kubelet/pods/4c64d242-8a65-449e-b014-dc5fc42878e2/volumes" Feb 19 03:51:04.046902 master-0 kubenswrapper[33867]: I0219 03:51:04.046739 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-cwnd9"] Feb 19 03:51:04.068559 master-0 kubenswrapper[33867]: I0219 03:51:04.068471 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-cwnd9"] Feb 19 03:51:04.983783 master-0 kubenswrapper[33867]: I0219 03:51:04.983669 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b067fa1c-719d-41db-a4be-d5d7d1125a67" path="/var/lib/kubelet/pods/b067fa1c-719d-41db-a4be-d5d7d1125a67/volumes" Feb 19 03:51:17.058783 master-0 kubenswrapper[33867]: I0219 03:51:17.058458 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-sync-lr9n7"] Feb 19 03:51:17.080573 master-0 kubenswrapper[33867]: I0219 03:51:17.080445 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-sync-lr9n7"] Feb 19 03:51:18.975322 master-0 kubenswrapper[33867]: I0219 03:51:18.975193 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ede5f4-a9ae-46ab-a72c-6575bb04274e" path="/var/lib/kubelet/pods/52ede5f4-a9ae-46ab-a72c-6575bb04274e/volumes" Feb 19 03:51:23.784832 master-0 kubenswrapper[33867]: I0219 03:51:23.784789 33867 scope.go:117] "RemoveContainer" containerID="02de6761ebd5c08cf3e8572c3f2af8d4010bdf840502a627e7c45b51ff211373" Feb 19 03:51:23.811987 master-0 kubenswrapper[33867]: I0219 03:51:23.811941 33867 scope.go:117] "RemoveContainer" containerID="bdc0afb9bfa2ca5fb826283db2cd7262c208127f9350529c50ad559a12dbc648" Feb 19 03:51:23.881089 master-0 kubenswrapper[33867]: I0219 03:51:23.881024 33867 scope.go:117] "RemoveContainer" containerID="eaa1796402746dafcd60dfab5ccc98b8c155d7252eb59114005de4955ca53483" Feb 19 03:51:23.954810 master-0 kubenswrapper[33867]: I0219 03:51:23.954522 33867 scope.go:117] "RemoveContainer" containerID="d4d68324cbf3d5d95dbb06b27c1427136717b42f247eef3684b268e8fc5d9241" Feb 19 03:51:24.004428 master-0 kubenswrapper[33867]: I0219 03:51:24.004394 33867 scope.go:117] "RemoveContainer" containerID="26d5e9d0505e933d1ecf14b6e568c00321787d92e14d5bb4510ed17cb6c57a1e" Feb 19 03:51:24.061610 master-0 kubenswrapper[33867]: I0219 03:51:24.061570 33867 scope.go:117] "RemoveContainer" containerID="59978e481f8873b5dac7b2a92084e0ba9f3ec221397112dc70dc271d4b647d2c" Feb 19 03:51:24.114086 master-0 kubenswrapper[33867]: I0219 03:51:24.113819 33867 scope.go:117] "RemoveContainer" containerID="afc1fd4f48865a81f7399bac483d4183b5e64fcaa74d0591fa04a876304b9931" Feb 19 03:51:24.145188 master-0 kubenswrapper[33867]: I0219 03:51:24.143920 33867 scope.go:117] "RemoveContainer" containerID="b2ba44abc1386dc028a3c98d31fe9c8fe407e33d34bb426a05961ab500612f4d" Feb 19 03:51:24.222683 master-0 kubenswrapper[33867]: I0219 03:51:24.222637 33867 scope.go:117] "RemoveContainer" containerID="ffe793697dc15f4876837918fb80731a301cbf5972feb45dc2376ea0bb9619c4" Feb 19 03:51:25.059954 master-0 kubenswrapper[33867]: I0219 03:51:25.059849 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-4nkcc"] Feb 19 
03:51:25.075377 master-0 kubenswrapper[33867]: I0219 03:51:25.075323 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-62af-account-create-update-7qh7b"] Feb 19 03:51:25.088810 master-0 kubenswrapper[33867]: I0219 03:51:25.088752 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-4nkcc"] Feb 19 03:51:25.103613 master-0 kubenswrapper[33867]: I0219 03:51:25.102106 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-62af-account-create-update-7qh7b"] Feb 19 03:51:26.973792 master-0 kubenswrapper[33867]: I0219 03:51:26.973699 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b00abf9-7737-4850-a303-979795c4b0a3" path="/var/lib/kubelet/pods/4b00abf9-7737-4850-a303-979795c4b0a3/volumes" Feb 19 03:51:26.974833 master-0 kubenswrapper[33867]: I0219 03:51:26.974810 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8580d959-3bd3-4893-8c87-9376d87cba49" path="/var/lib/kubelet/pods/8580d959-3bd3-4893-8c87-9376d87cba49/volumes" Feb 19 03:51:53.068673 master-0 kubenswrapper[33867]: I0219 03:51:53.067348 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-360e-account-create-update-mwmgf"] Feb 19 03:51:53.081788 master-0 kubenswrapper[33867]: I0219 03:51:53.081709 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-74msg"] Feb 19 03:51:53.094822 master-0 kubenswrapper[33867]: I0219 03:51:53.094001 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ab43-account-create-update-jwqxb"] Feb 19 03:51:53.107815 master-0 kubenswrapper[33867]: I0219 03:51:53.107718 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1db7-account-create-update-kprcb"] Feb 19 03:51:53.120201 master-0 kubenswrapper[33867]: I0219 03:51:53.120131 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-k2929"] Feb 19 03:51:53.132473 master-0 kubenswrapper[33867]: I0219 03:51:53.132398 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-sync-nrrkp"] Feb 19 03:51:53.145139 master-0 kubenswrapper[33867]: I0219 03:51:53.145060 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vv24r"] Feb 19 03:51:53.155369 master-0 kubenswrapper[33867]: I0219 03:51:53.155320 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-360e-account-create-update-mwmgf"] Feb 19 03:51:53.165645 master-0 kubenswrapper[33867]: I0219 03:51:53.165566 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1db7-account-create-update-kprcb"] Feb 19 03:51:53.176118 master-0 kubenswrapper[33867]: I0219 03:51:53.176041 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-k2929"] Feb 19 03:51:53.187645 master-0 kubenswrapper[33867]: I0219 03:51:53.187588 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-74msg"] Feb 19 03:51:53.202509 master-0 kubenswrapper[33867]: I0219 03:51:53.202434 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ab43-account-create-update-jwqxb"] Feb 19 03:51:53.217306 master-0 kubenswrapper[33867]: I0219 03:51:53.217228 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-sync-nrrkp"] Feb 19 03:51:53.231978 master-0 kubenswrapper[33867]: I0219 
03:51:53.231158 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vv24r"] Feb 19 03:51:54.990778 master-0 kubenswrapper[33867]: I0219 03:51:54.990664 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2afeaeae-53cb-4753-8240-ed7c0a892395" path="/var/lib/kubelet/pods/2afeaeae-53cb-4753-8240-ed7c0a892395/volumes" Feb 19 03:51:54.992178 master-0 kubenswrapper[33867]: I0219 03:51:54.992120 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e338259-396c-42e3-9a9d-235ec62fb521" path="/var/lib/kubelet/pods/4e338259-396c-42e3-9a9d-235ec62fb521/volumes" Feb 19 03:51:54.993665 master-0 kubenswrapper[33867]: I0219 03:51:54.993623 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed5cbcb-0a9e-4561-b21e-0c84b806e725" path="/var/lib/kubelet/pods/6ed5cbcb-0a9e-4561-b21e-0c84b806e725/volumes" Feb 19 03:51:54.994504 master-0 kubenswrapper[33867]: I0219 03:51:54.994466 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765534b3-48eb-4db3-9413-fbe831f2bf9f" path="/var/lib/kubelet/pods/765534b3-48eb-4db3-9413-fbe831f2bf9f/volumes" Feb 19 03:51:54.995486 master-0 kubenswrapper[33867]: I0219 03:51:54.995448 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92a62e19-1f19-49fd-b843-eafb8bc78662" path="/var/lib/kubelet/pods/92a62e19-1f19-49fd-b843-eafb8bc78662/volumes" Feb 19 03:51:54.998380 master-0 kubenswrapper[33867]: I0219 03:51:54.998316 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c772151f-fa4c-44ae-8d31-3e53872c20e7" path="/var/lib/kubelet/pods/c772151f-fa4c-44ae-8d31-3e53872c20e7/volumes" Feb 19 03:51:55.000296 master-0 kubenswrapper[33867]: I0219 03:51:55.000222 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de8fffe4-e342-4016-a543-c65edd216c52" path="/var/lib/kubelet/pods/de8fffe4-e342-4016-a543-c65edd216c52/volumes" Feb 19 03:52:24.429199 master-0 kubenswrapper[33867]: I0219 03:52:24.429094 33867 scope.go:117] "RemoveContainer" containerID="0000f4af2d1f1eb5dcf02bd517f26339cea715a82ade4523994de3a86922c3fe" Feb 19 03:52:24.476051 master-0 kubenswrapper[33867]: I0219 03:52:24.475952 33867 scope.go:117] "RemoveContainer" containerID="abaee50973a80a362a798731ce0802ec29104a410488ef8a45f9ffbf5fbb5e0d" Feb 19 03:52:24.543303 master-0 kubenswrapper[33867]: I0219 03:52:24.543232 33867 scope.go:117] "RemoveContainer" containerID="30d326181de156200152b9cb491c5899ec1eafd983b5686bc1f94e01869f0def" Feb 19 03:52:24.590580 master-0 kubenswrapper[33867]: I0219 03:52:24.590505 33867 scope.go:117] "RemoveContainer" containerID="5371ab098b84ec475c9fadb1fa5f73ece91d9af7e61bfb520da553bd8c87c722" Feb 19 03:52:24.649791 master-0 kubenswrapper[33867]: I0219 03:52:24.649323 33867 scope.go:117] "RemoveContainer" containerID="7205fa93093ff42d5c1fb033abaea407b8d51f70447e25e46434afc0b7cd08fa" Feb 19 03:52:24.687534 master-0 kubenswrapper[33867]: I0219 03:52:24.687472 33867 scope.go:117] "RemoveContainer" containerID="cd18f4f021a44060dd7dc69108acdb0267f11edc3f9c4b05e01d997d55d3da13" Feb 19 03:52:24.745423 master-0 kubenswrapper[33867]: I0219 03:52:24.745121 33867 scope.go:117] "RemoveContainer" containerID="90ed69c5c72dcbda55a591257555aed331ced0e416d17dac84d097dc8e15aaed" Feb 19 03:52:24.773085 master-0 kubenswrapper[33867]: I0219 03:52:24.773003 33867 scope.go:117] "RemoveContainer" containerID="5aeebf88311f5d83c0bf9a90159f062702907d4c92ba9d7d4538b4971994e9bb" Feb 19 03:52:24.803643 master-0 
kubenswrapper[33867]: I0219 03:52:24.803603 33867 scope.go:117] "RemoveContainer" containerID="67c9f952d452920777d202739f9534332a26b234c7b532a036c0f705ea898107" Feb 19 03:52:49.093653 master-0 kubenswrapper[33867]: I0219 03:52:49.093141 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nt89l"] Feb 19 03:52:49.120361 master-0 kubenswrapper[33867]: I0219 03:52:49.119881 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nt89l"] Feb 19 03:52:50.982200 master-0 kubenswrapper[33867]: I0219 03:52:50.982007 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89845d0a-587f-448f-802a-16572691093c" path="/var/lib/kubelet/pods/89845d0a-587f-448f-802a-16572691093c/volumes" Feb 19 03:53:13.053090 master-0 kubenswrapper[33867]: I0219 03:53:13.052980 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-548gx"] Feb 19 03:53:13.070141 master-0 kubenswrapper[33867]: I0219 03:53:13.069896 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-548gx"] Feb 19 03:53:14.067296 master-0 kubenswrapper[33867]: I0219 03:53:14.057174 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-47sq4"] Feb 19 03:53:14.074293 master-0 kubenswrapper[33867]: I0219 03:53:14.070567 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-47sq4"] Feb 19 03:53:14.978052 master-0 kubenswrapper[33867]: I0219 03:53:14.977982 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a82b2c2-4eab-407e-a67e-07ecc654db86" path="/var/lib/kubelet/pods/2a82b2c2-4eab-407e-a67e-07ecc654db86/volumes" Feb 19 03:53:14.979614 master-0 kubenswrapper[33867]: I0219 03:53:14.979588 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ac7e83-4fa3-459f-aa01-c4c5950264f0" path="/var/lib/kubelet/pods/e3ac7e83-4fa3-459f-aa01-c4c5950264f0/volumes" Feb 19 03:53:25.101581 master-0 kubenswrapper[33867]: I0219 03:53:25.101522 33867 scope.go:117] "RemoveContainer" containerID="183c84739896a2a05db42b3b58f40c7fd5146e6e44e8ebcd7dc0af1107227754" Feb 19 03:53:25.163130 master-0 kubenswrapper[33867]: I0219 03:53:25.163059 33867 scope.go:117] "RemoveContainer" containerID="2095aca31c73a2200bb902b29ae7ea255905655c0cc23ac246cbeb8321f223ce" Feb 19 03:53:25.220179 master-0 kubenswrapper[33867]: I0219 03:53:25.218855 33867 scope.go:117] "RemoveContainer" containerID="2d88822f7aeaf366f49d7cc01d5ad974851322c8a543dff23f5c1c32aa47c5a1" Feb 19 03:53:51.081916 master-0 kubenswrapper[33867]: I0219 03:53:51.081705 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-host-discover-x6cl9"] Feb 19 03:53:51.092432 master-0 kubenswrapper[33867]: I0219 03:53:51.092340 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-host-discover-x6cl9"] Feb 19 03:53:52.968783 master-0 kubenswrapper[33867]: I0219 03:53:52.968445 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2540117-66a4-4bde-80ce-e1c15c51b076" path="/var/lib/kubelet/pods/f2540117-66a4-4bde-80ce-e1c15c51b076/volumes" Feb 19 03:53:53.068093 master-0 kubenswrapper[33867]: I0219 03:53:53.067768 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bhrf8"] Feb 19 03:53:53.080712 master-0 kubenswrapper[33867]: I0219 03:53:53.080637 33867 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell1-cell-mapping-bhrf8"] Feb 19 03:53:54.983081 master-0 kubenswrapper[33867]: I0219 03:53:54.983015 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d72d9962-afbf-436d-9250-37b6ae7f252d" path="/var/lib/kubelet/pods/d72d9962-afbf-436d-9250-37b6ae7f252d/volumes" Feb 19 03:54:25.381454 master-0 kubenswrapper[33867]: I0219 03:54:25.381350 33867 scope.go:117] "RemoveContainer" containerID="d45ee3fba32f135b55d03d03520c7c53e77d331357ff2f4091088182cc20afee" Feb 19 03:54:25.433184 master-0 kubenswrapper[33867]: I0219 03:54:25.433115 33867 scope.go:117] "RemoveContainer" containerID="972c1b384cd84e7e036a42b112d4ec39ba13e359aa04f55f923d9cc8f8e22ad6" Feb 19 04:00:00.165040 master-0 kubenswrapper[33867]: I0219 04:00:00.164967 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd"] Feb 19 04:00:00.165689 master-0 kubenswrapper[33867]: E0219 04:00:00.165588 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c006fe0-10a4-4b5c-add1-75218d551574" containerName="collect-profiles" Feb 19 04:00:00.165689 master-0 kubenswrapper[33867]: I0219 04:00:00.165604 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c006fe0-10a4-4b5c-add1-75218d551574" containerName="collect-profiles" Feb 19 04:00:00.165929 master-0 kubenswrapper[33867]: I0219 04:00:00.165906 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c006fe0-10a4-4b5c-add1-75218d551574" containerName="collect-profiles" Feb 19 04:00:00.168396 master-0 kubenswrapper[33867]: I0219 04:00:00.168359 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:00.171541 master-0 kubenswrapper[33867]: I0219 04:00:00.171491 33867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-7hhvr" Feb 19 04:00:00.171658 master-0 kubenswrapper[33867]: I0219 04:00:00.171499 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 04:00:00.178397 master-0 kubenswrapper[33867]: I0219 04:00:00.178204 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd"] Feb 19 04:00:00.275273 master-0 kubenswrapper[33867]: I0219 04:00:00.275157 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98wpf\" (UniqueName: \"kubernetes.io/projected/8e628432-bcdd-424c-823c-b87d45b58936-kube-api-access-98wpf\") pod \"collect-profiles-29524560-m9mdd\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:00.275509 master-0 kubenswrapper[33867]: I0219 04:00:00.275424 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e628432-bcdd-424c-823c-b87d45b58936-secret-volume\") pod \"collect-profiles-29524560-m9mdd\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:00.275670 master-0 kubenswrapper[33867]: I0219 04:00:00.275635 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8e628432-bcdd-424c-823c-b87d45b58936-config-volume\") pod \"collect-profiles-29524560-m9mdd\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:00.379053 master-0 kubenswrapper[33867]: I0219 04:00:00.378982 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98wpf\" (UniqueName: \"kubernetes.io/projected/8e628432-bcdd-424c-823c-b87d45b58936-kube-api-access-98wpf\") pod \"collect-profiles-29524560-m9mdd\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:00.379248 master-0 kubenswrapper[33867]: I0219 04:00:00.379069 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e628432-bcdd-424c-823c-b87d45b58936-secret-volume\") pod \"collect-profiles-29524560-m9mdd\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:00.379375 master-0 kubenswrapper[33867]: I0219 04:00:00.379321 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e628432-bcdd-424c-823c-b87d45b58936-config-volume\") pod \"collect-profiles-29524560-m9mdd\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:00.380392 master-0 kubenswrapper[33867]: I0219 04:00:00.380360 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e628432-bcdd-424c-823c-b87d45b58936-config-volume\") pod \"collect-profiles-29524560-m9mdd\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:00.383134 master-0 kubenswrapper[33867]: I0219 04:00:00.383041 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e628432-bcdd-424c-823c-b87d45b58936-secret-volume\") pod \"collect-profiles-29524560-m9mdd\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:00.406087 master-0 kubenswrapper[33867]: I0219 04:00:00.406016 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98wpf\" (UniqueName: \"kubernetes.io/projected/8e628432-bcdd-424c-823c-b87d45b58936-kube-api-access-98wpf\") pod \"collect-profiles-29524560-m9mdd\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:00.492482 master-0 kubenswrapper[33867]: I0219 04:00:00.492439 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:00.982792 master-0 kubenswrapper[33867]: W0219 04:00:00.982724 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e628432_bcdd_424c_823c_b87d45b58936.slice/crio-bc5a6d3e21dcb8aa3396fc28cabc6a6911bb8497c497b5b09b6207051551b22a WatchSource:0}: Error finding container bc5a6d3e21dcb8aa3396fc28cabc6a6911bb8497c497b5b09b6207051551b22a: Status 404 returned error can't find the container with id bc5a6d3e21dcb8aa3396fc28cabc6a6911bb8497c497b5b09b6207051551b22a Feb 19 04:00:00.983812 master-0 kubenswrapper[33867]: I0219 04:00:00.983735 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd"] Feb 19 04:00:01.223607 master-0 kubenswrapper[33867]: I0219 04:00:01.223516 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" event={"ID":"8e628432-bcdd-424c-823c-b87d45b58936","Type":"ContainerStarted","Data":"b16f5fd15c5dd608b42e101de638319e9500f99346ebaf5506e1d05e69d2efd9"} Feb 19 04:00:01.223607 master-0 kubenswrapper[33867]: I0219 04:00:01.223585 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" event={"ID":"8e628432-bcdd-424c-823c-b87d45b58936","Type":"ContainerStarted","Data":"bc5a6d3e21dcb8aa3396fc28cabc6a6911bb8497c497b5b09b6207051551b22a"} Feb 19 04:00:01.249122 master-0 kubenswrapper[33867]: I0219 04:00:01.248884 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" podStartSLOduration=1.248866091 podStartE2EDuration="1.248866091s" podCreationTimestamp="2026-02-19 04:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 04:00:01.242644044 +0000 UTC m=+2206.539314655" watchObservedRunningTime="2026-02-19 04:00:01.248866091 +0000 UTC m=+2206.545536702" Feb 19 04:00:02.241553 master-0 kubenswrapper[33867]: I0219 04:00:02.241459 33867 generic.go:334] "Generic (PLEG): container finished" podID="8e628432-bcdd-424c-823c-b87d45b58936" containerID="b16f5fd15c5dd608b42e101de638319e9500f99346ebaf5506e1d05e69d2efd9" exitCode=0 Feb 19 04:00:02.241553 master-0 kubenswrapper[33867]: I0219 04:00:02.241543 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" event={"ID":"8e628432-bcdd-424c-823c-b87d45b58936","Type":"ContainerDied","Data":"b16f5fd15c5dd608b42e101de638319e9500f99346ebaf5506e1d05e69d2efd9"} Feb 19 04:00:03.793241 master-0 kubenswrapper[33867]: I0219 04:00:03.793154 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:03.872248 master-0 kubenswrapper[33867]: I0219 04:00:03.872158 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e628432-bcdd-424c-823c-b87d45b58936-config-volume\") pod \"8e628432-bcdd-424c-823c-b87d45b58936\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " Feb 19 04:00:03.872608 master-0 kubenswrapper[33867]: I0219 04:00:03.872403 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e628432-bcdd-424c-823c-b87d45b58936-secret-volume\") pod \"8e628432-bcdd-424c-823c-b87d45b58936\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " Feb 19 04:00:03.872608 master-0 kubenswrapper[33867]: I0219 04:00:03.872562 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98wpf\" (UniqueName: \"kubernetes.io/projected/8e628432-bcdd-424c-823c-b87d45b58936-kube-api-access-98wpf\") pod \"8e628432-bcdd-424c-823c-b87d45b58936\" (UID: \"8e628432-bcdd-424c-823c-b87d45b58936\") " Feb 19 04:00:03.875974 master-0 kubenswrapper[33867]: I0219 04:00:03.875921 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e628432-bcdd-424c-823c-b87d45b58936-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8e628432-bcdd-424c-823c-b87d45b58936" (UID: "8e628432-bcdd-424c-823c-b87d45b58936"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 04:00:03.876822 master-0 kubenswrapper[33867]: I0219 04:00:03.876774 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e628432-bcdd-424c-823c-b87d45b58936-config-volume" (OuterVolumeSpecName: "config-volume") pod "8e628432-bcdd-424c-823c-b87d45b58936" (UID: "8e628432-bcdd-424c-823c-b87d45b58936"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 04:00:03.878524 master-0 kubenswrapper[33867]: I0219 04:00:03.878477 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e628432-bcdd-424c-823c-b87d45b58936-kube-api-access-98wpf" (OuterVolumeSpecName: "kube-api-access-98wpf") pod "8e628432-bcdd-424c-823c-b87d45b58936" (UID: "8e628432-bcdd-424c-823c-b87d45b58936"). InnerVolumeSpecName "kube-api-access-98wpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 04:00:03.977172 master-0 kubenswrapper[33867]: I0219 04:00:03.977106 33867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e628432-bcdd-424c-823c-b87d45b58936-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 19 04:00:03.977172 master-0 kubenswrapper[33867]: I0219 04:00:03.977142 33867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e628432-bcdd-424c-823c-b87d45b58936-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 19 04:00:03.977172 master-0 kubenswrapper[33867]: I0219 04:00:03.977153 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98wpf\" (UniqueName: \"kubernetes.io/projected/8e628432-bcdd-424c-823c-b87d45b58936-kube-api-access-98wpf\") on node \"master-0\" DevicePath \"\"" Feb 19 04:00:04.285140 master-0 kubenswrapper[33867]: I0219 04:00:04.285012 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" event={"ID":"8e628432-bcdd-424c-823c-b87d45b58936","Type":"ContainerDied","Data":"bc5a6d3e21dcb8aa3396fc28cabc6a6911bb8497c497b5b09b6207051551b22a"} Feb 19 04:00:04.285140 master-0 kubenswrapper[33867]: I0219 04:00:04.285066 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc5a6d3e21dcb8aa3396fc28cabc6a6911bb8497c497b5b09b6207051551b22a" Feb 19 04:00:04.285500 master-0 kubenswrapper[33867]: I0219 04:00:04.285462 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd" Feb 19 04:00:04.362275 master-0 kubenswrapper[33867]: I0219 04:00:04.362191 33867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt"] Feb 19 04:00:04.375394 master-0 kubenswrapper[33867]: I0219 04:00:04.375314 33867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt"] Feb 19 04:00:04.972320 master-0 kubenswrapper[33867]: I0219 04:00:04.972016 33867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e08a5432-b9f1-4b15-84c4-df9d6276a414" path="/var/lib/kubelet/pods/e08a5432-b9f1-4b15-84c4-df9d6276a414/volumes" Feb 19 04:00:25.720531 master-0 kubenswrapper[33867]: I0219 04:00:25.720402 33867 scope.go:117] "RemoveContainer" containerID="ca02b8215bf57351b97a8ecbc5b9bfa88dd85ff58f844b1b36f5d8345ce48644" Feb 19 04:01:00.213707 master-0 kubenswrapper[33867]: I0219 04:01:00.213604 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524561-tvfxv"] Feb 19 04:01:00.214507 master-0 kubenswrapper[33867]: E0219 04:01:00.214463 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e628432-bcdd-424c-823c-b87d45b58936" containerName="collect-profiles" Feb 19 04:01:00.214507 master-0 kubenswrapper[33867]: I0219 04:01:00.214500 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e628432-bcdd-424c-823c-b87d45b58936" containerName="collect-profiles" Feb 19 04:01:00.215190 master-0 kubenswrapper[33867]: I0219 04:01:00.215155 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e628432-bcdd-424c-823c-b87d45b58936" containerName="collect-profiles" Feb 19 04:01:00.216666 master-0 kubenswrapper[33867]: I0219 04:01:00.216626 33867 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.227513 master-0 kubenswrapper[33867]: I0219 04:01:00.227367 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524561-tvfxv"] Feb 19 04:01:00.334208 master-0 kubenswrapper[33867]: I0219 04:01:00.334125 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-combined-ca-bundle\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.334825 master-0 kubenswrapper[33867]: I0219 04:01:00.334751 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-fernet-keys\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.335033 master-0 kubenswrapper[33867]: I0219 04:01:00.334969 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-config-data\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.335223 master-0 kubenswrapper[33867]: I0219 04:01:00.335183 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc4rt\" (UniqueName: \"kubernetes.io/projected/662a12e3-dd7a-41ee-b454-24d4ce5e891c-kube-api-access-cc4rt\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.440547 master-0 kubenswrapper[33867]: I0219 04:01:00.440481 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-combined-ca-bundle\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.440806 master-0 kubenswrapper[33867]: I0219 04:01:00.440772 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-fernet-keys\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.440923 master-0 kubenswrapper[33867]: I0219 04:01:00.440886 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-config-data\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.441722 master-0 kubenswrapper[33867]: I0219 04:01:00.441555 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc4rt\" (UniqueName: \"kubernetes.io/projected/662a12e3-dd7a-41ee-b454-24d4ce5e891c-kube-api-access-cc4rt\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " 
pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.445749 master-0 kubenswrapper[33867]: I0219 04:01:00.445703 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-config-data\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.446237 master-0 kubenswrapper[33867]: I0219 04:01:00.446210 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-fernet-keys\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.447328 master-0 kubenswrapper[33867]: I0219 04:01:00.447298 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-combined-ca-bundle\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.461904 master-0 kubenswrapper[33867]: I0219 04:01:00.461863 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc4rt\" (UniqueName: \"kubernetes.io/projected/662a12e3-dd7a-41ee-b454-24d4ce5e891c-kube-api-access-cc4rt\") pod \"keystone-cron-29524561-tvfxv\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:00.549057 master-0 kubenswrapper[33867]: I0219 04:01:00.548897 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:01.054650 master-0 kubenswrapper[33867]: I0219 04:01:01.053938 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524561-tvfxv"] Feb 19 04:01:01.058940 master-0 kubenswrapper[33867]: W0219 04:01:01.058844 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod662a12e3_dd7a_41ee_b454_24d4ce5e891c.slice/crio-88bfdfd0e2f730ecb43dac0e5f6624c60fdd9dfed8d81e4bd10dd177839ac6af WatchSource:0}: Error finding container 88bfdfd0e2f730ecb43dac0e5f6624c60fdd9dfed8d81e4bd10dd177839ac6af: Status 404 returned error can't find the container with id 88bfdfd0e2f730ecb43dac0e5f6624c60fdd9dfed8d81e4bd10dd177839ac6af Feb 19 04:01:02.080047 master-0 kubenswrapper[33867]: I0219 04:01:02.079971 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524561-tvfxv" event={"ID":"662a12e3-dd7a-41ee-b454-24d4ce5e891c","Type":"ContainerStarted","Data":"fd074a68111e56f113455b2fb0d478ce9ad0bac50278506aee3c7f74e983bfcf"} Feb 19 04:01:02.080047 master-0 kubenswrapper[33867]: I0219 04:01:02.080040 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524561-tvfxv" event={"ID":"662a12e3-dd7a-41ee-b454-24d4ce5e891c","Type":"ContainerStarted","Data":"88bfdfd0e2f730ecb43dac0e5f6624c60fdd9dfed8d81e4bd10dd177839ac6af"} Feb 19 04:01:02.115715 master-0 kubenswrapper[33867]: I0219 04:01:02.115613 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524561-tvfxv" podStartSLOduration=2.115585896 podStartE2EDuration="2.115585896s" podCreationTimestamp="2026-02-19 04:01:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 04:01:02.099605601 +0000 UTC m=+2267.396276242" watchObservedRunningTime="2026-02-19 04:01:02.115585896 +0000 UTC m=+2267.412256517" Feb 19 04:01:04.112776 master-0 kubenswrapper[33867]: I0219 04:01:04.112713 33867 generic.go:334] "Generic (PLEG): container finished" podID="662a12e3-dd7a-41ee-b454-24d4ce5e891c" containerID="fd074a68111e56f113455b2fb0d478ce9ad0bac50278506aee3c7f74e983bfcf" exitCode=0 Feb 19 04:01:04.112776 master-0 kubenswrapper[33867]: I0219 04:01:04.112764 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524561-tvfxv" event={"ID":"662a12e3-dd7a-41ee-b454-24d4ce5e891c","Type":"ContainerDied","Data":"fd074a68111e56f113455b2fb0d478ce9ad0bac50278506aee3c7f74e983bfcf"} Feb 19 04:01:05.608916 master-0 kubenswrapper[33867]: I0219 04:01:05.608841 33867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:05.714202 master-0 kubenswrapper[33867]: I0219 04:01:05.714149 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-config-data\") pod \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " Feb 19 04:01:05.714625 master-0 kubenswrapper[33867]: I0219 04:01:05.714608 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-combined-ca-bundle\") pod \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " Feb 19 04:01:05.714777 master-0 kubenswrapper[33867]: I0219 04:01:05.714765 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-fernet-keys\") pod \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " Feb 19 04:01:05.715847 master-0 kubenswrapper[33867]: I0219 04:01:05.715830 33867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc4rt\" (UniqueName: \"kubernetes.io/projected/662a12e3-dd7a-41ee-b454-24d4ce5e891c-kube-api-access-cc4rt\") pod \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\" (UID: \"662a12e3-dd7a-41ee-b454-24d4ce5e891c\") " Feb 19 04:01:05.719572 master-0 kubenswrapper[33867]: I0219 04:01:05.719533 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "662a12e3-dd7a-41ee-b454-24d4ce5e891c" (UID: "662a12e3-dd7a-41ee-b454-24d4ce5e891c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 04:01:05.720451 master-0 kubenswrapper[33867]: I0219 04:01:05.720421 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662a12e3-dd7a-41ee-b454-24d4ce5e891c-kube-api-access-cc4rt" (OuterVolumeSpecName: "kube-api-access-cc4rt") pod "662a12e3-dd7a-41ee-b454-24d4ce5e891c" (UID: "662a12e3-dd7a-41ee-b454-24d4ce5e891c"). InnerVolumeSpecName "kube-api-access-cc4rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 04:01:05.754450 master-0 kubenswrapper[33867]: I0219 04:01:05.754314 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "662a12e3-dd7a-41ee-b454-24d4ce5e891c" (UID: "662a12e3-dd7a-41ee-b454-24d4ce5e891c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 04:01:05.793123 master-0 kubenswrapper[33867]: I0219 04:01:05.793046 33867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-config-data" (OuterVolumeSpecName: "config-data") pod "662a12e3-dd7a-41ee-b454-24d4ce5e891c" (UID: "662a12e3-dd7a-41ee-b454-24d4ce5e891c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 04:01:05.819632 master-0 kubenswrapper[33867]: I0219 04:01:05.819586 33867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 19 04:01:05.819770 master-0 kubenswrapper[33867]: I0219 04:01:05.819635 33867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 19 04:01:05.819770 master-0 kubenswrapper[33867]: I0219 04:01:05.819650 33867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/662a12e3-dd7a-41ee-b454-24d4ce5e891c-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 19 04:01:05.819770 master-0 kubenswrapper[33867]: I0219 04:01:05.819668 33867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc4rt\" (UniqueName: \"kubernetes.io/projected/662a12e3-dd7a-41ee-b454-24d4ce5e891c-kube-api-access-cc4rt\") on node \"master-0\" DevicePath \"\"" Feb 19 04:01:06.138289 master-0 kubenswrapper[33867]: I0219 04:01:06.138208 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524561-tvfxv" event={"ID":"662a12e3-dd7a-41ee-b454-24d4ce5e891c","Type":"ContainerDied","Data":"88bfdfd0e2f730ecb43dac0e5f6624c60fdd9dfed8d81e4bd10dd177839ac6af"} Feb 19 04:01:06.141533 master-0 kubenswrapper[33867]: I0219 04:01:06.138575 33867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524561-tvfxv" Feb 19 04:01:06.141533 master-0 kubenswrapper[33867]: I0219 04:01:06.139547 33867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88bfdfd0e2f730ecb43dac0e5f6624c60fdd9dfed8d81e4bd10dd177839ac6af" Feb 19 04:06:54.894017 master-0 kubenswrapper[33867]: E0219 04:06:54.893935 33867 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.32.10:45496->192.168.32.10:46329: read tcp 192.168.32.10:45496->192.168.32.10:46329: read: connection reset by peer Feb 19 04:07:29.545343 master-0 kubenswrapper[33867]: I0219 04:07:29.545240 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n97ff/must-gather-z7vjc"] Feb 19 04:07:29.546231 master-0 kubenswrapper[33867]: E0219 04:07:29.546083 33867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662a12e3-dd7a-41ee-b454-24d4ce5e891c" containerName="keystone-cron" Feb 19 04:07:29.546231 master-0 kubenswrapper[33867]: I0219 04:07:29.546107 33867 state_mem.go:107] "Deleted CPUSet assignment" podUID="662a12e3-dd7a-41ee-b454-24d4ce5e891c" containerName="keystone-cron" Feb 19 04:07:29.546513 master-0 kubenswrapper[33867]: I0219 04:07:29.546478 33867 memory_manager.go:354] "RemoveStaleState removing state" podUID="662a12e3-dd7a-41ee-b454-24d4ce5e891c" containerName="keystone-cron" Feb 19 04:07:29.548163 master-0 kubenswrapper[33867]: I0219 04:07:29.548111 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n97ff/must-gather-z7vjc" Feb 19 04:07:29.555230 master-0 kubenswrapper[33867]: I0219 04:07:29.552224 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n97ff"/"kube-root-ca.crt" Feb 19 04:07:29.555230 master-0 kubenswrapper[33867]: I0219 04:07:29.552582 33867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n97ff"/"openshift-service-ca.crt" Feb 19 04:07:29.564037 master-0 kubenswrapper[33867]: I0219 04:07:29.563960 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n97ff/must-gather-rhkkk"] Feb 19 04:07:29.569464 master-0 kubenswrapper[33867]: I0219 04:07:29.567395 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n97ff/must-gather-rhkkk" Feb 19 04:07:29.609893 master-0 kubenswrapper[33867]: I0219 04:07:29.608999 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n97ff/must-gather-z7vjc"] Feb 19 04:07:29.643292 master-0 kubenswrapper[33867]: I0219 04:07:29.640148 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n97ff/must-gather-rhkkk"] Feb 19 04:07:29.712532 master-0 kubenswrapper[33867]: I0219 04:07:29.712436 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s47ql\" (UniqueName: \"kubernetes.io/projected/ebbd2899-61e7-4d26-ba19-8e33d697e034-kube-api-access-s47ql\") pod \"must-gather-rhkkk\" (UID: \"ebbd2899-61e7-4d26-ba19-8e33d697e034\") " pod="openshift-must-gather-n97ff/must-gather-rhkkk" Feb 19 04:07:29.712762 master-0 kubenswrapper[33867]: I0219 04:07:29.712595 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjq2p\" (UniqueName: \"kubernetes.io/projected/f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f-kube-api-access-rjq2p\") pod \"must-gather-z7vjc\" (UID: \"f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f\") " pod="openshift-must-gather-n97ff/must-gather-z7vjc" Feb 19 04:07:29.712891 master-0 kubenswrapper[33867]: I0219 04:07:29.712838 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f-must-gather-output\") pod \"must-gather-z7vjc\" (UID: \"f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f\") " pod="openshift-must-gather-n97ff/must-gather-z7vjc" Feb 19 04:07:29.713029 master-0 kubenswrapper[33867]: I0219 04:07:29.712986 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ebbd2899-61e7-4d26-ba19-8e33d697e034-must-gather-output\") pod \"must-gather-rhkkk\" (UID: \"ebbd2899-61e7-4d26-ba19-8e33d697e034\") " pod="openshift-must-gather-n97ff/must-gather-rhkkk" Feb 19 04:07:29.815302 master-0 kubenswrapper[33867]: I0219 04:07:29.815143 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ebbd2899-61e7-4d26-ba19-8e33d697e034-must-gather-output\") pod \"must-gather-rhkkk\" (UID: \"ebbd2899-61e7-4d26-ba19-8e33d697e034\") " pod="openshift-must-gather-n97ff/must-gather-rhkkk" Feb 19 04:07:29.815513 master-0 kubenswrapper[33867]: I0219 04:07:29.815339 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s47ql\" (UniqueName: \"kubernetes.io/projected/ebbd2899-61e7-4d26-ba19-8e33d697e034-kube-api-access-s47ql\") pod \"must-gather-rhkkk\" (UID: \"ebbd2899-61e7-4d26-ba19-8e33d697e034\") " pod="openshift-must-gather-n97ff/must-gather-rhkkk" Feb 19 04:07:29.815513 master-0 kubenswrapper[33867]: I0219 04:07:29.815460 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjq2p\" (UniqueName: \"kubernetes.io/projected/f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f-kube-api-access-rjq2p\") pod \"must-gather-z7vjc\" (UID: \"f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f\") " pod="openshift-must-gather-n97ff/must-gather-z7vjc" Feb 19 04:07:29.815619 master-0 kubenswrapper[33867]: I0219 04:07:29.815533 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f-must-gather-output\") pod \"must-gather-z7vjc\" (UID: \"f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f\") " pod="openshift-must-gather-n97ff/must-gather-z7vjc" Feb 19 04:07:29.815671 master-0 kubenswrapper[33867]: I0219 04:07:29.815629 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ebbd2899-61e7-4d26-ba19-8e33d697e034-must-gather-output\") pod \"must-gather-rhkkk\" (UID: \"ebbd2899-61e7-4d26-ba19-8e33d697e034\") " pod="openshift-must-gather-n97ff/must-gather-rhkkk" Feb 19 04:07:29.816090 master-0 kubenswrapper[33867]: I0219 04:07:29.816059 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f-must-gather-output\") pod \"must-gather-z7vjc\" (UID: \"f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f\") " pod="openshift-must-gather-n97ff/must-gather-z7vjc" Feb 19 04:07:29.840291 master-0 kubenswrapper[33867]: I0219 04:07:29.838982 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjq2p\" (UniqueName: \"kubernetes.io/projected/f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f-kube-api-access-rjq2p\") pod \"must-gather-z7vjc\" (UID: \"f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f\") " pod="openshift-must-gather-n97ff/must-gather-z7vjc" Feb 19 04:07:29.847287 master-0 kubenswrapper[33867]: I0219 04:07:29.844935 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s47ql\" (UniqueName: \"kubernetes.io/projected/ebbd2899-61e7-4d26-ba19-8e33d697e034-kube-api-access-s47ql\") pod \"must-gather-rhkkk\" (UID: \"ebbd2899-61e7-4d26-ba19-8e33d697e034\") " pod="openshift-must-gather-n97ff/must-gather-rhkkk" Feb 19 04:07:29.910542 master-0 kubenswrapper[33867]: I0219 04:07:29.910467 33867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n97ff/must-gather-z7vjc" Feb 19 04:07:29.949824 master-0 kubenswrapper[33867]: I0219 04:07:29.949758 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n97ff/must-gather-rhkkk" Feb 19 04:07:30.441981 master-0 kubenswrapper[33867]: I0219 04:07:30.441459 33867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 04:07:30.445803 master-0 kubenswrapper[33867]: I0219 04:07:30.445715 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n97ff/must-gather-rhkkk"] Feb 19 04:07:30.461961 master-0 kubenswrapper[33867]: I0219 04:07:30.461894 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n97ff/must-gather-z7vjc"] Feb 19 04:07:31.454666 master-0 kubenswrapper[33867]: I0219 04:07:31.454611 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n97ff/must-gather-z7vjc" event={"ID":"f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f","Type":"ContainerStarted","Data":"c4e23b1def94b361b46329a66cbe49068d3d1f3e0ad0117523bf181a92c0fc3b"} Feb 19 04:07:31.456761 master-0 kubenswrapper[33867]: I0219 04:07:31.456686 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n97ff/must-gather-rhkkk" event={"ID":"ebbd2899-61e7-4d26-ba19-8e33d697e034","Type":"ContainerStarted","Data":"6e94794929ceba86fe5605ca9a4dc1c4f5105a1c53360409550c0698ff0d4e48"} Feb 19 04:07:32.470024 master-0 kubenswrapper[33867]: I0219 04:07:32.469964 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n97ff/must-gather-rhkkk" event={"ID":"ebbd2899-61e7-4d26-ba19-8e33d697e034","Type":"ContainerStarted","Data":"7b30b6fbf843baf312fb1c5a982319c1e1cb2ad8e6b9c65f396f6441c6f7e18b"} Feb 19 04:07:32.470663 master-0 kubenswrapper[33867]: I0219 04:07:32.470031 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n97ff/must-gather-rhkkk" event={"ID":"ebbd2899-61e7-4d26-ba19-8e33d697e034","Type":"ContainerStarted","Data":"519c90bc08e7c9a6192b06b49207fe5b42e95eb839ff20603b3d287a8987da5c"} Feb 19 04:07:32.563799 master-0 kubenswrapper[33867]: I0219 04:07:32.563581 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-n97ff/must-gather-rhkkk" podStartSLOduration=2.552774083 podStartE2EDuration="3.563526674s" podCreationTimestamp="2026-02-19 04:07:29 +0000 UTC" firstStartedPulling="2026-02-19 04:07:30.441367056 +0000 UTC m=+2655.738037667" lastFinishedPulling="2026-02-19 04:07:31.452119647 +0000 UTC m=+2656.748790258" observedRunningTime="2026-02-19 04:07:32.54804961 +0000 UTC m=+2657.844720231" watchObservedRunningTime="2026-02-19 04:07:32.563526674 +0000 UTC m=+2657.860197305" Feb 19 04:07:34.118057 master-0 kubenswrapper[33867]: I0219 04:07:34.117964 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-qjgq9_61abb34a-08f0-4438-9a89-c712b2048878/cluster-version-operator/1.log" Feb 19 04:07:34.501786 master-0 kubenswrapper[33867]: I0219 04:07:34.501736 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-qjgq9_61abb34a-08f0-4438-9a89-c712b2048878/cluster-version-operator/2.log" Feb 19 04:07:37.544136 master-0 kubenswrapper[33867]: I0219 04:07:37.543437 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-5zg2v_f17a36fe-e4a5-4651-b03e-f4b9741b5ad1/nmstate-console-plugin/0.log" Feb 19 04:07:37.572521 master-0 kubenswrapper[33867]: I0219 04:07:37.572458 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-vjzqq_72a71435-3d39-4b6c-9c20-76deaf9da6fe/nmstate-handler/0.log" Feb 19 04:07:37.647321 master-0 kubenswrapper[33867]: I0219 04:07:37.646627 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-fbnqd_04643ffe-ea18-4ce9-b5f2-8c8ee3a649f3/nmstate-metrics/0.log" Feb 19 04:07:37.662576 master-0 kubenswrapper[33867]: I0219 04:07:37.662534 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-fbnqd_04643ffe-ea18-4ce9-b5f2-8c8ee3a649f3/kube-rbac-proxy/0.log" Feb 19 04:07:37.692928 master-0 kubenswrapper[33867]: I0219 04:07:37.692871 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-s4btw_81d21677-453c-479c-a6c2-b7663fd32b72/nmstate-operator/0.log" Feb 19 04:07:37.710803 master-0 kubenswrapper[33867]: I0219 04:07:37.710568 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-47dd4_72878d47-67e4-4070-906c-a3749e8120f9/nmstate-webhook/0.log" Feb 19 04:07:37.752269 master-0 kubenswrapper[33867]: I0219 04:07:37.752137 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mn6gp_c002fdf0-badd-4f0d-b300-460fb9a65d89/controller/0.log" Feb 19 04:07:37.761425 master-0 kubenswrapper[33867]: I0219 04:07:37.759002 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mn6gp_c002fdf0-badd-4f0d-b300-460fb9a65d89/kube-rbac-proxy/0.log" Feb 19 04:07:37.797307 master-0 kubenswrapper[33867]: I0219 04:07:37.795711 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/controller/0.log" Feb 19 04:07:38.835458 master-0 kubenswrapper[33867]: I0219 04:07:38.835328 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/frr/0.log" Feb 19 04:07:38.909217 master-0 kubenswrapper[33867]: I0219 04:07:38.908105 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/reloader/0.log" Feb 19 04:07:38.914816 master-0 kubenswrapper[33867]: I0219 04:07:38.914762 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/frr-metrics/0.log" Feb 19 04:07:38.930704 master-0 kubenswrapper[33867]: I0219 04:07:38.930630 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/kube-rbac-proxy/0.log" Feb 19 04:07:38.944808 master-0 kubenswrapper[33867]: I0219 04:07:38.944740 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/kube-rbac-proxy-frr/0.log" Feb 19 04:07:38.949586 master-0 kubenswrapper[33867]: I0219 04:07:38.949545 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/cp-frr-files/0.log" Feb 19 04:07:38.958138 master-0 kubenswrapper[33867]: I0219 04:07:38.958085 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/cp-reloader/0.log" Feb 19 04:07:38.978825 master-0 kubenswrapper[33867]: I0219 04:07:38.978751 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/cp-metrics/0.log" Feb 19 04:07:38.991383 master-0 kubenswrapper[33867]: I0219 04:07:38.991246 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-n7lx6_22564019-4f1e-40cb-a6d2-b6ac86a13ca1/frr-k8s-webhook-server/0.log" Feb 19 04:07:39.023065 master-0 kubenswrapper[33867]: I0219 04:07:39.022942 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57d69997cd-bxnmk_48dfa1c5-695c-45aa-aca5-f01672f08790/manager/0.log" Feb 19 04:07:39.036283 master-0 kubenswrapper[33867]: I0219 04:07:39.031488 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-667b5d6768-wjdrc_becd4fad-b917-478c-83bf-0b5d0a6770f3/webhook-server/0.log" Feb 19 04:07:39.361795 master-0 kubenswrapper[33867]: I0219 04:07:39.361699 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-psdfl_ce9b802d-6caa-4b6e-9d4d-72b056257685/speaker/0.log" Feb 19 04:07:39.368488 master-0 kubenswrapper[33867]: I0219 04:07:39.368430 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-psdfl_ce9b802d-6caa-4b6e-9d4d-72b056257685/kube-rbac-proxy/0.log" Feb 19 04:07:39.405983 master-0 kubenswrapper[33867]: I0219 04:07:39.405943 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcdctl/0.log" Feb 19 04:07:39.642782 master-0 kubenswrapper[33867]: I0219 04:07:39.642727 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd/0.log" Feb 19 04:07:39.655994 master-0 kubenswrapper[33867]: I0219 04:07:39.655947 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-metrics/0.log" Feb 19 04:07:39.671666 master-0 kubenswrapper[33867]: I0219 04:07:39.671601 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-readyz/0.log" Feb 19 04:07:39.691415 master-0 kubenswrapper[33867]: I0219 04:07:39.691360 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-rev/0.log" Feb 19 04:07:39.711406 master-0 kubenswrapper[33867]: I0219 04:07:39.711352 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/setup/0.log" Feb 19 04:07:39.720139 master-0 kubenswrapper[33867]: I0219 04:07:39.720066 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-ensure-env-vars/0.log" Feb 19 04:07:39.740661 master-0 kubenswrapper[33867]: I0219 04:07:39.740610 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-resources-copy/0.log" Feb 19 04:07:39.782138 master-0 kubenswrapper[33867]: I0219 04:07:39.782066 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_2561caa0-5f79-496e-8fa7-a9692dca20be/installer/0.log" Feb 19 04:07:39.822203 master-0 kubenswrapper[33867]: I0219 04:07:39.822141 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_installer-2-master-0_60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3/installer/0.log" Feb 19 04:07:40.563701 master-0 kubenswrapper[33867]: I0219 04:07:40.563643 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-cc89c88f8-mm225_ba929f18-b86c-4404-9448-cabb59ddc4cc/oauth-openshift/0.log" Feb 19 04:07:41.065045 master-0 kubenswrapper[33867]: I0219 04:07:41.064970 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-tw8v2_6e244dcb-df20-4a7c-bc0a-14ba63c54a9f/assisted-installer-controller/0.log" Feb 19 04:07:41.730123 master-0 kubenswrapper[33867]: I0219 04:07:41.730068 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-cjz9l_b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/authentication-operator/0.log" Feb 19 04:07:41.762760 master-0 kubenswrapper[33867]: I0219 04:07:41.762688 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-cjz9l_b4c6dc8c-32c7-4c29-9ee8-a231d0bc2651/authentication-operator/1.log" Feb 19 04:07:42.619736 master-0 kubenswrapper[33867]: I0219 04:07:42.619238 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n97ff/must-gather-z7vjc" event={"ID":"f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f","Type":"ContainerStarted","Data":"17bbb0415b0559ced19cbf39738dfec7071a7af187ef931fa0fcd13d3225ee24"} Feb 19 04:07:42.619736 master-0 kubenswrapper[33867]: I0219 04:07:42.619330 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n97ff/must-gather-z7vjc" event={"ID":"f4e810c2-4d9d-4e73-ae8f-dcbb81c9cd3f","Type":"ContainerStarted","Data":"e369e36ce40b3cd0c657c3cc13823494d3e0a57d9e026524b6111a1e90aa9cb8"} Feb 19 04:07:42.641815 master-0 kubenswrapper[33867]: I0219 04:07:42.641719 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-n97ff/must-gather-z7vjc" podStartSLOduration=2.185739436 podStartE2EDuration="13.641696242s" podCreationTimestamp="2026-02-19 04:07:29 +0000 UTC" firstStartedPulling="2026-02-19 04:07:30.45719556 +0000 UTC m=+2655.753866181" lastFinishedPulling="2026-02-19 04:07:41.913152376 +0000 UTC m=+2667.209822987" observedRunningTime="2026-02-19 04:07:42.638433528 +0000 UTC m=+2667.935104139" watchObservedRunningTime="2026-02-19 04:07:42.641696242 +0000 UTC m=+2667.938366843" Feb 19 04:07:42.708189 master-0 kubenswrapper[33867]: I0219 04:07:42.708060 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7b65dc9fcb-t6jnq_76470062-ab83-47ed-a669-deeb71996548/router/4.log" Feb 19 04:07:42.711771 master-0 kubenswrapper[33867]: I0219 04:07:42.711689 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7b65dc9fcb-t6jnq_76470062-ab83-47ed-a669-deeb71996548/router/3.log" Feb 19 04:07:43.087438 master-0 kubenswrapper[33867]: I0219 04:07:43.087311 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7"] Feb 19 04:07:43.089520 master-0 kubenswrapper[33867]: I0219 04:07:43.089500 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.104169 master-0 kubenswrapper[33867]: I0219 04:07:43.104096 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7"] Feb 19 04:07:43.227285 master-0 kubenswrapper[33867]: I0219 04:07:43.226607 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-proc\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.227285 master-0 kubenswrapper[33867]: I0219 04:07:43.226736 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn5hq\" (UniqueName: \"kubernetes.io/projected/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-kube-api-access-rn5hq\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.227285 master-0 kubenswrapper[33867]: I0219 04:07:43.226762 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-sys\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.227285 master-0 kubenswrapper[33867]: I0219 04:07:43.226819 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-podres\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.227285 master-0 kubenswrapper[33867]: I0219 04:07:43.226867 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-lib-modules\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.329175 master-0 kubenswrapper[33867]: I0219 04:07:43.329047 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn5hq\" (UniqueName: \"kubernetes.io/projected/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-kube-api-access-rn5hq\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.329175 master-0 kubenswrapper[33867]: I0219 04:07:43.329130 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-sys\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.329490 master-0 kubenswrapper[33867]: I0219 04:07:43.329228 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: 
\"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-podres\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.329490 master-0 kubenswrapper[33867]: I0219 04:07:43.329317 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-lib-modules\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.329490 master-0 kubenswrapper[33867]: I0219 04:07:43.329426 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-sys\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.329637 master-0 kubenswrapper[33867]: I0219 04:07:43.329497 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-proc\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.329637 master-0 kubenswrapper[33867]: I0219 04:07:43.329506 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-podres\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.329637 master-0 kubenswrapper[33867]: I0219 04:07:43.329553 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-lib-modules\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.329822 master-0 kubenswrapper[33867]: I0219 04:07:43.329782 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-proc\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.350022 master-0 kubenswrapper[33867]: I0219 04:07:43.349885 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn5hq\" (UniqueName: \"kubernetes.io/projected/8afbb2bd-df84-4cb4-a380-cce84f4c34d1-kube-api-access-rn5hq\") pod \"perf-node-gather-daemonset-tvdt7\" (UID: \"8afbb2bd-df84-4cb4-a380-cce84f4c34d1\") " pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.405362 master-0 kubenswrapper[33867]: I0219 04:07:43.405303 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:43.628909 master-0 kubenswrapper[33867]: I0219 04:07:43.628855 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-85f97c6ffb-qfcnk_ace60ebd-e405-4fd2-96fe-7b16a9e11a07/oauth-apiserver/0.log" Feb 19 04:07:43.650616 master-0 kubenswrapper[33867]: I0219 04:07:43.650559 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-85f97c6ffb-qfcnk_ace60ebd-e405-4fd2-96fe-7b16a9e11a07/fix-audit-permissions/0.log" Feb 19 04:07:43.879933 master-0 kubenswrapper[33867]: I0219 04:07:43.879815 33867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7"] Feb 19 04:07:44.412037 master-0 kubenswrapper[33867]: I0219 04:07:44.411980 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-pd8lj_33bb562f-84e7-4fcb-b008-416c09a5ecf0/kube-rbac-proxy/0.log" Feb 19 04:07:44.442239 master-0 kubenswrapper[33867]: I0219 04:07:44.442185 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-pd8lj_33bb562f-84e7-4fcb-b008-416c09a5ecf0/cluster-autoscaler-operator/0.log" Feb 19 04:07:44.456551 master-0 kubenswrapper[33867]: I0219 04:07:44.456498 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/2.log" Feb 19 04:07:44.458562 master-0 kubenswrapper[33867]: I0219 04:07:44.458507 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/cluster-baremetal-operator/3.log" Feb 19 04:07:44.469519 master-0 kubenswrapper[33867]: I0219 04:07:44.469488 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-9vgg7_af5828ea-090f-4c8f-90e6-c4e405e69ec5/baremetal-kube-rbac-proxy/0.log" Feb 19 04:07:44.489028 master-0 kubenswrapper[33867]: I0219 04:07:44.488981 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5_0664d88f-f697-4182-93cd-f208ff6f3ac2/control-plane-machine-set-operator/0.log" Feb 19 04:07:44.490023 master-0 kubenswrapper[33867]: I0219 04:07:44.489993 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5_0664d88f-f697-4182-93cd-f208ff6f3ac2/control-plane-machine-set-operator/1.log" Feb 19 04:07:44.507506 master-0 kubenswrapper[33867]: I0219 04:07:44.507466 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-prbs7_255784ad-b52a-4c5c-ad15-278865ee2ccb/kube-rbac-proxy/0.log" Feb 19 04:07:44.526434 master-0 kubenswrapper[33867]: I0219 04:07:44.526399 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-prbs7_255784ad-b52a-4c5c-ad15-278865ee2ccb/machine-api-operator/0.log" Feb 19 04:07:44.649298 master-0 kubenswrapper[33867]: I0219 04:07:44.649212 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" 
event={"ID":"8afbb2bd-df84-4cb4-a380-cce84f4c34d1","Type":"ContainerStarted","Data":"5dae3ba674981f55aa35464f661a695b0e0b4b2550900db711570b6d50ff10e0"} Feb 19 04:07:44.649838 master-0 kubenswrapper[33867]: I0219 04:07:44.649813 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" event={"ID":"8afbb2bd-df84-4cb4-a380-cce84f4c34d1","Type":"ContainerStarted","Data":"112ea4aeeb465abd95b79057b9a76a923ca9fd445196d1a0cb2a4bcdac718d07"} Feb 19 04:07:44.649958 master-0 kubenswrapper[33867]: I0219 04:07:44.649940 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:44.674434 master-0 kubenswrapper[33867]: I0219 04:07:44.673558 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" podStartSLOduration=1.673538722 podStartE2EDuration="1.673538722s" podCreationTimestamp="2026-02-19 04:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 04:07:44.668096206 +0000 UTC m=+2669.964766817" watchObservedRunningTime="2026-02-19 04:07:44.673538722 +0000 UTC m=+2669.970209333" Feb 19 04:07:45.754036 master-0 kubenswrapper[33867]: I0219 04:07:45.753968 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/cluster-cloud-controller-manager/0.log" Feb 19 04:07:45.754623 master-0 kubenswrapper[33867]: I0219 04:07:45.754566 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/cluster-cloud-controller-manager/1.log" Feb 19 04:07:45.771650 master-0 kubenswrapper[33867]: I0219 04:07:45.771603 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/config-sync-controllers/1.log" Feb 19 04:07:45.774788 master-0 kubenswrapper[33867]: I0219 04:07:45.774759 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/config-sync-controllers/0.log" Feb 19 04:07:45.791838 master-0 kubenswrapper[33867]: I0219 04:07:45.791789 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/5.log" Feb 19 04:07:45.791981 master-0 kubenswrapper[33867]: I0219 04:07:45.791956 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_af2be4f9-f632-4a72-8f39-c96954403edc/kube-rbac-proxy/6.log" Feb 19 04:07:46.335921 master-0 kubenswrapper[33867]: I0219 04:07:46.335855 33867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n97ff/master-0-debug-4mmw8"] Feb 19 04:07:46.338562 master-0 kubenswrapper[33867]: I0219 04:07:46.338527 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n97ff/master-0-debug-4mmw8" Feb 19 04:07:46.418423 master-0 kubenswrapper[33867]: I0219 04:07:46.418323 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd-host\") pod \"master-0-debug-4mmw8\" (UID: \"b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd\") " pod="openshift-must-gather-n97ff/master-0-debug-4mmw8" Feb 19 04:07:46.418635 master-0 kubenswrapper[33867]: I0219 04:07:46.418549 33867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6pn\" (UniqueName: \"kubernetes.io/projected/b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd-kube-api-access-cg6pn\") pod \"master-0-debug-4mmw8\" (UID: \"b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd\") " pod="openshift-must-gather-n97ff/master-0-debug-4mmw8" Feb 19 04:07:46.520435 master-0 kubenswrapper[33867]: I0219 04:07:46.520373 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd-host\") pod \"master-0-debug-4mmw8\" (UID: \"b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd\") " pod="openshift-must-gather-n97ff/master-0-debug-4mmw8" Feb 19 04:07:46.520668 master-0 kubenswrapper[33867]: I0219 04:07:46.520547 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd-host\") pod \"master-0-debug-4mmw8\" (UID: \"b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd\") " pod="openshift-must-gather-n97ff/master-0-debug-4mmw8" Feb 19 04:07:46.520668 master-0 kubenswrapper[33867]: I0219 04:07:46.520577 33867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg6pn\" (UniqueName: \"kubernetes.io/projected/b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd-kube-api-access-cg6pn\") pod \"master-0-debug-4mmw8\" (UID: \"b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd\") " pod="openshift-must-gather-n97ff/master-0-debug-4mmw8" Feb 19 04:07:46.539989 master-0 kubenswrapper[33867]: I0219 04:07:46.539942 33867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg6pn\" (UniqueName: \"kubernetes.io/projected/b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd-kube-api-access-cg6pn\") pod \"master-0-debug-4mmw8\" (UID: \"b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd\") " pod="openshift-must-gather-n97ff/master-0-debug-4mmw8" Feb 19 04:07:46.657224 master-0 kubenswrapper[33867]: I0219 04:07:46.657168 33867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n97ff/master-0-debug-4mmw8" Feb 19 04:07:46.683510 master-0 kubenswrapper[33867]: W0219 04:07:46.683449 33867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5af0ae3_3c8c_4dde_a6aa_0dda6b8f7bcd.slice/crio-105196d0c528b1781375427a58fc6db58f968586479712159c187279debe15be WatchSource:0}: Error finding container 105196d0c528b1781375427a58fc6db58f968586479712159c187279debe15be: Status 404 returned error can't find the container with id 105196d0c528b1781375427a58fc6db58f968586479712159c187279debe15be Feb 19 04:07:47.625936 master-0 kubenswrapper[33867]: I0219 04:07:47.625859 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-p2hfn_858a717b-a44e-4b8d-9974-7451a89cf104/kube-rbac-proxy/0.log" Feb 19 04:07:47.657633 master-0 kubenswrapper[33867]: I0219 04:07:47.657564 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-p2hfn_858a717b-a44e-4b8d-9974-7451a89cf104/cloud-credential-operator/0.log" Feb 19 04:07:47.694184 master-0 kubenswrapper[33867]: I0219 04:07:47.694080 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n97ff/master-0-debug-4mmw8" event={"ID":"b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd","Type":"ContainerStarted","Data":"105196d0c528b1781375427a58fc6db58f968586479712159c187279debe15be"} Feb 19 04:07:48.270945 master-0 kubenswrapper[33867]: I0219 04:07:48.270882 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-054a4-api-0_da327fb4-7852-4866-bb8f-8b2930854e24/cinder-054a4-api-log/0.log" Feb 19 04:07:48.304977 master-0 kubenswrapper[33867]: I0219 04:07:48.304918 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-054a4-api-0_da327fb4-7852-4866-bb8f-8b2930854e24/cinder-api/0.log" Feb 19 04:07:48.377622 master-0 kubenswrapper[33867]: I0219 04:07:48.376820 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-054a4-backup-0_00b58cd8-030f-4e5f-9808-edd4e1e31d8f/cinder-backup/0.log" Feb 19 04:07:48.394177 master-0 kubenswrapper[33867]: I0219 04:07:48.394123 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-054a4-backup-0_00b58cd8-030f-4e5f-9808-edd4e1e31d8f/probe/0.log" Feb 19 04:07:48.468937 master-0 kubenswrapper[33867]: I0219 04:07:48.468876 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-054a4-scheduler-0_70bd69f0-6b7c-44b0-8e7d-27edf886efcf/cinder-scheduler/0.log" Feb 19 04:07:48.505161 master-0 kubenswrapper[33867]: I0219 04:07:48.498319 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-054a4-scheduler-0_70bd69f0-6b7c-44b0-8e7d-27edf886efcf/probe/0.log" Feb 19 04:07:48.571902 master-0 kubenswrapper[33867]: I0219 04:07:48.571790 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-054a4-volume-lvm-iscsi-0_cd60be62-5e2e-4bee-a46e-a202e42adad9/cinder-volume/0.log" Feb 19 04:07:48.593038 master-0 kubenswrapper[33867]: I0219 04:07:48.592984 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-054a4-volume-lvm-iscsi-0_cd60be62-5e2e-4bee-a46e-a202e42adad9/probe/0.log" Feb 19 04:07:48.607562 master-0 kubenswrapper[33867]: I0219 04:07:48.607513 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-7587d49f7f-lcx7j_2d51ba3f-9ce6-49b9-a314-7d212c55ff8e/dnsmasq-dns/0.log" Feb 19 04:07:48.624312 master-0 kubenswrapper[33867]: I0219 04:07:48.624212 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7587d49f7f-lcx7j_2d51ba3f-9ce6-49b9-a314-7d212c55ff8e/init/0.log" Feb 19 04:07:48.705470 master-0 kubenswrapper[33867]: I0219 04:07:48.705430 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-fa7ca-default-external-api-0_115b48b9-768e-4e24-ba50-2d47e507b21b/glance-log/0.log" Feb 19 04:07:48.720542 master-0 kubenswrapper[33867]: I0219 04:07:48.720503 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-fa7ca-default-external-api-0_115b48b9-768e-4e24-ba50-2d47e507b21b/glance-httpd/0.log" Feb 19 04:07:48.818535 master-0 kubenswrapper[33867]: I0219 04:07:48.818484 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-fa7ca-default-internal-api-0_5f80387f-955e-4858-ad6b-fcfe3585e929/glance-log/0.log" Feb 19 04:07:48.835932 master-0 kubenswrapper[33867]: I0219 04:07:48.835766 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-fa7ca-default-internal-api-0_5f80387f-955e-4858-ad6b-fcfe3585e929/glance-httpd/0.log" Feb 19 04:07:48.858864 master-0 kubenswrapper[33867]: I0219 04:07:48.858773 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-6ddb5778b6-l9w7m_e3524599-68ae-4932-8b2f-7a5e277ad153/ironic-api-log/0.log" Feb 19 04:07:48.918203 master-0 kubenswrapper[33867]: I0219 04:07:48.918155 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-6ddb5778b6-l9w7m_e3524599-68ae-4932-8b2f-7a5e277ad153/ironic-api/0.log" Feb 19 04:07:48.926719 master-0 kubenswrapper[33867]: I0219 04:07:48.926183 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-6ddb5778b6-l9w7m_e3524599-68ae-4932-8b2f-7a5e277ad153/init/0.log" Feb 19 04:07:48.953738 master-0 kubenswrapper[33867]: I0219 04:07:48.953686 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_9c830f8b-3d33-4879-91b9-bd374a1e695b/ironic-conductor/0.log" Feb 19 04:07:48.961635 master-0 kubenswrapper[33867]: I0219 04:07:48.961589 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_9c830f8b-3d33-4879-91b9-bd374a1e695b/httpboot/0.log" Feb 19 04:07:48.970496 master-0 kubenswrapper[33867]: I0219 04:07:48.970455 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_9c830f8b-3d33-4879-91b9-bd374a1e695b/dnsmasq/0.log" Feb 19 04:07:48.978282 master-0 kubenswrapper[33867]: I0219 04:07:48.977240 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_9c830f8b-3d33-4879-91b9-bd374a1e695b/init/0.log" Feb 19 04:07:48.985291 master-0 kubenswrapper[33867]: I0219 04:07:48.985264 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_9c830f8b-3d33-4879-91b9-bd374a1e695b/ironic-python-agent-init/0.log" Feb 19 04:07:49.784649 master-0 kubenswrapper[33867]: I0219 04:07:49.784574 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/8.log" Feb 19 04:07:49.797652 master-0 kubenswrapper[33867]: I0219 04:07:49.796983 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-config-operator/9.log" Feb 19 04:07:49.812284 master-0 kubenswrapper[33867]: I0219 04:07:49.812210 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-zn8c7_78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda/openshift-api/0.log" Feb 19 04:07:49.925645 master-0 kubenswrapper[33867]: I0219 04:07:49.925601 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_9c830f8b-3d33-4879-91b9-bd374a1e695b/pxe-init/0.log" Feb 19 04:07:49.993016 master-0 kubenswrapper[33867]: I0219 04:07:49.992970 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868/ironic-inspector-httpd/0.log" Feb 19 04:07:50.044916 master-0 kubenswrapper[33867]: I0219 04:07:50.043389 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868/ironic-inspector/0.log" Feb 19 04:07:50.054802 master-0 kubenswrapper[33867]: I0219 04:07:50.054753 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868/inspector-httpboot/0.log" Feb 19 04:07:50.061025 master-0 kubenswrapper[33867]: I0219 04:07:50.060980 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868/ramdisk-logs/0.log" Feb 19 04:07:50.072413 master-0 kubenswrapper[33867]: I0219 04:07:50.072378 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868/inspector-dnsmasq/0.log" Feb 19 04:07:50.082574 master-0 kubenswrapper[33867]: I0219 04:07:50.082523 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868/ironic-python-agent-init/0.log" Feb 19 04:07:50.099863 master-0 kubenswrapper[33867]: I0219 04:07:50.099026 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_bf69a2c3-1eac-4d93-9e5c-7d0d1ba34868/inspector-pxe-init/0.log" Feb 19 04:07:50.110062 master-0 kubenswrapper[33867]: I0219 04:07:50.110012 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-64cdd9cf48-dg7ws_6a7f405f-ed33-4311-84a9-6aaf1fd4dadb/ironic-neutron-agent/2.log" Feb 19 04:07:50.111448 master-0 kubenswrapper[33867]: I0219 04:07:50.111412 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-64cdd9cf48-dg7ws_6a7f405f-ed33-4311-84a9-6aaf1fd4dadb/ironic-neutron-agent/1.log" Feb 19 04:07:50.187847 master-0 kubenswrapper[33867]: I0219 04:07:50.187810 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-858d748b68-dmpbz_26a4d640-b07f-4b27-91e2-bc4449a4213c/keystone-api/0.log" Feb 19 04:07:50.195714 master-0 kubenswrapper[33867]: I0219 04:07:50.195676 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524561-tvfxv_662a12e3-dd7a-41ee-b454-24d4ce5e891c/keystone-cron/0.log" Feb 19 04:07:50.899278 master-0 kubenswrapper[33867]: I0219 04:07:50.898394 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-rb2hx_5f7e6789-3b0b-4117-9d25-55a671e42f93/console-operator/0.log" Feb 19 04:07:51.861196 
master-0 kubenswrapper[33867]: I0219 04:07:51.861136 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fb999cb7-wzrtl_0c4eb386-9996-4d66-affc-b9a55882cc66/console/0.log" Feb 19 04:07:51.913297 master-0 kubenswrapper[33867]: I0219 04:07:51.911627 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-955b69498-bdf7d_6505205d-23d4-4c99-83ac-e82d298a2805/download-server/0.log" Feb 19 04:07:52.968928 master-0 kubenswrapper[33867]: I0219 04:07:52.968859 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-dnfs9_494087b2-b532-4c62-89d5-b88a152fa5db/cluster-storage-operator/0.log" Feb 19 04:07:52.989009 master-0 kubenswrapper[33867]: I0219 04:07:52.988959 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/5.log" Feb 19 04:07:52.989143 master-0 kubenswrapper[33867]: I0219 04:07:52.989113 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-6trsd_c8f325fb-0075-4a18-ba7e-669ab19bc91a/snapshot-controller/6.log" Feb 19 04:07:53.014356 master-0 kubenswrapper[33867]: I0219 04:07:53.014298 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-mtqxj_d6fae256-6a2e-45e7-8f2f-d471f46ad3b2/csi-snapshot-controller-operator/0.log" Feb 19 04:07:53.031926 master-0 kubenswrapper[33867]: I0219 04:07:53.031896 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-mtqxj_d6fae256-6a2e-45e7-8f2f-d471f46ad3b2/csi-snapshot-controller-operator/1.log" Feb 19 04:07:53.455354 master-0 kubenswrapper[33867]: I0219 04:07:53.453852 33867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-n97ff/perf-node-gather-daemonset-tvdt7" Feb 19 04:07:53.913399 master-0 kubenswrapper[33867]: I0219 04:07:53.913351 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-8c7d49845-jlnvw_67f4e002-26fb-41e3-abdb-f4928b6c561f/dns-operator/0.log" Feb 19 04:07:53.922799 master-0 kubenswrapper[33867]: I0219 04:07:53.922588 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-8c7d49845-jlnvw_67f4e002-26fb-41e3-abdb-f4928b6c561f/kube-rbac-proxy/0.log" Feb 19 04:07:54.901785 master-0 kubenswrapper[33867]: I0219 04:07:54.901734 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-clndn_75c58162-a0ba-40f4-8894-38f17dc2fb6d/dns/0.log" Feb 19 04:07:54.925130 master-0 kubenswrapper[33867]: I0219 04:07:54.925090 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-clndn_75c58162-a0ba-40f4-8894-38f17dc2fb6d/kube-rbac-proxy/0.log" Feb 19 04:07:54.944225 master-0 kubenswrapper[33867]: I0219 04:07:54.944176 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-4qvfn_67624ad2-babb-4b0e-9599-99325c286b22/dns-node-resolver/0.log" Feb 19 04:07:55.810118 master-0 kubenswrapper[33867]: I0219 04:07:55.810043 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/6.log" Feb 19 04:07:55.840647 master-0 kubenswrapper[33867]: I0219 04:07:55.840587 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-r7r6p_4c3267e5-390a-40a3-bff8-1d1d81fb9a17/etcd-operator/7.log" Feb 19 04:07:56.634660 master-0 kubenswrapper[33867]: I0219 04:07:56.634621 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcdctl/0.log" Feb 19 04:07:56.944276 master-0 kubenswrapper[33867]: I0219 04:07:56.941716 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd/0.log" Feb 19 04:07:56.962420 master-0 kubenswrapper[33867]: I0219 04:07:56.962369 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-metrics/0.log" Feb 19 04:07:56.975934 master-0 kubenswrapper[33867]: I0219 04:07:56.975799 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-readyz/0.log" Feb 19 04:07:56.989088 master-0 kubenswrapper[33867]: I0219 04:07:56.989007 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-rev/0.log" Feb 19 04:07:57.002605 master-0 kubenswrapper[33867]: I0219 04:07:57.002513 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/setup/0.log" Feb 19 04:07:57.019333 master-0 kubenswrapper[33867]: I0219 04:07:57.019293 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-ensure-env-vars/0.log" Feb 19 04:07:57.032119 master-0 kubenswrapper[33867]: I0219 04:07:57.032059 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-resources-copy/0.log" Feb 19 04:07:57.087339 master-0 kubenswrapper[33867]: I0219 04:07:57.087288 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_2561caa0-5f79-496e-8fa7-a9692dca20be/installer/0.log" Feb 19 04:07:57.172692 master-0 kubenswrapper[33867]: I0219 04:07:57.172404 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_60ce7e75-5190-49a1-b1b7-b3adf0bdf2e3/installer/0.log" Feb 19 04:07:58.184519 master-0 kubenswrapper[33867]: I0219 04:07:58.184457 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-779979bdf7-cfdqh_a59746bb-7d76-4fd7-8323-5b92be63afb9/cluster-image-registry-operator/1.log" Feb 19 04:07:58.204895 master-0 kubenswrapper[33867]: I0219 04:07:58.204828 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-779979bdf7-cfdqh_a59746bb-7d76-4fd7-8323-5b92be63afb9/cluster-image-registry-operator/2.log" Feb 19 04:07:58.219057 master-0 kubenswrapper[33867]: I0219 04:07:58.219020 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-zkwlh_0cd2ce90-1a60-499b-86d6-7662ce03af65/node-ca/0.log" Feb 19 04:07:59.394657 master-0 kubenswrapper[33867]: I0219 04:07:59.392619 33867 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/4.log" Feb 19 04:07:59.408835 master-0 kubenswrapper[33867]: I0219 04:07:59.408783 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/ingress-operator/5.log" Feb 19 04:07:59.425752 master-0 kubenswrapper[33867]: I0219 04:07:59.425621 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-qcd49_9ff96ce8-6427-4a42-afa6-8b8bc778f094/kube-rbac-proxy/0.log" Feb 19 04:08:00.268002 master-0 kubenswrapper[33867]: I0219 04:08:00.267952 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-bbwkg_a676c43c-4e0a-4826-86c1-288260611b09/serve-healthcheck-canary/0.log" Feb 19 04:08:00.414666 master-0 kubenswrapper[33867]: I0219 04:08:00.414615 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_eb7d7589-8708-4f52-8e83-f9a47aeb438a/memcached/0.log" Feb 19 04:08:00.585780 master-0 kubenswrapper[33867]: I0219 04:08:00.585664 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-747c56bd5-sdd55_b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d/neutron-api/0.log" Feb 19 04:08:00.606820 master-0 kubenswrapper[33867]: I0219 04:08:00.601039 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-747c56bd5-sdd55_b04636a1-ffe6-4157-b2d5-01f9ca6f3c5d/neutron-httpd/0.log" Feb 19 04:08:00.697239 master-0 kubenswrapper[33867]: I0219 04:08:00.697045 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_40213efd-1773-4c03-a61c-869bd88ccd6f/nova-api-log/0.log" Feb 19 04:08:00.874515 master-0 kubenswrapper[33867]: I0219 04:08:00.872890 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_40213efd-1773-4c03-a61c-869bd88ccd6f/nova-api-api/0.log" Feb 19 04:08:00.975575 master-0 kubenswrapper[33867]: I0219 04:08:00.975517 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5f8f8802-8e26-45eb-aef9-8599459686af/nova-cell0-conductor-conductor/0.log" Feb 19 04:08:01.067395 master-0 kubenswrapper[33867]: I0219 04:08:01.067330 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-compute-ironic-compute-0_23d36214-70ab-4c0a-837d-5a5585b130ac/nova-cell1-compute-ironic-compute-compute/0.log" Feb 19 04:08:01.164506 master-0 kubenswrapper[33867]: I0219 04:08:01.162276 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-59b498fcfb-2dvkr_5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4/insights-operator/1.log" Feb 19 04:08:01.164506 master-0 kubenswrapper[33867]: I0219 04:08:01.162407 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-59b498fcfb-2dvkr_5e75f0c1-7a52-4ad6-9b0d-b34ca87c3aa4/insights-operator/0.log" Feb 19 04:08:01.197791 master-0 kubenswrapper[33867]: I0219 04:08:01.197746 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_bfd1b147-452f-48ca-b3cb-5239ffabec00/nova-cell1-conductor-conductor/0.log" Feb 19 04:08:01.261928 master-0 kubenswrapper[33867]: I0219 04:08:01.261868 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7e06e99e-0862-48e2-b640-8fd02ed338dd/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 04:08:01.350424 master-0 kubenswrapper[33867]: I0219 04:08:01.349857 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fd8c008b-b321-46e8-9c93-6793dd4e084c/nova-metadata-log/0.log" Feb 19 04:08:01.875634 master-0 kubenswrapper[33867]: I0219 04:08:01.874690 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fd8c008b-b321-46e8-9c93-6793dd4e084c/nova-metadata-metadata/0.log" Feb 19 04:08:01.966096 master-0 kubenswrapper[33867]: I0219 04:08:01.966048 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c6c61cb6-e0c9-4f3a-96e4-b220c4998ddd/nova-scheduler-scheduler/0.log" Feb 19 04:08:01.996138 master-0 kubenswrapper[33867]: I0219 04:08:01.995220 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d2176305-52ee-4689-a5f6-1aea00a75d4f/galera/0.log" Feb 19 04:08:02.007343 master-0 kubenswrapper[33867]: I0219 04:08:02.005820 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d2176305-52ee-4689-a5f6-1aea00a75d4f/mysql-bootstrap/0.log" Feb 19 04:08:02.027490 master-0 kubenswrapper[33867]: I0219 04:08:02.027440 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1/galera/0.log" Feb 19 04:08:02.038102 master-0 kubenswrapper[33867]: I0219 04:08:02.038081 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9c1e5d2e-cdbb-47e7-afc4-a780fb7c41c1/mysql-bootstrap/0.log" Feb 19 04:08:02.044467 master-0 kubenswrapper[33867]: I0219 04:08:02.044281 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0297a953-f1ca-434c-a52b-bd94277921f3/openstackclient/0.log" Feb 19 04:08:02.069302 master-0 kubenswrapper[33867]: I0219 04:08:02.069266 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-96jnp_c0e34f5c-1fd8-4f4c-92f7-9bceded5aefd/ovn-controller/0.log" Feb 19 04:08:02.080474 master-0 kubenswrapper[33867]: I0219 04:08:02.080410 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ghz27_bbab2bac-a2eb-4080-b1d7-bf9eb49dde8e/openstack-network-exporter/0.log" Feb 19 04:08:02.092458 master-0 kubenswrapper[33867]: I0219 04:08:02.092426 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pfn5s_8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0/ovsdb-server/0.log" Feb 19 04:08:02.106921 master-0 kubenswrapper[33867]: I0219 04:08:02.105289 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pfn5s_8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0/ovs-vswitchd/0.log" Feb 19 04:08:02.110041 master-0 kubenswrapper[33867]: I0219 04:08:02.109983 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pfn5s_8cbb39c7-0295-4f1e-99d1-d9bea8ea45a0/ovsdb-server-init/0.log" Feb 19 04:08:02.127869 master-0 kubenswrapper[33867]: I0219 04:08:02.127820 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ad8bcfb7-310e-45ca-96a7-e12671866348/ovn-northd/0.log" Feb 19 04:08:02.138435 master-0 kubenswrapper[33867]: I0219 04:08:02.138379 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_ad8bcfb7-310e-45ca-96a7-e12671866348/openstack-network-exporter/0.log" Feb 19 04:08:02.156522 master-0 kubenswrapper[33867]: I0219 04:08:02.155130 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_79fcdad5-1265-4636-af92-ede5356e0f6a/ovsdbserver-nb/0.log" Feb 19 04:08:02.162973 master-0 kubenswrapper[33867]: I0219 04:08:02.162926 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_79fcdad5-1265-4636-af92-ede5356e0f6a/openstack-network-exporter/0.log" Feb 19 04:08:02.185280 master-0 kubenswrapper[33867]: I0219 04:08:02.185041 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_467115dc-5bd5-496c-87cb-a0c278e45a72/ovsdbserver-sb/0.log" Feb 19 04:08:02.201069 master-0 kubenswrapper[33867]: I0219 04:08:02.201007 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_467115dc-5bd5-496c-87cb-a0c278e45a72/openstack-network-exporter/0.log" Feb 19 04:08:02.256080 master-0 kubenswrapper[33867]: I0219 04:08:02.256027 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-659db66d4-26vz9_ef247635-c161-4402-b9f0-6b9e4e9bc42b/placement-log/0.log" Feb 19 04:08:02.287246 master-0 kubenswrapper[33867]: I0219 04:08:02.287210 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-659db66d4-26vz9_ef247635-c161-4402-b9f0-6b9e4e9bc42b/placement-api/0.log" Feb 19 04:08:02.308847 master-0 kubenswrapper[33867]: I0219 04:08:02.308745 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d16fae78-0a83-4085-a9b5-896938c7d1b3/rabbitmq/0.log" Feb 19 04:08:02.323595 master-0 kubenswrapper[33867]: I0219 04:08:02.323552 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d16fae78-0a83-4085-a9b5-896938c7d1b3/setup-container/0.log" Feb 19 04:08:02.364338 master-0 kubenswrapper[33867]: I0219 04:08:02.364290 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9e764204-85e6-4bcf-bdd4-6c24e78d4e3b/rabbitmq/0.log" Feb 19 04:08:02.388005 master-0 kubenswrapper[33867]: I0219 04:08:02.387963 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9e764204-85e6-4bcf-bdd4-6c24e78d4e3b/setup-container/0.log" Feb 19 04:08:02.455449 master-0 kubenswrapper[33867]: I0219 04:08:02.455405 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6b57897cc4-nd9ff_810cab61-d654-4926-a83f-51af67acafd0/proxy-httpd/0.log" Feb 19 04:08:02.469097 master-0 kubenswrapper[33867]: I0219 04:08:02.469054 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6b57897cc4-nd9ff_810cab61-d654-4926-a83f-51af67acafd0/proxy-server/0.log" Feb 19 04:08:02.479428 master-0 kubenswrapper[33867]: I0219 04:08:02.477953 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-xnwxz_6bdc624f-2b02-4f65-93e7-49b26b1da384/swift-ring-rebalance/0.log" Feb 19 04:08:02.505482 master-0 kubenswrapper[33867]: I0219 04:08:02.505450 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/account-server/0.log" Feb 19 04:08:02.522335 master-0 kubenswrapper[33867]: I0219 04:08:02.522203 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/account-replicator/0.log" Feb 19 04:08:02.527502 master-0 kubenswrapper[33867]: I0219 04:08:02.527467 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/account-auditor/0.log" Feb 19 04:08:02.537475 master-0 kubenswrapper[33867]: I0219 04:08:02.537434 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/account-reaper/0.log" Feb 19 04:08:02.545078 master-0 kubenswrapper[33867]: I0219 04:08:02.545047 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/container-server/0.log" Feb 19 04:08:02.560623 master-0 kubenswrapper[33867]: I0219 04:08:02.560525 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/container-replicator/0.log" Feb 19 04:08:02.581773 master-0 kubenswrapper[33867]: I0219 04:08:02.581702 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/container-auditor/0.log" Feb 19 04:08:02.591886 master-0 kubenswrapper[33867]: I0219 04:08:02.591836 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/container-updater/0.log" Feb 19 04:08:02.612211 master-0 kubenswrapper[33867]: I0219 04:08:02.612101 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/object-server/0.log" Feb 19 04:08:02.629540 master-0 kubenswrapper[33867]: I0219 04:08:02.629485 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/object-replicator/0.log" Feb 19 04:08:02.644130 master-0 kubenswrapper[33867]: I0219 04:08:02.644085 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/object-auditor/0.log" Feb 19 04:08:02.663162 master-0 kubenswrapper[33867]: I0219 04:08:02.663126 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/object-updater/0.log" Feb 19 04:08:02.698389 master-0 kubenswrapper[33867]: I0219 04:08:02.698342 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/object-expirer/0.log" Feb 19 04:08:02.751300 master-0 kubenswrapper[33867]: I0219 04:08:02.751244 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/rsync/0.log" Feb 19 04:08:02.760613 master-0 kubenswrapper[33867]: I0219 04:08:02.760589 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aea865d8-841e-4326-9833-ee28b81c18e1/swift-recon-cron/0.log" Feb 19 04:08:02.882220 master-0 kubenswrapper[33867]: I0219 04:08:02.882179 33867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n97ff/master-0-debug-4mmw8" event={"ID":"b5af0ae3-3c8c-4dde-a6aa-0dda6b8f7bcd","Type":"ContainerStarted","Data":"1470753135fd3fbac83f71a9f18527990077b6480027a4c09f0dbed113292d47"} Feb 19 04:08:02.914467 master-0 kubenswrapper[33867]: I0219 04:08:02.914205 33867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-must-gather-n97ff/master-0-debug-4mmw8" podStartSLOduration=1.789781775 podStartE2EDuration="16.914187806s" podCreationTimestamp="2026-02-19 04:07:46 +0000 UTC" firstStartedPulling="2026-02-19 04:07:46.689092625 +0000 UTC m=+2671.985763256" lastFinishedPulling="2026-02-19 04:08:01.813498676 +0000 UTC m=+2687.110169287" observedRunningTime="2026-02-19 04:08:02.905976 +0000 UTC m=+2688.202646611" watchObservedRunningTime="2026-02-19 04:08:02.914187806 +0000 UTC m=+2688.210858417" Feb 19 04:08:03.502802 master-0 kubenswrapper[33867]: I0219 04:08:03.502709 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_f575aff7-687b-4fd9-8d50-22cee2314277/alertmanager/0.log" Feb 19 04:08:03.518821 master-0 kubenswrapper[33867]: I0219 04:08:03.518790 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_f575aff7-687b-4fd9-8d50-22cee2314277/config-reloader/0.log" Feb 19 04:08:03.533738 master-0 kubenswrapper[33867]: I0219 04:08:03.533677 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_f575aff7-687b-4fd9-8d50-22cee2314277/kube-rbac-proxy-web/0.log" Feb 19 04:08:03.543941 master-0 kubenswrapper[33867]: I0219 04:08:03.543908 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_f575aff7-687b-4fd9-8d50-22cee2314277/kube-rbac-proxy/0.log" Feb 19 04:08:03.558662 master-0 kubenswrapper[33867]: I0219 04:08:03.558612 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_f575aff7-687b-4fd9-8d50-22cee2314277/kube-rbac-proxy-metric/0.log" Feb 19 04:08:03.570640 master-0 kubenswrapper[33867]: I0219 04:08:03.570592 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_f575aff7-687b-4fd9-8d50-22cee2314277/prom-label-proxy/0.log" Feb 19 04:08:03.585548 master-0 kubenswrapper[33867]: I0219 04:08:03.585441 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_f575aff7-687b-4fd9-8d50-22cee2314277/init-config-reloader/0.log" Feb 19 04:08:03.680315 master-0 kubenswrapper[33867]: I0219 04:08:03.680267 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-6bb6d78bf-2vmxq_80c48134-cb22-4cf9-b076-ce39af2f4113/cluster-monitoring-operator/0.log" Feb 19 04:08:03.703829 master-0 kubenswrapper[33867]: I0219 04:08:03.703738 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-m7mdb_ec677f3d-06c4-4cf4-9f24-69894b9a9118/kube-state-metrics/0.log" Feb 19 04:08:03.722135 master-0 kubenswrapper[33867]: I0219 04:08:03.722072 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-m7mdb_ec677f3d-06c4-4cf4-9f24-69894b9a9118/kube-rbac-proxy-main/0.log" Feb 19 04:08:03.743854 master-0 kubenswrapper[33867]: I0219 04:08:03.743792 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-m7mdb_ec677f3d-06c4-4cf4-9f24-69894b9a9118/kube-rbac-proxy-self/0.log" Feb 19 04:08:03.760414 master-0 kubenswrapper[33867]: I0219 04:08:03.760387 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-66b5846d67-vlng5_50074e69-cff8-46dc-bd2b-e3dd2f696a9d/metrics-server/0.log" Feb 19 04:08:03.778856 
master-0 kubenswrapper[33867]: I0219 04:08:03.778822 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-84ff5d7bd8-cdwlm_1c2c9876-4b0b-429d-a3bb-339b1c0bfc75/monitoring-plugin/0.log" Feb 19 04:08:03.794637 master-0 kubenswrapper[33867]: I0219 04:08:03.794598 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-8g26m_8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/node-exporter/0.log" Feb 19 04:08:03.806578 master-0 kubenswrapper[33867]: I0219 04:08:03.806529 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-8g26m_8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/kube-rbac-proxy/0.log" Feb 19 04:08:03.820083 master-0 kubenswrapper[33867]: I0219 04:08:03.820050 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-8g26m_8ec16b3a-5d5c-46fe-87f0-89f93a2775ed/init-textfile/0.log" Feb 19 04:08:03.837710 master-0 kubenswrapper[33867]: I0219 04:08:03.837652 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-4ccjj_43560ec3-3526-40e1-aeb7-e3137a99171d/kube-rbac-proxy-main/0.log" Feb 19 04:08:03.855879 master-0 kubenswrapper[33867]: I0219 04:08:03.855771 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-4ccjj_43560ec3-3526-40e1-aeb7-e3137a99171d/kube-rbac-proxy-self/0.log" Feb 19 04:08:03.879061 master-0 kubenswrapper[33867]: I0219 04:08:03.879014 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-4ccjj_43560ec3-3526-40e1-aeb7-e3137a99171d/openshift-state-metrics/0.log" Feb 19 04:08:03.922475 master-0 kubenswrapper[33867]: I0219 04:08:03.922436 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_9b569743-a475-4bd4-aba2-c4d14f8b82f0/prometheus/0.log" Feb 19 04:08:03.939142 master-0 kubenswrapper[33867]: I0219 04:08:03.939106 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_9b569743-a475-4bd4-aba2-c4d14f8b82f0/config-reloader/0.log" Feb 19 04:08:03.952109 master-0 kubenswrapper[33867]: I0219 04:08:03.952079 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_9b569743-a475-4bd4-aba2-c4d14f8b82f0/thanos-sidecar/0.log" Feb 19 04:08:03.965657 master-0 kubenswrapper[33867]: I0219 04:08:03.965568 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_9b569743-a475-4bd4-aba2-c4d14f8b82f0/kube-rbac-proxy-web/0.log" Feb 19 04:08:03.993947 master-0 kubenswrapper[33867]: I0219 04:08:03.993885 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_9b569743-a475-4bd4-aba2-c4d14f8b82f0/kube-rbac-proxy/0.log" Feb 19 04:08:04.010669 master-0 kubenswrapper[33867]: I0219 04:08:04.010582 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_9b569743-a475-4bd4-aba2-c4d14f8b82f0/kube-rbac-proxy-thanos/0.log" Feb 19 04:08:04.026786 master-0 kubenswrapper[33867]: I0219 04:08:04.026695 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_9b569743-a475-4bd4-aba2-c4d14f8b82f0/init-config-reloader/0.log" Feb 19 04:08:04.055731 master-0 kubenswrapper[33867]: I0219 04:08:04.055694 33867 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-754bc4d665-tkbxr_e2e81865-21fa-4e35-a870-738c13ac5b70/prometheus-operator/0.log" Feb 19 04:08:04.067151 master-0 kubenswrapper[33867]: I0219 04:08:04.067105 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-754bc4d665-tkbxr_e2e81865-21fa-4e35-a870-738c13ac5b70/kube-rbac-proxy/0.log" Feb 19 04:08:04.086407 master-0 kubenswrapper[33867]: I0219 04:08:04.086194 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-75d56db95f-4ms92_ed2b5ced-d986-4622-9e0a-d39363629408/prometheus-operator-admission-webhook/0.log" Feb 19 04:08:04.110062 master-0 kubenswrapper[33867]: I0219 04:08:04.109954 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6df4d685bd-g7b8m_943c09ec-a2d2-40df-bbdc-351a30b33d79/telemeter-client/1.log" Feb 19 04:08:04.110288 master-0 kubenswrapper[33867]: I0219 04:08:04.110267 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6df4d685bd-g7b8m_943c09ec-a2d2-40df-bbdc-351a30b33d79/telemeter-client/2.log" Feb 19 04:08:04.133208 master-0 kubenswrapper[33867]: I0219 04:08:04.133170 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6df4d685bd-g7b8m_943c09ec-a2d2-40df-bbdc-351a30b33d79/reload/0.log" Feb 19 04:08:04.147219 master-0 kubenswrapper[33867]: I0219 04:08:04.146460 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6df4d685bd-g7b8m_943c09ec-a2d2-40df-bbdc-351a30b33d79/kube-rbac-proxy/0.log" Feb 19 04:08:04.167963 master-0 kubenswrapper[33867]: I0219 04:08:04.167914 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-c565b98d-x497s_848b658f-4754-4f9e-b017-b8655e26679d/thanos-query/0.log" Feb 19 04:08:04.182886 master-0 kubenswrapper[33867]: I0219 04:08:04.182841 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-c565b98d-x497s_848b658f-4754-4f9e-b017-b8655e26679d/kube-rbac-proxy-web/0.log" Feb 19 04:08:04.194320 master-0 kubenswrapper[33867]: I0219 04:08:04.194280 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-c565b98d-x497s_848b658f-4754-4f9e-b017-b8655e26679d/kube-rbac-proxy/0.log" Feb 19 04:08:04.205980 master-0 kubenswrapper[33867]: I0219 04:08:04.205939 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-c565b98d-x497s_848b658f-4754-4f9e-b017-b8655e26679d/prom-label-proxy/0.log" Feb 19 04:08:04.218329 master-0 kubenswrapper[33867]: I0219 04:08:04.218191 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-c565b98d-x497s_848b658f-4754-4f9e-b017-b8655e26679d/kube-rbac-proxy-rules/0.log" Feb 19 04:08:04.231712 master-0 kubenswrapper[33867]: I0219 04:08:04.231676 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-c565b98d-x497s_848b658f-4754-4f9e-b017-b8655e26679d/kube-rbac-proxy-metrics/0.log" Feb 19 04:08:06.470629 master-0 kubenswrapper[33867]: I0219 04:08:06.470527 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mn6gp_c002fdf0-badd-4f0d-b300-460fb9a65d89/controller/0.log" Feb 19 04:08:06.485150 master-0 
kubenswrapper[33867]: I0219 04:08:06.485096 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mn6gp_c002fdf0-badd-4f0d-b300-460fb9a65d89/kube-rbac-proxy/0.log" Feb 19 04:08:06.506542 master-0 kubenswrapper[33867]: I0219 04:08:06.506502 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/controller/0.log" Feb 19 04:08:07.839285 master-0 kubenswrapper[33867]: I0219 04:08:07.839207 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/frr/0.log" Feb 19 04:08:07.853198 master-0 kubenswrapper[33867]: I0219 04:08:07.853144 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/reloader/0.log" Feb 19 04:08:07.861709 master-0 kubenswrapper[33867]: I0219 04:08:07.861668 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/frr-metrics/0.log" Feb 19 04:08:07.874605 master-0 kubenswrapper[33867]: I0219 04:08:07.874552 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/kube-rbac-proxy/0.log" Feb 19 04:08:07.891157 master-0 kubenswrapper[33867]: I0219 04:08:07.891115 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/kube-rbac-proxy-frr/0.log" Feb 19 04:08:07.903098 master-0 kubenswrapper[33867]: I0219 04:08:07.903051 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/cp-frr-files/0.log" Feb 19 04:08:07.914514 master-0 kubenswrapper[33867]: I0219 04:08:07.914469 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/cp-reloader/0.log" Feb 19 04:08:07.927023 master-0 kubenswrapper[33867]: I0219 04:08:07.926842 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/cp-metrics/0.log" Feb 19 04:08:07.944993 master-0 kubenswrapper[33867]: I0219 04:08:07.944940 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-n7lx6_22564019-4f1e-40cb-a6d2-b6ac86a13ca1/frr-k8s-webhook-server/0.log" Feb 19 04:08:07.971336 master-0 kubenswrapper[33867]: I0219 04:08:07.971281 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57d69997cd-bxnmk_48dfa1c5-695c-45aa-aca5-f01672f08790/manager/0.log" Feb 19 04:08:07.988530 master-0 kubenswrapper[33867]: I0219 04:08:07.988473 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-667b5d6768-wjdrc_becd4fad-b917-478c-83bf-0b5d0a6770f3/webhook-server/0.log" Feb 19 04:08:08.395495 master-0 kubenswrapper[33867]: I0219 04:08:08.395426 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-psdfl_ce9b802d-6caa-4b6e-9d4d-72b056257685/speaker/0.log" Feb 19 04:08:08.408330 master-0 kubenswrapper[33867]: I0219 04:08:08.407996 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-psdfl_ce9b802d-6caa-4b6e-9d4d-72b056257685/kube-rbac-proxy/0.log" Feb 19 04:08:09.373200 master-0 kubenswrapper[33867]: I0219 
04:08:09.373137 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m_8ac7934e-8e29-421c-bf84-6a24044ec1d2/extract/0.log" Feb 19 04:08:09.383391 master-0 kubenswrapper[33867]: I0219 04:08:09.383324 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m_8ac7934e-8e29-421c-bf84-6a24044ec1d2/util/0.log" Feb 19 04:08:09.390246 master-0 kubenswrapper[33867]: I0219 04:08:09.390214 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m_8ac7934e-8e29-421c-bf84-6a24044ec1d2/pull/0.log" Feb 19 04:08:10.740109 master-0 kubenswrapper[33867]: I0219 04:08:10.738056 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-dcpwb_2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/cluster-node-tuning-operator/0.log" Feb 19 04:08:10.741563 master-0 kubenswrapper[33867]: I0219 04:08:10.740819 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-dcpwb_2acaff2d-b9d0-4ed5-9de3-48029eaa8ce5/cluster-node-tuning-operator/1.log" Feb 19 04:08:10.766864 master-0 kubenswrapper[33867]: I0219 04:08:10.766800 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-4jl4c_78702d1c-b5ab-4e00-92da-cb2513a72024/tuned/0.log" Feb 19 04:08:12.391597 master-0 kubenswrapper[33867]: I0219 04:08:12.391531 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-k6f69_81f513e3-9d43-4ca5-a960-a057b6284bf8/manager/0.log" Feb 19 04:08:12.420039 master-0 kubenswrapper[33867]: I0219 04:08:12.419993 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/5.log" Feb 19 04:08:12.459909 master-0 kubenswrapper[33867]: I0219 04:08:12.459852 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-lbfvq_4714ef51-2d24-4938-8c58-80c1485a368b/kube-apiserver-operator/6.log" Feb 19 04:08:13.022292 master-0 kubenswrapper[33867]: I0219 04:08:13.021131 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mn6gp_c002fdf0-badd-4f0d-b300-460fb9a65d89/controller/0.log" Feb 19 04:08:13.033677 master-0 kubenswrapper[33867]: I0219 04:08:13.033615 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mn6gp_c002fdf0-badd-4f0d-b300-460fb9a65d89/kube-rbac-proxy/0.log" Feb 19 04:08:13.064130 master-0 kubenswrapper[33867]: I0219 04:08:13.064080 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-thsdk_af7e58f8-89c5-400f-b73c-5eb73727e8c7/manager/0.log" Feb 19 04:08:13.073651 master-0 kubenswrapper[33867]: I0219 04:08:13.073612 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-fwz4m_26bcada5-2616-4d6f-82d6-0659611454af/manager/0.log" Feb 19 04:08:13.083124 master-0 kubenswrapper[33867]: I0219 04:08:13.083092 33867 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/controller/0.log" Feb 19 04:08:13.203726 master-0 kubenswrapper[33867]: I0219 04:08:13.203683 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-tp2t2_1f9c99f7-4fe4-4fdf-989d-f17588d7ffe3/manager/0.log" Feb 19 04:08:13.228089 master-0 kubenswrapper[33867]: I0219 04:08:13.227110 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-rpb8v_4d354ad0-8588-4913-8189-ad94abd86af5/manager/0.log" Feb 19 04:08:13.241226 master-0 kubenswrapper[33867]: I0219 04:08:13.240845 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-t8q5h_1dbd1105-8bb5-4010-9ec9-58c2dd1f35e9/manager/0.log" Feb 19 04:08:13.578385 master-0 kubenswrapper[33867]: I0219 04:08:13.578339 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5f879c76b6-nzsnk_1554c3da-f309-402e-8d61-c12b1ef616bf/manager/0.log" Feb 19 04:08:13.611733 master-0 kubenswrapper[33867]: I0219 04:08:13.611678 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1bddb3a1-41bd-4314-bfb0-3c72ca14200f/installer/0.log" Feb 19 04:08:13.641564 master-0 kubenswrapper[33867]: I0219 04:08:13.641497 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_3fab5bbd-672c-4e18-9c1e-438e2360bc54/installer/0.log" Feb 19 04:08:13.672305 master-0 kubenswrapper[33867]: I0219 04:08:13.672271 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-7-master-0_a7adce7b-f079-455e-8377-84c40cfc2557/installer/0.log" Feb 19 04:08:13.687283 master-0 kubenswrapper[33867]: I0219 04:08:13.687053 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-trv7d_aa4296cf-041c-4133-a2d9-8a0becd98502/manager/0.log" Feb 19 04:08:13.814048 master-0 kubenswrapper[33867]: I0219 04:08:13.813986 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-8wkzz_5e2af2a9-057f-42b0-aed1-5473728c4a6d/manager/0.log" Feb 19 04:08:13.828941 master-0 kubenswrapper[33867]: I0219 04:08:13.827290 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-vs4pj_146322c9-d8f1-4aa5-af40-313a3226f9f0/manager/0.log" Feb 19 04:08:13.879999 master-0 kubenswrapper[33867]: I0219 04:08:13.879938 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-sfhmd_8379917d-eee7-433f-a617-e845e9d59f16/manager/0.log" Feb 19 04:08:13.982134 master-0 kubenswrapper[33867]: I0219 04:08:13.982077 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-m22fs_8e845974-687e-4f15-961b-edf71c7dc316/manager/0.log" Feb 19 04:08:14.137624 master-0 kubenswrapper[33867]: I0219 04:08:14.137196 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-cwblm_3412b3eb-21b6-4166-9a78-b7c73f91d708/manager/0.log" Feb 19 04:08:14.186665 master-0 kubenswrapper[33867]: I0219 
04:08:14.186618 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_57aa038311da35c3e4d00e227853e6b4/kube-apiserver/0.log" Feb 19 04:08:14.200482 master-0 kubenswrapper[33867]: I0219 04:08:14.200391 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_57aa038311da35c3e4d00e227853e6b4/kube-apiserver-cert-syncer/0.log" Feb 19 04:08:14.235498 master-0 kubenswrapper[33867]: I0219 04:08:14.235190 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_57aa038311da35c3e4d00e227853e6b4/kube-apiserver-cert-regeneration-controller/0.log" Feb 19 04:08:14.255924 master-0 kubenswrapper[33867]: I0219 04:08:14.255875 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_57aa038311da35c3e4d00e227853e6b4/kube-apiserver-insecure-readyz/0.log" Feb 19 04:08:14.274723 master-0 kubenswrapper[33867]: I0219 04:08:14.274334 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_57aa038311da35c3e4d00e227853e6b4/kube-apiserver-check-endpoints/0.log" Feb 19 04:08:14.291549 master-0 kubenswrapper[33867]: I0219 04:08:14.291317 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_57aa038311da35c3e4d00e227853e6b4/setup/0.log" Feb 19 04:08:14.946923 master-0 kubenswrapper[33867]: I0219 04:08:14.946867 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-zgxpw_b3da7145-0056-4bed-8e77-5a257550f8da/manager/0.log" Feb 19 04:08:14.968444 master-0 kubenswrapper[33867]: I0219 04:08:14.968105 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx_e3c70606-b8cd-4216-98e7-d73c7d31b443/manager/0.log" Feb 19 04:08:14.983469 master-0 kubenswrapper[33867]: I0219 04:08:14.983383 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/frr/0.log" Feb 19 04:08:14.995266 master-0 kubenswrapper[33867]: I0219 04:08:14.995188 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/reloader/0.log" Feb 19 04:08:15.000366 master-0 kubenswrapper[33867]: I0219 04:08:15.000323 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/frr-metrics/0.log" Feb 19 04:08:15.012309 master-0 kubenswrapper[33867]: I0219 04:08:15.011021 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/kube-rbac-proxy/0.log" Feb 19 04:08:15.018247 master-0 kubenswrapper[33867]: I0219 04:08:15.018137 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/kube-rbac-proxy-frr/0.log" Feb 19 04:08:15.032489 master-0 kubenswrapper[33867]: I0219 04:08:15.024523 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/cp-frr-files/0.log" Feb 19 04:08:15.036280 master-0 kubenswrapper[33867]: I0219 04:08:15.034213 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/cp-reloader/0.log" Feb 19 04:08:15.045688 master-0 kubenswrapper[33867]: I0219 04:08:15.044964 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rx68_2877ad48-bf75-4a75-b6ca-8f48f0ede5df/cp-metrics/0.log" Feb 19 04:08:15.056082 master-0 kubenswrapper[33867]: I0219 04:08:15.056049 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-n7lx6_22564019-4f1e-40cb-a6d2-b6ac86a13ca1/frr-k8s-webhook-server/0.log" Feb 19 04:08:15.082053 master-0 kubenswrapper[33867]: I0219 04:08:15.081988 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57d69997cd-bxnmk_48dfa1c5-695c-45aa-aca5-f01672f08790/manager/0.log" Feb 19 04:08:15.094402 master-0 kubenswrapper[33867]: I0219 04:08:15.094351 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-667b5d6768-wjdrc_becd4fad-b917-478c-83bf-0b5d0a6770f3/webhook-server/0.log" Feb 19 04:08:15.189193 master-0 kubenswrapper[33867]: I0219 04:08:15.185935 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6679bf9b57-l9rmk_94c9ea9e-a058-4483-a058-8de6dcaa7e12/operator/0.log" Feb 19 04:08:15.341680 master-0 kubenswrapper[33867]: I0219 04:08:15.341552 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/kube-rbac-proxy/0.log" Feb 19 04:08:15.357103 master-0 kubenswrapper[33867]: I0219 04:08:15.356992 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/manager/1.log" Feb 19 04:08:15.360237 master-0 kubenswrapper[33867]: I0219 04:08:15.360179 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhj9q_7012676e-f35d-46e5-83e8-a63172dd076e/manager/2.log" Feb 19 04:08:15.592961 master-0 kubenswrapper[33867]: I0219 04:08:15.592662 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-psdfl_ce9b802d-6caa-4b6e-9d4d-72b056257685/speaker/0.log" Feb 19 04:08:15.598769 master-0 kubenswrapper[33867]: I0219 04:08:15.598377 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-psdfl_ce9b802d-6caa-4b6e-9d4d-72b056257685/kube-rbac-proxy/0.log" Feb 19 04:08:16.064706 master-0 kubenswrapper[33867]: I0219 04:08:16.064624 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69ff7bc449-kgvls_a046d5fd-383b-4769-9912-a8ed83bf66a7/manager/0.log" Feb 19 04:08:16.101035 master-0 kubenswrapper[33867]: I0219 04:08:16.099285 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-x5zf7_803ccd3e-034b-4152-b2b5-2bf947bd84f0/registry-server/0.log" Feb 19 04:08:16.156278 master-0 kubenswrapper[33867]: I0219 04:08:16.154847 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-hv28k_3d0c427a-ffc4-4bea-a695-f1c50efb4c79/manager/0.log" Feb 19 04:08:16.181280 master-0 kubenswrapper[33867]: I0219 04:08:16.180749 33867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-67lp8_e2e7ed89-284a-4147-bcad-ec2520b9c64c/manager/0.log" Feb 19 04:08:16.197859 master-0 kubenswrapper[33867]: I0219 04:08:16.197805 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-t465n_eb5820e0-2241-4fc0-a7ae-e2eb51b08653/operator/0.log" Feb 19 04:08:16.218970 master-0 kubenswrapper[33867]: I0219 04:08:16.218909 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-hqd26_e9dacca4-e34c-4b78-97e3-c12b06b3738b/manager/0.log" Feb 19 04:08:16.229967 master-0 kubenswrapper[33867]: I0219 04:08:16.229924 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-bzt8g_116294f4-67e6-4af1-a23f-29012eeb2090/manager/0.log" Feb 19 04:08:16.237747 master-0 kubenswrapper[33867]: I0219 04:08:16.236790 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-dxk94_bd663e7b-0774-48b5-bf36-9b28f553c2f8/manager/0.log" Feb 19 04:08:16.247300 master-0 kubenswrapper[33867]: I0219 04:08:16.246846 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-k82hk_6776ed22-9e69-4556-b092-fc78542efe4a/manager/0.log" Feb 19 04:08:16.294376 master-0 kubenswrapper[33867]: I0219 04:08:16.292328 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-zsfln_37c694d5-497d-4aca-8e88-9ee5c9a7bcce/cert-manager-controller/0.log" Feb 19 04:08:16.317104 master-0 kubenswrapper[33867]: I0219 04:08:16.316974 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-tsxfz_50fab54f-3c0d-40ac-a0e3-c6a413e099de/cert-manager-cainjector/0.log" Feb 19 04:08:16.338286 master-0 kubenswrapper[33867]: I0219 04:08:16.338218 33867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-mcjb2_f5f24426-ab21-4736-9f97-71ec47becd17/cert-manager-webhook/0.log"